boundary 2

  • Poetics of Control

    a review of Alexander R. Galloway, The Interface Effect (Polity, 2012)

    by Bradley J. Fest

    ~

    This summer marks the twenty-fifth anniversary of the original French publication of Gilles Deleuze’s seminal essay, “Postscript on the Societies of Control” (1990). A strikingly powerful short piece, “Postscript” remains, even at this late date, one of the most poignant, prescient, and concise diagnoses of life in the overdeveloped digital world of the twenty-first century and the “ultrarapid forms of apparently free-floating control that are taking over from the old disciplines.”[1] A stylistic departure from much of Deleuze’s other writing in its clarity and straightforwardness, the essay describes a general transformation from the modes of disciplinary power that Michel Foucault famously analyzed in Discipline and Punish (1975) to “societies of control.” For Deleuze, the late twentieth century is characterized by “a general breakdown of all sites of confinement—prisons, hospitals, factories, schools, the family.”[2] The institutions that were formerly able to strictly organize time and space through perpetual surveillance—thereby, according to Foucault, fabricating the modern individual subject—have become fluid and modular, “continually changing from one moment to the next.”[3] Individuals have become “dividuals,” “dissolv[ed] . . . into distributed networks of information.”[4]

    Over the past decade, media theorist Alexander R. Galloway has extensively and rigorously elaborated on Deleuze’s suggestive pronouncements, probably devoting more pages in print to thinking about the “Postscript” than has any other single writer.[5] Galloway’s most important work in this regard is his first book, Protocol: How Control Exists after Decentralization (2004). If the figure for the disciplinary society was Jeremy Bentham’s panopticon, a machine designed to induce a sense of permanent visibility in prisoners (and, by extension, the modern subject), Galloway argues that the distributed network, and particularly the distributed network we call the internet, is an apposite figure for control societies. Rhizomatic and flexible, distributed networks historically emerged as an alternative to hierarchical, rigid, centralized (and decentralized) networks. But far from being chaotic and unorganized, the protocols that organize our digital networks have created “the most highly controlled mass media hitherto known. . . . While control used to be a law of society, now it is more like a law of nature. Because of this, resisting control has become very challenging indeed.”[6] To put it another way: if in 1980 Deleuze and Félix Guattari complained that “we’re tired of trees,” Galloway and philosopher Eugene Thacker suggest that today “we’re tired of rhizomes.”[7]

    The imperative to think through the novel challenges presented by control societies and the urgent need to develop new methodologies for engaging the digital realities of the twenty-first century are at the heart of The Interface Effect (2012), the final volume in a trio of works Galloway calls Allegories of Control.[8] Guiding the various inquiries in the book is his provocative claim that “we do not yet have a critical or poetic language in which to represent the control society.”[9] This is because there is an “unrepresentability lurking within information aesthetics” (86). This claim for unrepresentability, that what occurs with digital media is not representation per se, is The Interface Effect’s most significant departure from previous media theory. Rather than rehearse familiar media ecologies, Galloway suggests that “the remediation argument (handed down from McLuhan and his followers including Kittler) is so full of holes that it is probably best to toss it wholesale” (20). The Interface Effect challenges thinking about mimesis that would place computers at the end of a line of increasingly complex modes of representation, a line extending from Plato, through Erich Auerbach, Marshall McLuhan, and Friedrich Kittler, and terminating in Richard Grusin, Jay David Bolter, and many others. Rather than continue to understand digital media in terms of remediation and representation, Galloway emphasizes the processes of computational media, suggesting that the inability to productively represent control societies stems from misunderstandings about how to critically analyze and engage with the basic materiality of computers.

    The book begins with an introduction polemically positioning Galloway’s own media theory directly against Lev Manovich’s field-defining book, The Language of New Media (2001). Contra Manovich, Galloway stresses that digital media are not objects but actions. Unlike cinema, which he calls an ontology because it attempts to bring some aspect of the phenomenal world nearer to the viewer—film, echoing Oedipa Maas’s famous phrase, “projects worlds” (11)—computers involve practices and effects (what Galloway calls an “ethic”) because they are “simply on a world . . . subjecting it to various forms of manipulation, preemption, modeling, and synthetic transformation. . . . The matter at hand is not that of coming to know a world, but rather that of how specific, abstract definitions are executed to form a world” (12, 13, 23). Or to take two other examples Galloway uses to positive effect: the difference can be understood as that between language, which describes and represents, encoding a world, versus calculus, which does or simulates doing something to the world; calculus is a “system of reasoning, an executable machine” (22). Though Galloway does more in Gaming: Essays on Algorithmic Culture (2006) to fully develop a way of analyzing computational media that privileges action over representation, The Interface Effect theoretically grounds this important distinction between mimesis and action, description and process.[10] Further, it constitutes a bold methodological step away from some of the dominant ways of thinking about digital media that simultaneously offers its readers new ways to connect media studies more firmly to politics.

    Further distinguishing himself from writers like Manovich, Galloway says that there has been a basic misunderstanding regarding media and mediation, and that the two systems are “violently unconnected” (13). Galloway demonstrates, in contrast to such thinkers as Kittler, that there is an old line of thinking about mediation that can be traced very far back and that is not dependent on thinking about media as exclusively tied to nineteenth and twentieth century communications technology:

    Doubtless certain Greek philosophers had negative views regarding hypomnesis. Yet Kittler is reckless to suggest that the Greeks had no theory of mediation. The Greeks indubitably had an intimate understanding of the physicality of transmission and message sending (Hermes). They differentiated between mediation as immanence and mediation as expression (Iris versus Hermes). They understood the mediation of poetry via the Muses and their techne. They understood the mediation of bodies through the “middle loving” Aphrodite. They even understood swarming and networked presence (in the incontinent mediating forms of the Eumenides who pursued Orestes in order to “process” him at the procès of Athena). Thus we need only look a little bit further to shed this rather vulgar, consumer-electronics view of media, and instead graduate into the deep history of media as modes of mediation. (15)

    Galloway’s point here is that the larger contemporary discussion of mediation that he is pursuing in The Interface Effect should not be restricted merely to the digital artifacts that have occasioned so much recent theoretical activity, and that there is an urgent need for deeper histories of mediation. Though the book appears to be primarily concerned with the twentieth and twenty-first centuries, this gesture toward the Greeks signals the important work of historicization that distinguishes much of Galloway’s work. In “Love of the Middle” (2014), for example, which appears in the book Excommunication (2014), co-authored with Thacker and McKenzie Wark, Galloway fully develops a rigorous reading of Greek mediation, suggesting that in the Eumenides, or what the Romans called the Furies, resides a notable historical precursor for understanding the mediation of distributed networks.[11]

    In The Interface Effect these larger efforts at historicization allow Galloway to always understand “media as modes of mediation,” and consequently his big theoretical step involves claiming that “an interface is not a thing, an interface is an effect. It is always a process or a translation” (33). There are a variety of positive implications for the study of media understood as modes of mediation, as a study of interface effects. Principal amongst these are the rigorous methodological possibilities Galloway’s focus emphasizes.

    In this, methodologically and otherwise, Galloway’s work in The Interface Effect resembles and extends that of his teacher Fredric Jameson, particularly the kind of work found in The Political Unconscious (1981). Following Jameson’s emphasis on the “poetics of social forms,” Galloway’s goal is “not to reenact the interface, much less to ‘define’ it, but to identify the interface itself as historical. . . . This produces . . . a perspective on how cultural production and the socio-historical situation take form as they are interfaced together” (30). The Interface Effect firmly ties the cultural to the social, economic, historical, and political, finding in a variety of locations ways that interfaces function as allegories of control. “The social field itself constitutes a grand interface, an interface between subject and world, between surface and source, and between critique and the objects of criticism. Hence the interface is above all an allegorical device that will help us gain some perspective on culture in the age of information” (54). The power of looking at the interface as an allegorical device, as a “control allegory” (30), is demonstrated throughout the book’s relatively wide-ranging analyses of various interface effects.

    Chapter 1, “The Unworkable Interface,” historicizes some twentieth century transformations of the interface, concisely summarizing a history of mediation by moving from Norman Rockwell’s “Triple Self-Portrait” (1960), through Mad Magazine’s satirization of Rockwell, to World of Warcraft (2004-2015). Viewed from the level of the interface, with all of its nondiegetic menus and icons and the ways it erases the line between play and labor, Galloway demonstrates both here and in the last chapter that World of Warcraft is a powerful control allegory: “it is not an avant-garde image, but, nevertheless, it firmly delivers an avant-garde lesson in politics” (44).[12] Further exemplifying the importance of historicizing interfaces, Chapter 2 continues to demonstrate the value of approaching interface effects allegorically. Galloway finds “a formal similarity between the structure of ideology and the structure of software” (55), arguing that software “is an allegorical figure for the way in which . . . political and social realities are ‘resolved’ today: not through oppression or false consciousness . . . but through the ruthless rule of code” (76). Chapter 4 extends such thinking toward a masterful reading of the various mediations at play in a show such as 24 (2001-2010, 2014), arguing that 24 is political not because of its content but “because the show embodies in its formal technique the essential grammar of the control society, dominated as it is by specific network and informatic logics” (119). In short, The Interface Effect continually demonstrates the potent critical tools approaching mediation as allegory can provide, reaffirming the importance of a Jamesonian approach to cultural production in the digital age.

    Whether or not readers are convinced, however, by Galloway’s larger reworking of the field of digital media studies, his emphasis on attending to contemporary cultural artifacts as allegories of control, or his call in the book’s conclusion for a politics of “whatever being” probably depends upon their thoughts about the unrepresentability of today’s global networks in Chapter 3, “Are Some Things Unrepresentable?” His answer to the chapter’s question is, quite simply, “Yes.” Attempts to visualize the World Wide Web only result in incoherent repetition: “every map of the internet looks the same,” and as a result “no poetics is possible in this uniform aesthetic space” (85). In the face of such an aesthetic regime, what Jacques Rancière calls a “distribution of the sensible,”[13] Galloway argues:

    The point is not so much to call for a return to cognitive mapping, which of course is of the highest importance, but to call for a poetics as such for this mysterious new machinic space. . . . Today’s systemics have no contrary. Algorithms and other logical structures are uniquely, and perhaps not surprisingly, monolithic in their historical development. There is one game in town: a positivistic dominant of reductive, systemic efficiency and expediency. Offering a counter-aesthetic in the face of such systematicity is the first step toward building a poetics for it, a language of representability adequate to it. (99)

    There are, to my mind, two ways of responding to Galloway’s call for a poetics as such in the face of the digital realities of contemporaneity.

    On the one hand, I am tempted to agree with him. Galloway is clearly signaling his debt to some of Jameson’s more important large claims and is reviving the need “to think the impossible totality of the contemporary world system,” what Jameson once called the “technological” or “postmodern sublime.”[14] But Galloway is also signaling the importance of poesis for this activity. Not only is Jamesonian “cognitive mapping” necessary, but the totality of twenty-first century digital networks requires new imaginative activity, a counter-aesthetics commensurate with informatics. This is an immensely attractive position, at least to me, as it preserves a space for poetic, avant-garde activity, and indeed, demands that, all evidence to the contrary, the imagination still have an important role to play in the face of societies of control. (In other words, there may be some “humanities” left in the “digital humanities.”[15]) Rather than suggesting that the imagination has been utterly foreclosed by the cultural logic of late capitalism—that we can no longer imagine any other world, that it is easier to imagine the end of the world than a better one—Galloway says that there must be a reinvestment in the imagination, in poetics as such, that will allow us to better represent, understand, and intervene in societies of control (though not necessarily to imagine a better world; more on this below). Given the present landscape, how could one not be attracted to such a position?

    On the other hand, Galloway’s argument hinges on his claim that such a poetics has not emerged and, as Patrick Jagoda and others have suggested, one might merely point out that such a claim is demonstrably false.[16] Though I hope I hardly need to list some of the significant cultural products across a range of media that have appeared over the last fifteen years that critically and complexly engage with the realities of control (e.g., The Wire [2002-08]), it is not radical to suggest that art engaged with pressing contemporary concerns has appeared and will continue to appear, that there are a variety of significant artists who are attempting to understand, represent, and cope with the distributed networks of contemporaneity. One could obviously suggest Galloway’s argument is largely rhetorical, a device to get his readers to think about the different kinds of poesis control societies, distributed networks, and interfaces call for, but this blanket statement threatens to shut down some of the vibrant activity that is going on all over the world commenting upon the contemporary situation. In other words, yes we need a poetics of control, but why must the need for such a poetics hinge on the claim that there has not yet emerged “a critical or poetic language in which to represent the control society”? Is not Galloway’s own substantial, impressive, and important decade-long intellectual project proof that people have developed a critical language that is capable of representing the control society? I would certainly answer in the affirmative.

    There are some other rhetorical choices in the conclusion of The Interface Effect that, though compelling, deserve to be questioned, or at least highlighted. I am referring to Galloway’s penchant—following another one of his teachers at Duke, Michael Hardt—for invoking a Bartlebian politics, what Galloway calls “whatever being,” as an appropriate response to present problems.[17] In Hardt and Antonio Negri’s Empire (2000), in the face of the new realities of late capitalism—the multitude, the management of hybridities, the non-place of Empire, etc.—they propose that Herman Melville’s “Bartleby in his pure passivity and his refusal of any particulars presents us with a figure of generic being, being as such, being and nothing more. . . . This refusal certainly is the beginning of a liberatory politics, but it is only a beginning.”[18] Bartleby, with his famous response of “‘I would prefer not to,’”[19] has been frequently invoked by such substantial figures as Giorgio Agamben in the 1990s and Slavoj Žižek in the 2000s (following Hardt and Negri). Such thinkers have frequently theorized Bartleby’s passive negativity as a potentially radical political position, and perhaps the only one possible in the face of global economic realities.[20] (And indeed, it is easy enough to read, say, Occupy Wall Street as a Bartlebian political gesture.) Galloway’s response to the affective post-Fordist labor of digital networks, that “each and every day, anyone plugged into a network is performing hour after hour of unpaid micro labor” (136), is similarly to withdraw, to “demilitarize being. Stand down. Cease participating” (143).

    Like Hardt and Negri and so many others, Galloway’s “whatever being” is a response to the failures of twentieth century emancipatory politics. He writes:

    We must stress that it is not the job of politics to invent a new world. On the contrary it is the job of politics to make all these new worlds irrelevant. . . . It is time now to subtract from this world, not add to it. The challenge today is not one of political or moral imagination, for this problem was solved ages ago—kill the despots, surpass capitalism, inclusion of the excluded, equality for all of humanity, end exploitation. The world does not need new ideas. The challenge is simply to realize what we already know to be true. (138-39)

    And thus the tension of The Interface Effect is between this call for withdrawal, to work with what there is, to exploit protocological possibility, etc., and the call for a poetics of control, a poesis capable of representing control societies, which to my mind implies imagination (and thus, inevitably, something different, if not new). If there is anything wanting about the book, it is its lack of clarity about how these two critical projects are connected (or indeed, if they are perhaps the same thing!). Further, it is not always clear what exactly Galloway means by “poetics” or how a need for a poetics corresponds to the book’s emphasis on understanding mediation as process over representation, action over objects. This lack of clarity may be due in part to the fact that, as Galloway indicates in his most recent work, Laruelle: Against the Digital (2014), there is some necessary theorization that he needs to do before he can adequately address the digital head-on. As he writes in the conclusion to that book: “The goal here has not been to elucidate, promote, or disparage contemporary digital technologies, but rather to draft a simple prolegomenon for future writing on digitality and philosophy.”[21] In other words, it seems that Allegories of Control, The Exploit: A Theory of Networks (2007), and Laruelle may constitute the groundwork for an even more ambitious confrontation with the digital, one where the kinds of tensions just noted might dissolve. As such, perhaps the reinvocation of a Bartlebian politics of withdrawal at the end of The Interface Effect is merely a kind of stop-gap, a place-holder before a more coherent poetics of control can emerge (as seems to be the case for the Hardt and Negri of Empire). Although contemporary theorists frequently invoke Bartleby, he remains a rather uninspiring figure.

    These criticisms aside, however, Galloway’s conclusion of the larger project that is Allegories of Control reveals him to be a consistently accessible and powerful guide to the control society and the digital networks of the twenty-first century. If the new directions in his recent work are any indication, and Laruelle is merely a prolegomenon to future projects, then we should perhaps not despair at all about the present lack of a critical language for representing control societies.

    _____

    Bradley J. Fest teaches literature at the University of Pittsburgh. At present he is working on The Nuclear Archive: American Literature Before and After the Bomb, a book investigating the relationship between nuclear and information technology in twentieth and twenty-first century American literature. He has published articles in boundary 2, Critical Quarterly, and Studies in the Novel; and his essays have appeared in David Foster Wallace and “The Long Thing” (2014) and The Silence of Fallout (2013). The Rocking Chair, his first collection of poems, is forthcoming from Blue Sketch Press. He blogs at The Hyperarchival Parallax.

    _____

    [1] Though best-known in the Anglophone world via the translation that appeared in 1992 in October as “Postscript on the Societies of Control,” the piece appears as “Postscript on Control Societies,” in Gilles Deleuze, Negotiations: 1972-1990, trans. Martin Joughin (New York: Columbia University Press, 1995), 178. For the original French see Gilles Deleuze, “Post-scriptum sur des sociétés de contrôle,” in Pourparlers, 1972-1990 (Paris: Les Éditions de Minuit, 1990), 240-47. The essay originally appeared as “Les sociétés de contrôle,” L’Autre Journal, no. 1 (May 1990). Further references are to the Negotiations version.

    [2] Ibid.

    [3] Ibid., 179.

    [4] Alexander R. Galloway, Protocol: How Control Exists after Decentralization (Cambridge, MA: MIT Press, 2004), 12n18.

    [5] In his most recent book, Galloway even goes so far as to ask about the “Postscript”: “Could it be that Deleuze’s most lasting legacy will consist of 2,300 words from 1990?” (Alexander R. Galloway, Laruelle: Against the Digital [Minneapolis: University of Minnesota Press, 2014], 96, emphases in original). For Andrew Culp’s review of Laruelle for The b2 Review, see “From the Decision to the Digital.”

    [6] Galloway, Protocol, 147.

    [7] Gilles Deleuze and Félix Guattari, A Thousand Plateaus: Capitalism and Schizophrenia, trans. Brian Massumi (Minneapolis: University of Minnesota Press, 1987), 15; and Alexander R. Galloway and Eugene Thacker, The Exploit: A Theory of Networks (Minneapolis: University of Minnesota Press, 2007), 153. For further discussions of networks see Alexander R. Galloway, “Networks,” in Critical Terms for Media Studies, ed. W. J. T. Mitchell and Mark B. N. Hansen (Chicago: University of Chicago Press), 280-96.

    [8] The other books in the trilogy include Protocol and Alexander R. Galloway, Gaming: Essays on Algorithmic Culture (Minneapolis: University of Minnesota Press, 2006).

    [9] Alexander R. Galloway, The Interface Effect (Malden, MA: Polity, 2012), 98. Hereafter, this work is cited parenthetically.

    [10] See especially Galloway’s masterful first chapter of Gaming, “Gamic Action, Four Moments,” 1-38. To my mind, this is one of the best primers for critically thinking about videogames, and it does much to fundamentally ground the study of videogames in action (rather than, as had previously been the case, in either ludology or narratology).

    [11] See Alexander R. Galloway, “Love of the Middle,” in Excommunication: Three Inquiries in Media and Mediation, by Alexander R. Galloway, Eugene Thacker, and McKenzie Wark (Chicago: University of Chicago Press, 2014), 25-76.

    [12] This is also something he touched on in his remarkable reading of Donald Rumsfeld’s famous “unknown unknowns.” See Alexander R. Galloway, “Warcraft and Utopia,” Ctheory.net (16 February 2006). For a discussion of labor in World of Warcraft, see David Golumbia, “Games Without Play,” in “Play,” special issue, New Literary History 40, no. 1 (Winter 2009): 179-204.

    [13] See the following by Jacques Rancière: The Politics of Aesthetics: The Distribution of the Sensible, trans. Gabriel Rockhill (New York: Continuum, 2004), and “Are Some Things Unrepresentable?” in The Future of the Image, trans. Gregory Elliott (New York: Verso, 2007), 109-38.

    [14] Fredric Jameson, Postmodernism; or, the Cultural Logic of Late Capitalism (Durham, NC: Duke University Press, 1991), 38.

    [15] For Galloway’s take on the digital humanities more generally, see his “Everything Is Computational,” Los Angeles Review of Books (27 June 2013), and “The Cybernetic Hypothesis,” differences 25, no. 1 (Spring 2014): 107-31.

    [16] See Patrick Jagoda, introduction to Network Aesthetics (Chicago: University of Chicago Press, forthcoming 2015).

    [17] Galloway’s “whatever being” is derived from Giorgio Agamben, The Coming Community, trans. Michael Hardt (Minneapolis: University of Minnesota Press, 1993).

    [18] Michael Hardt and Antonio Negri, Empire (Cambridge, MA: Harvard University Press, 2000), 203, 204.

    [19] Herman Melville, “Bartleby, The Scrivener: A Story of Wall-street,” in Melville’s Short Novels, critical ed., ed. Dan McCall (New York: W. W. Norton, 2002), 10.

    [20] See Giorgio Agamben, “Bartleby, or On Contingency,” in Potentialities: Collected Essays in Philosophy, trans. and ed. Daniel Heller-Roazen (Stanford: Stanford University Press, 1999), 243-71; and see the following by Slavoj Žižek: Iraq: The Borrowed Kettle (New York: Verso, 2004), esp. 71-73, and The Parallax View (New York: Verso, 2006), esp. 381-85.

    [21] Galloway, Laruelle, 220.

  • Men (Still) Explain Technology to Me: Gender and Education Technology

    By Audrey Watters
    ~

    Late last year, I gave a similarly titled talk—“Men Explain Technology to Me”—at the University of Mary Washington. (I should note here that the slides for that talk were based on a couple of blog posts by Mallory Ortberg that I found particularly funny, “Women Listening to Men in Art History” and “Western Art History: 500 Years of Women Ignoring Men.” I wanted to do something similar with my slides today: find historical photos of men explaining computers to women. Mostly I found pictures of men or women working separately, working in isolation. Mostly pictures of men and computers.)

    Men Explain Technology

    So that University of Mary Washington talk: It was the last talk I delivered in 2014, and I did so with a sigh of relief, but also more than a twinge of frightened nausea—nausea that wasn’t nerves from speaking in public. I’d had more than a year full of public speaking under my belt—exhausting enough as I always try to write new talks for each event, but a year that had become complicated quite frighteningly in part by an ongoing campaign of harassment against women on the Internet, particularly those who worked in video game development.

    Known as “GamerGate,” this campaign had reached a crescendo of sorts in the lead-up to my talk at UMW, some of its hate aimed at me because I’d written about the subject, demanding that those in ed-tech pay attention and speak out. So no surprise, all this colored how I shaped that talk about gender and education technology, because, of course, my gender shapes how I experience working in and working with education technology. As I discussed then at the University of Mary Washington, I have been on the receiving end of threats and harassment for stories I’ve written about ed-tech—almost all the women I know who have a significant online profile have in some form or another experienced something similar. According to a Pew Research survey last year, one in five Internet users reports being harassed online. But GamerGate felt—feels—particularly unhinged. The death threats to Anita Sarkeesian, Zoe Quinn, Brianna Wu, and others were—are—particularly real.

    I don’t really want to rehash all of that here today, particularly my experiences being on the receiving end of the harassment; I really don’t. You can read a copy of that talk from last November on my website. I will say this: GamerGate supporters continue to argue that their efforts are really about “ethics in journalism” not about misogyny, but it’s quite apparent that they have sought to terrorize feminists and chase women game developers out of the industry. Insisting that video games and video game culture retain a certain puerile machismo, GamerGate supporters often chastise those who seek to change the content of video games, change the culture to reflect the actual demographics of video game players. After all, a recent industry survey found women 18 and older represent a significantly greater portion of the game-playing population (36%) than boys age 18 or younger (17%). Just over half of all gamers are men (52%); that means just under half are women. Yet those who want video games to reflect these demographics are dismissed by GamerGate as “social justice warriors.” Dismissed. Harassed. Shouted down. Chased out.

    And yes, more mildly perhaps, the verb that grew out of Rebecca Solnit’s wonderful essay “Men Explain Things to Me” and the inspiration for the title to this talk, mansplained.

    Solnit first wrote that essay back in 2008 to describe her experiences as an author—and as such, an expert on certain subjects—whereby men would presume she was in need of their enlightenment and information—in her words “in some sort of obscene impregnation metaphor, an empty vessel to be filled with their wisdom and knowledge.” She related several incidents in which men explained to her topics on which she’d published books. She knew things, but the presumption was that she was uninformed. Since her essay was first published the term “mansplaining” has become quite ubiquitous, used to describe the particular online version of this—of men explaining things to women.

    I experience this a lot. And while the threats and harassment in my case are rare but debilitating, the mansplaining is more insidious. It is overpowering in a different way. “Mansplaining” is a micro-aggression, a practice of undermining women’s intelligence, their contributions, their voice, their experiences, their knowledge, their expertise; and frankly once these pile up, these mansplaining micro-aggressions, they undermine women’s feelings of self-worth. Women begin to doubt what they know, doubt what they’ve experienced. And then, in turn, women decide not to say anything, not to speak.

    I speak from experience. On Twitter, I have almost 28,000 followers, most of whom follow me, I’d wager, because from time to time I say smart things about education technology. Yet regularly, men—strangers, typically, but not always—jump into my “@-mentions” to explain education technology to me. To explain open source licenses or open data or open education or MOOCs to me. Men explain learning management systems to me. Men explain the history of education technology to me. Men explain privacy and education data to me. Men explain venture capital funding of education startups to me. Men explain the business of education technology to me. Men explain blogging and journalism and writing to me. Men explain online harassment to me.

    The problem isn’t just that men explain technology to me. It isn’t just that a handful of men explain technology to the rest of us. It’s that this explanation tends to foreclose questions we might have about the shape of things. We can’t ask because if we show the slightest intellectual vulnerability, our questions—we ourselves—lose a sort of validity.

    Yet we are living in a moment, I would contend, when we must ask better questions of technology. We neglect to do so at our own peril.

    Last year when I gave my talk on gender and education technology, I was particularly frustrated by the mansplaining to be sure, but I was also frustrated that those of us who work in the field had remained silent about GamerGate, and more broadly about all sorts of issues relating to equity and social justice. Of course, I do know firsthand that it can be difficult if not dangerous to speak out, to talk and write critically about GamerGate, for example. But refusing to look at some of the most egregious acts all too easily means ignoring some of the more subtle ways in which marginalized voices are made to feel uncomfortable, unwelcome online. Because GamerGate is really just one manifestation of deeper issues—structural issues—with society, culture, technology. It's wrong to focus on just a few individual bad actors or on a terrible Twitter hashtag and ignore the systemic problems. We must consider who else is being chased out and silenced, not simply from the video game industry but from the technology industry and a technological world writ large.

    I know I have to come right out and say it, because very few people in education technology will: there is a problem with computers. Culturally. Ideologically. There's a problem with the internet. Largely designed by men from the developed world, it is built for men of the developed world. Men of science. Men of industry. Military men. Venture capitalists. Despite all the hype and hope about revolution and access and opportunity that these new technologies will provide us, they do not negate hierarchy, history, privilege, power. They reflect them. They channel them. They concentrate them, in new ways and in old.

    I want us to consider these bodies, their ideologies and how all of this shapes not only how we experience technology but how it gets designed and developed as well.

    There’s that very famous New Yorker cartoon: “On the internet, nobody knows you’re a dog.” The cartoon was first published in 1993, and it demonstrates this sense that we have long had that the Internet offers privacy and anonymity, that we can experiment with identities online in ways that are severed from our bodies, from our material selves and that, potentially at least, the internet can allow online participation for those denied it offline.

    Perhaps, yes.

    But sometimes when folks on the internet discover “you’re a dog,” they do everything in their power to put you back in your place, to remind you of your body. To punish you for being there. To hurt you. To threaten you. To destroy you. Online and offline.

    Neither the internet nor computer technology writ large is a place where we can escape the materiality of our physical worlds—bodies, institutions, systems—as much as that New Yorker cartoon joked that we might. In fact, I want to argue quite the opposite: that computer and Internet technologies actually re-inscribe our material bodies, the power and the ideology of gender and race and sexual identity and national identity. They purport to be ideology-free and identity-less, but they are not. If identity is unmarked, it's because there's a presumption of maleness, whiteness, and perhaps even a certain California-ness. As my friend Tressie McMillan Cottom writes, in ed-tech we're all supposed to be "roaming autodidacts": happy with school, happy with learning, happy and capable and motivated and well-networked, with functioning computers and WiFi that works.

    By and large, all of this reflects who is driving the conversation about, if not the development of, these technologies. Who is seen as building technologies. Who some think should build them; who some think have always built them.

    And that right there is already a process of erasure, a different sort of mansplaining one might say.

    Last year, when Walter Isaacson was doing the publicity circuit for his latest book, The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution (2014), he'd often relate how his teenage daughter had written an essay about Ada Lovelace, a figure Isaacson admitted he'd never heard of before. Sure, he'd written biographies of Steve Jobs and Albert Einstein and Benjamin Franklin and other important male figures in science and technology, but the name and the contributions of this woman were entirely unknown to him. Ada Lovelace, daughter of Lord Byron and the woman whose notes on Charles Babbage's proto-computer, the Analytical Engine, are now recognized as making her the world's first computer programmer. Ada Lovelace, the author of the world's first computer algorithm. Ada Lovelace, the person at the very beginning of the field of computer science.

    Augusta Ada King, Countess of Lovelace, now popularly known as Ada Lovelace, in a painting by Alfred Edward Chalon (image source: Wikipedia)

    “Ada Lovelace defined the digital age,” Isaacson said in an interview with The New York Times. “Yet she, along with all these other women, was ignored or forgotten.” (Actually, the world has been celebrating Ada Lovelace Day since 2009.)

    Isaacson’s book describes Lovelace like this: “Ada was never the great mathematician that her canonizers claim…” and “Ada believed she possessed special, even supernatural abilities, what she called ‘an intuitive perception of hidden things.’ Her exalted view of her talents led her to pursue aspirations that were unusual for an aristocratic woman and mother in the early Victorian age.” The implication: she was a bit of an interloper.

    A few other women populate Isaacson's The Innovators: Grace Hopper, who wrote the first computer compiler and whose work laid the groundwork for the programming language COBOL. Isaacson describes her as "spunky," not an adjective that I imagine would be applied to a male engineer. He also talks about the six women who helped program the ENIAC, the first electronic general-purpose computer. Their names, because we need to say these things out loud more often: Jean Jennings, Marilyn Wescoff, Ruth Lichterman, Betty Snyder, Frances Bilas, Kay McNulty. (I say that having visited Bletchley Park, where civilian women's involvement has been erased, as they were forbidden, thanks to classified government secrets, from talking about their involvement in the cryptography and computing efforts there.)

    In the end, it's hard to read Isaacson's book without coming away thinking that, other than a few notable exceptions, the history of computing is the history of men, white men. The book mentions educator Seymour Papert in passing, for example, but assigns the development of Logo, a programming language for children, to him alone. No mention of the others involved: Daniel Bobrow, Wally Feurzeig, and Cynthia Solomon.

    Even a book that purports to reintroduce the contributions of those forgotten "innovators," that says it wants to complicate the story of a few male inventors of technology by looking at collaborators and groups, still in the end tells a story that ignores if not undermines women. Men explain the history of computing, if you will. In doing so, it tells a story that depicts and reflects a culture that doesn't simply forget but systematically alienates women. Women are a rediscovery project, always having to be reintroduced, found, rescued. There's been very little reflection upon that fact—in Isaacson's book or in the tech industry writ large.

    This matters not just for the history of technology but for technology today. And it matters for ed-tech as well. (Unless otherwise noted, the following data comes from diversity self-reports issued by the companies in 2014.)

    • Currently, fewer than 20% of computer science degrees in the US are awarded to women. (I don't know if it's different in the UK.) It's a number that has actually fallen over the past few decades from a high of 37% in 1983. Computer science is the only field in science, engineering, and mathematics in which the number of women receiving bachelor's degrees has fallen in recent years. And when it comes to the employment, not just the education, of women in the tech sector, the statistics are not much better. (source: NPR)
    • 70% of Google employees are male. 61% are white and 30% Asian. Of Google's "technical" employees, 83% are male. 60% of those are white and 34% are Asian.
    • 70% of Apple employees are male. 55% are white and 15% are Asian. 80% of Apple’s “technical” employees are male.
    • 69% of Facebook employees are male. 57% are white and 34% are Asian. 85% of Facebook’s “technical” employees are male.
    • 70% of Twitter employees are male. 59% are white and 29% are Asian. 90% of Twitter’s “technical” employees are male.
    • Only 2.7% of startups that received venture capital funding between 2011 and 2013 had women CEOs, according to one survey.
    • And of course, Silicon Valley was recently embroiled in a sexual discrimination trial involving the storied VC firm Kleiner Perkins Caufield & Byers, filed by former executive Ellen Pao, who claimed that men at the firm were paid more and promoted more easily than women. Welcome neither as investors nor entrepreneurs nor engineers, it's hardly a surprise that, as The Los Angeles Times recently reported, women are leaving the tech industry "in droves."

    This doesn’t just matter because computer science leads to “good jobs” or that tech startups lead to “good money.” It matters because the tech sector has an increasingly powerful reach in how we live and work and communicate and learn. It matters ideologically. If the tech sector drives out women, if it excludes people of color, that matters for jobs, sure. But it matters in terms of the projects undertaken, the problems tackled, the “solutions” designed and developed.

    So it's probably worth asking what the demographics look like for education technology companies. What percentage of those building ed-tech software are men, for example? What percentage are white? What percentage of ed-tech startup engineers are men? Across the field, what percentage of education technologists—instructional designers, campus IT, sysadmins, CTOs, CIOs—are men? What percentage of "education technology leaders" are men? What percentage of education technology consultants? What percentage of those on the education technology speaking circuit? What percentage of those developing, not just implementing, these tools?

    And how do these bodies shape what gets built? How do they shape how the “problem” of education gets “fixed”? How do privileges, ideologies, expectations, values get hard-coded into ed-tech? I’d argue that they do in ways that are both subtle and overt.

    That word "privilege," for example, has an interesting dual meaning. We use it to refer to the advantages that are afforded to some people and not to others: male privilege, white privilege. But when it comes to tech, we make that advantage explicit. We actually embed that status into the software's processes. "Privileges" in tech refer to which users have the ability to use or control certain features of a piece of software. Administrator privileges. Teacher privileges. (Students rarely have privileges in ed-tech. Food for thought.)
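    That dual meaning is easy to see in code. Here is a minimal, hypothetical sketch of how software privileges typically work (the role names and permissions are invented for illustration, not drawn from any real LMS): status is literally a table mapping roles to allowed actions.

```python
# A minimal, hypothetical sketch of how software "privileges" encode status.
# Role names and permissions are illustrative, not from any real system.

PRIVILEGES = {
    "administrator": {"create_course", "delete_course", "grade", "post", "read"},
    "teacher": {"grade", "post", "read"},
    "student": {"post", "read"},  # students rarely get more than this
}

def can(role, action):
    """Return True if the given role is privileged to perform the action."""
    return action in PRIVILEGES.get(role, set())

print(can("teacher", "grade"))  # True
print(can("student", "grade"))  # False
```

    Who appears in that table, and with what powers, is a design decision made long before any student logs in.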

    Or take how discussion forums operate. Discussion forums, now quite common in ed-tech tools—in learning management systems (VLEs as you call them), in MOOCs, for example—often trace their history back to the earliest Internet bulletin boards. But even before then, education technologies like PLATO, a programmed instruction system developed at the University of Illinois beginning in 1960, offered chat and messaging functionality. (How education technology's contributions to tech are erased from tech history is, alas, a different talk.)

    One of the new features that many discussion forums boast: the ability to vote up or vote down certain topics. Ostensibly this means that "the best" ideas surface to the top—the best ideas, the best questions, the best answers. What it means in practice is often something else entirely. In part this is because the voting power on these sites is concentrated in the hands of the few, the most active, the most engaged. And no surprise, "the few" here is overwhelmingly male. Reddit, which calls itself "the front page of the Internet" and is the model for this sort of voting process, is roughly 84% male. I'm not sure that MOOCs, which have adopted Reddit's model of voting on comments, can boast a much better ratio of male to female participation.
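    The dynamic is simple to demonstrate. In this illustrative toy simulation (all numbers invented, not any site's actual data), a community of a hundred users exists, but only a handful vote; the "top" topic is simply what that handful prefers.

```python
# Toy simulation: when a small subset of users casts most of the votes,
# the "best" content is simply what that subset prefers. Numbers invented.

from collections import Counter

def rank_by_votes(votes):
    """Rank topic ids by net upvotes, highest first.

    votes: iterable of (topic_id, +1 or -1) pairs.
    """
    tally = Counter()
    for topic, value in votes:
        tally[topic] += value
    return [topic for topic, _ in tally.most_common()]

# 100 users exist, but only 5 highly active users vote, all favoring topic "a".
active_votes = [("a", +1)] * 5 + [("b", +1)] * 2
print(rank_by_votes(active_votes))  # ['a', 'b']
```

    The ranking algorithm is "neutral"; the outcome is not, because the input never was.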

    What happens when the most important topics—based on up-voting—are decided by a small group? As D. A. Banks has written about this issue,

    Sites like Reddit will remain structurally incapable of producing non-hegemonic content because the “crowd” is still subject to structural oppression. You might choose to stay within the safe confines of your familiar subreddit, but the site as a whole will never feel like yours. The site promotes mundanity and repetition over experimentation and diversity by presenting the user with a too-accurate picture of what appeals to the entrenched user base. As long as the “wisdom of the crowds” is treated as colorblind and gender neutral, the white guy is always going to be the loudest.

    How much does education technology treat its users similarly? Whose questions surface to the top of discussion forums in the LMS (the VLE), in the MOOC? Who is the loudest? Who is explaining things in MOOC forums?

    Ironically (bitterly ironically, I'd say), many pieces of software today increasingly promise "personalization," but in reality they present us with a very restricted, restrictive set of choices of who we "can be" and how we can interact, both with our own data and content and with other people. Gender, for example, is often a drop-down menu where one can choose either "male" or "female." Software might ask for a first and last name, something that is complicated if you have multiple family names (as some Spanish-speaking people do) or if your family name comes first (as names in China are ordered). Your name is presented how the software engineers and designers deemed fit: sometimes first name, sometimes title and last name, typically with a profile picture. Changing your username—after marriage or divorce, for example—is often incredibly challenging, if not impossible.
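    You can see the template in the data model itself. This hypothetical signup schema (the field names and validation are invented for illustration) makes the constraints explicit: identity is whatever fits the fields the engineers defined, and anything else is rejected.

```python
# Hypothetical signup schema: identity reduced to whatever fits the fields.
# Field names and rules invented for illustration.

GENDER_CHOICES = ("male", "female")  # a drop-down menu, not a description

def register(first_name, last_name, gender):
    """Validate a new user against the template; anything else is rejected."""
    if gender not in GENDER_CHOICES:
        raise ValueError("gender must be one of: " + ", ".join(GENDER_CHOICES))
    # A single first/last pair: mononyms, multiple family names, and
    # family-name-first orderings all get forced into this one shape.
    return {"display_name": f"{first_name} {last_name}", "gender": gender}

print(register("Ada", "Lovelace", "female")["display_name"])  # Ada Lovelace
```

    Every `ValueError` here is a person the template did not anticipate.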

    You get to interact with others, similarly, based on the processes that the engineers have determined and designed. On Twitter, for example, you cannot direct message people who do not follow you. All interactions must be 140 characters or less.

    This restriction of the presentation and performance of one’s identity online is what “cyborg anthropologist” Amber Case calls the “templated self.” She defines this as “a self or identity that is produced through various participation architectures, the act of producing a virtual or digital representation of self by filling out a user interface with personal information.”

    Case provides some examples of templated selves:

    Facebook and Twitter are examples of the templated self. The shape of a space affects how one can move, what one does and how one interacts with someone else. It also defines how influential and what constraints there are to that identity. A more flexible, but still templated space is WordPress. A hand-built site is much less templated, as one is free to fully create their digital self in any way possible. Those in Second Life play with and modify templated selves into increasingly unique online identities. MySpace pages are templates, but the lack of constraints can lead to spaces that are considered irritating to others.

    As we—all of us, but particularly teachers and students—move to spend more and more time and effort performing our identities online, being forced to use preordained templates constrains us, rather than—as we have often been told about the Internet—lets us be anyone or say anything online. On the Internet no one knows you’re a dog unless the signup process demanded you give proof of your breed. This seems particularly important to keep in mind when we think about students’ identity development. How are their identities being templated?

    While Case’s examples point to mostly “social” technologies, education technologies are also “participation architectures.” Similarly they produce and restrict a digital representation of the learner’s self.

    Who is building the template? Who is engineering the template? Who is there to demand the template be cracked open? What will the template look like if we’ve chased women and people of color out of programming?

    It's far too simplistic to say "everyone learn to code" is the best response to the questions I've raised here. "Change the ratio." "Fix the leaky pipeline." Nonetheless, I'm speaking to a group of educators here. I'm probably supposed to say something about what we can do, right, to make ed-tech more just, not simply condemn the narratives that lead us down a path that makes it less so. What can we do to resist all this hard-coding? What can we do to subvert it? What can we do to make technologies that our students—all our students, all of us—can wield? What can we do to make sure that when we say "your assignment involves the Internet" we haven't triggered half the class with fears of abuse, harassment, exposure, rape, death? What can we do to make sure that when we ask our students to discuss things online, the very infrastructure of the technology we use doesn't privilege certain voices in certain ways?

    The answer can’t simply be to tell women to not use their real name online, although as someone who started her career blogging under a pseudonym, I do sometimes miss those days. But if part of the argument for participating in the open Web is that students and educators are building a digital portfolio, are building a professional network, are contributing to scholarship, then we have to really think about whether or not promoting pseudonyms is a sufficient or an equitable solution.

    The answer can't simply be "don't blog on the open Web." Or "keep everything inside the 'safety' of the walled garden, the learning management system." If nothing else, this presumes that what happens inside siloed, online spaces is necessarily "safe." I know I've seen plenty of horrible behavior on closed forums, for example, from professors and students alike. I've seen heavy-handed moderation, where marginalized voices find their input deleted. I've seen zero moderation, where marginalized voices are mobbed. We recently learned, for example, that Walter Lewin, emeritus professor at MIT and one of the original rockstar professors of YouTube (millions have watched the demonstrations from his physics lectures), has been accused of sexually harassing women in his edX MOOC.

    The answer can't simply be "just don't read the comments." I would say that it might be worth rethinking "comments" on student blogs altogether—or rather the expectation that they host them, moderate them, respond to them. See, if we give students the opportunity to "own their own domain," to have their own websites, their own space on the Web, we really shouldn't require them to let anyone who can create a user account into that space. It's perfectly acceptable to say to someone who wants to comment on a blog post, "Respond on your own site. Link to me. But I am under no obligation to host your thoughts in my domain."

    And see, that starts to hint at what I think the answer is to this question about the unpleasantness, by design, of technology. It starts to get at what any sort of "solution" or "alternative" has to look like: it has to be both social and technical. It also needs to recognize there's a history that might help us understand what's done now and why. If, as I've argued, the current shape of education technologies has been shaped by certain ideologies and certain bodies, we should recognize that we aren't stuck with those. We don't have to "do" tech as it's been done in the last few years or decades. We can design differently. We can design around. We can use differently. We can use around.

    One interesting example of this dual approach that combines both social and technical—outside the realm of ed-tech, I recognize—are the tools that Twitter users have built in order to address harassment on the platform. Having grown weary of Twitter's refusal to address the ways in which it is utilized to harass people (remember, its engineering team is 90% male), a group of feminist developers wrote The Block Bot, an application that lets you block, en masse, a large list of Twitter accounts who are known for being serial harassers. That list of blocked accounts is updated and maintained collaboratively. Similarly, Block Together lets users subscribe to others' block lists. And there's Good Game Autoblocker, a tool that blocks the "ringleaders" of GamerGate.
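    The mechanism behind these tools is straightforward; what matters is the social arrangement it encodes, that the list is maintained collectively and its protection is shared. A rough sketch of the idea (hypothetical; this is not the actual code or API of The Block Bot or Block Together):

```python
# Hypothetical sketch of a collaboratively maintained, subscribable block list.
# Not the actual code or API of The Block Bot or Block Together.

class SharedBlockList:
    """A block list that any maintainer can add to; all subscribers benefit."""

    def __init__(self):
        self.blocked = set()

    def add(self, account):
        self.blocked.add(account)

class Subscriber:
    """A user who inherits the shared list instead of blocking alone."""

    def __init__(self, shared_list):
        self.shared_list = shared_list

    def is_blocked(self, account):
        return account in self.shared_list.blocked

harassers = SharedBlockList()
harassers.add("@serial_harasser")      # one maintainer flags an account
reader = Subscriber(harassers)         # a subscriber inherits the whole list
print(reader.is_blocked("@serial_harasser"))  # True
```

    The technical part is trivial, a shared set; the point is the collective labor and trust behind who maintains it.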

    That gets, just a bit, at what I think we can do in order to make education technology habitable, sustainable, and healthy. We have to rethink the technology. And not simply as some nostalgia for a “Web we lost,” for example, but as a move forward to a Web we’ve yet to ever see. It isn’t simply, as Isaacson would posit it, rediscovering innovators that have been erased, it’s about rethinking how these erasures happen all throughout technology’s history and continue today—not just in storytelling, but in code.

    Educators should want ed-tech that is inclusive and equitable. Perhaps education needs reminding of this: we don't have to adopt tools that serve business goals or administrative purposes, particularly when they work to the detriment of scholarship and/or student agency—technologies that surveil and control and restrict, for example, under the guise of "safety," a rationale that gets trotted out from time to time but that has never been about students' needs at all. We don't have to accept that technology needs to extract value from us. We don't have to accept that technology puts us at risk. We don't have to accept that the architecture, the infrastructure of these tools makes it easy for harassment to occur without any consequences. We can build different and better technologies. And we can build them with and for communities, communities of scholars and communities of learners. We don't have to be paternalistic as we do so. We don't have to "protect students from the Internet," rehashing all the arguments about stranger danger and predators and pedophiles. But we should recognize that if we want education to be online, if we want education to be immersed in technologies, information, and networks, we can't really throw students out there alone. We need to be braver and more compassionate, and we need to build that into ed-tech. Like The Block Bot or Block Together, this should be a collaborative effort, one that blends our cultural values with the technology we build.

    Because here's the thing. The answer to all of this—to harassment online, to the male domination of the technology industry, to the Silicon Valley domination of ed-tech—is not silence. And the answer is not to let our concerns be explained away. That is, after all, as Rebecca Solnit reminds us, one of the goals of mansplaining: to get us to cower, to hesitate, to doubt ourselves and our stories and our needs, to step back, to shut up. Now more than ever, I think we need to be louder and clearer about what we want education technology to do—for us and with us, not simply to us.
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED's education technology blog MindShift, in the data section of O'Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this review first appeared, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.

    Back to the essay

  • The Automatic Teacher

    The Automatic Teacher

    By Audrey Watters
    ~

    “For a number of years the writer has had it in mind that a simple machine for automatic testing of intelligence or information was entirely within the realm of possibility. The modern objective test, with its definite systemization of procedure and objectivity of scoring, naturally suggests such a development. Further, even with the modern objective test the burden of scoring (with the present very extensive use of such tests) is nevertheless great enough to make insistent the need for labor-saving devices in such work” – Sidney Pressey, “A Simple Apparatus Which Gives Tests and Scores – And Teaches,” School and Society, 1926

    Ohio State University professor Sidney Pressey first displayed the prototype of his "automatic intelligence testing machine" at the 1924 American Psychological Association meeting. Two years later, he applied for a patent on the device and spent the next decade or so trying to market it (to manufacturers and investors, as well as to schools).

    It wasn’t Pressey’s first commercial move. In 1922 he and his wife Luella Cole published Introduction to the Use of Standard Tests, a “practical” and “non-technical” guide meant “as an introductory handbook in the use of tests” aimed to meet the needs of “the busy teacher, principal or superintendent.” By the mid–1920s, the two had over a dozen different proprietary standardized tests on the market, selling a couple of hundred thousand copies a year, along with some two million test blanks.

    Although standardized tests had become commonplace in the classroom by the 1920s, they were already placing a significant burden upon the teachers and clerks tasked with scoring them. Hoping to capitalize yet again on the test-taking industry, Pressey argued that automation could "free the teacher from much of the present-day drudgery of paper-grading drill, and information-fixing – should free her for real teaching of the inspirational."


    The Automatic Teacher

    Here’s how Pressey described the machine, which he branded as the Automatic Teacher in his 1926 School and Society article:

    The apparatus is about the size of an ordinary portable typewriter – though much simpler. …The person who is using the machine finds presented to him in a little window a typewritten or mimeographed question of the ordinary selective-answer type – for instance:

    To help the poor debtors of England, James Oglethorpe founded the colony of (1) Connecticut, (2) Delaware, (3) Maryland, (4) Georgia.

    To one side of the apparatus are four keys. Suppose now that the person taking the test considers Answer 4 to be the correct answer. He then presses Key 4 and so indicates his reply to the question. The pressing of the key operates to turn up a new question, to which the subject responds in the same fashion. The apparatus counts the number of his correct responses on a little counter to the back of the machine…. All the person taking the test has to do, then, is to read each question as it appears and press a key to indicate his answer. And the labor of the person giving and scoring the test is confined simply to slipping the test sheet into the device at the beginning (this is done exactly as one slips a sheet of paper into a typewriter), and noting on the counter the total score, after the subject has finished.

    The above paragraph describes the operation of the apparatus if it is being used simply to test. If it is to be used also to teach then a little lever to the back is raised. This automatically shifts the mechanism so that a new question is not rolled up until the correct answer to the question to which the subject is responding is found. However, the counter counts all tries.

    It should be emphasized that, for most purposes, this second set is by all odds the most valuable and interesting. With this second set the device is exceptionally valuable for testing, since it is possible for the subject to make more than one mistake on a question – a feature which is, so far as the writer knows, entirely unique and which appears decidedly to increase the significance of the score. However, in the way in which it functions at the same time as an ‘automatic teacher’ the device is still more unusual. It tells the subject at once when he makes a mistake (there is no waiting several days, until a corrected paper is returned, before he knows where he is right and where wrong). It keeps each question on which he makes an error before him until he finds the right answer; he must get the correct answer to each question before he can go on to the next. When he does give the right answer, the apparatus informs him immediately to that effect. If he runs the material through the little machine again, it measures for him his progress in mastery of the topics dealt with. In short the apparatus provides in very interesting ways for efficient learning.
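    Pressey's two modes amount to a small piece of control logic: in test mode the machine advances on any keypress and counts correct answers; with the lever raised, it refuses to advance until the right key is pressed, while counting every try. A rough Python rendering of that logic (the interface here is invented; the behavior follows Pressey's description above):

```python
# A rough rendering of the Automatic Teacher's logic as Pressey describes it.
# Interface invented; behavior follows his 1926 account: in "test" mode any
# keypress advances and correct answers are counted; in "teach" mode (lever
# raised) the question repeats until answered correctly, and every try counts.

def run_machine(questions, key_presses, teach_mode):
    """questions: list of correct keys; key_presses: the subject's inputs."""
    presses = iter(key_presses)
    counter = 0
    for correct_key in questions:
        while True:
            key = next(presses)
            if teach_mode:
                counter += 1              # the counter counts all tries
                if key == correct_key:
                    break                 # only now roll up a new question
            else:
                if key == correct_key:
                    counter += 1          # counts correct responses only
                break                     # any keypress advances

    return counter

# Two questions, correct keys 4 then 2.
print(run_machine([4, 2], [4, 3], teach_mode=False))     # 1 correct response
print(run_machine([4, 2], [3, 4, 2], teach_mode=True))   # 3 tries in all
```

    The same hardware, flipped by one lever, turns a scoring device into a drill-and-practice machine, which is exactly the pivot Pressey was selling.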

    A video from 1964 shows Pressey demonstrating his “teaching machine,” including the “reward dial” feature that could be set to dispense a candy once a certain number of correct answers were given:

    [youtube https://www.youtube.com/watch?v=n7OfEXWuulg?rel=0]

    Market Failure

    UBC's Stephen Petrina documents the commercial failure of the Automatic Teacher in his 2004 article "Sidney Pressey and the Automation of Education, 1924–1934." According to Petrina, Pressey started looking for investors for his machine in December 1925, "first among publishers and manufacturers of typewriters, adding machines, and mimeograph machines, and later, in the spring of 1926, extending his search to scientific instrument makers." He approached at least six Midwestern manufacturers in 1926, but no one was interested.

    In 1929, Pressey finally signed a contract with the W. M. Welch Manufacturing Company, a Chicago-based company that produced scientific instruments.

    Petrina writes that,

    After so many disappointments, Pressey was impatient: he offered to forgo royalties on two hundred machines if Welch could keep the price per copy at five dollars, and he himself submitted an order for thirty machines to be used in a summer course he taught school administrators. A few months later he offered to put up twelve hundred dollars to cover tooling costs. Medard W. Welch, sales manager of Welch Manufacturing, however, advised a “slower, more conservative approach.” Fifteen dollars per machine was a more realistic price, he thought, and he offered to refund Pressey fifteen dollars per machine sold until Pressey recouped his twelve-hundred-dollar investment. Drawing on nearly fifty years experience selling to schools, Welch was reluctant to rush into any project that depended on classroom reforms. He preferred to send out circulars advertising the Automatic Teacher, solicit orders, and then proceed with production if a demand materialized.

    [Image: advertisement for Pressey’s Automatic Teacher]

    The demand never really materialized, and even if it had, the manufacturing process – getting the device to market – was plagued with problems, caused in part by Pressey’s constant demands to redefine and retool the machines.

    The stress from the development of the Automatic Teacher took an enormous toll on Pressey’s health, and he had a breakdown in late 1929. (He was still teaching, supervising courses, and advising graduate students at Ohio State University.)

    The devices did finally ship in April 1930. But that original sales price was cost-prohibitive. $15 was, as Petrina notes, “more than half the annual cost ($29.27) of educating a student in the United States in 1930.” Welch could not sell the machines and ceased production with 69 of the original run of 250 devices still in stock.

    Pressey admitted defeat. In a 1932 School and Society article, he wrote, “The writer is regretfully dropping further work on these problems. But he hopes that enough has been done to stimulate other workers.”

    But Pressey didn’t really abandon the teaching machine. He continued to present his research at APA meetings, and in a 1964 article, “Teaching Machines (And Learning Theory) Crisis,” he conceded that “Much seems very wrong about current attempts at auto-instruction.”

    Indeed.

    Automation and Individualization

    In his article “Toward the Coming ‘Industrial Revolution’ in Education” (1932), Pressey wrote that

    “Education is the one major activity in this country which is still in a crude handicraft stage. But the economic depression may here work beneficially, in that it may force the consideration of efficiency and the need for laborsaving devices in education. Education is a large-scale industry; it should use quantity production methods. This does not mean, in any unfortunate sense, the mechanization of education. It does mean freeing the teacher from the drudgeries of her work so that she may do more real teaching, giving the pupil more adequate guidance in his learning. There may well be an ‘industrial revolution’ in education. The ultimate results should be highly beneficial. Perhaps only by such means can universal education be made effective.”

    Pressey intended for his automated teaching and testing machines to individualize education. It’s an argument that’s made about teaching machines today too. These devices will allow students to move at their own pace through the curriculum. They will free up teachers’ time to work more closely with individual students.

    But as Petrina argues, “the effect of automation was control and standardization.”

    The Automatic Teacher was a technology of normalization, but it was at the same time a product of liberality. The Automatic Teacher provided for self-instruction and self-regulated, therapeutic treatment. It was designed to provide the right kind and amount of treatment for individual, scholastic deficiencies; thus, it was individualizing. Pressey articulated this liberal rationale during the 1920s and 1930s, and again in the 1950s and 1960s. Although intended as an act of freedom, the self-instruction provided by an Automatic Teacher also habituated learners to the authoritative norms underwriting self-regulation and self-governance. They not only learned to think in and about school subjects (arithmetic, geography, history), but also how to discipline themselves within this imposed structure. They were regulated not only through the knowledge and power embedded in the school subjects but also through the self-governance of their moral conduct. Both knowledge and personality were normalized in the minutiae of individualization and in the machinations of mass education. Freedom from the confines of mass education proved to be a contradictory project and, if Pressey’s case is representative, one more easily automated than commercialized.

    Those behind the massive influx of venture capital into today’s teaching machines, of course, would like to believe otherwise…
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this review first appeared.

    Back to the essay