The future of education is technological. Necessarily so.
Or that’s what the proponents of ed-tech would want you to believe. In order to prepare students for the future, the practices of teaching and learning – indeed the whole notion of “school” – must embrace tech-centered courseware and curriculum. Education must adopt not only the products but the values of the high tech industry. It must conform to the demands for efficiency, speed, scale.
To resist technology, therefore, is to undermine students’ opportunities. To resist technology is to deny students their future.
Or so the story goes.
Shoshana Zuboff weaves a very different tale in her book The Age of Surveillance Capitalism. Its subtitle, The Fight for a Human Future at the New Frontier of Power, underscores her argument that the acquiescence to new digital technologies is detrimental to our futures. These technologies foreclose rather than foster future possibilities.
And that sure seems plausible, what with our social media profiles being scrutinized to adjudicate our immigration status, our fitness trackers being monitored to determine our insurance rates, our reading and viewing habits being manipulated by black-box algorithms, our devices listening in and nudging us as the world seems to totter towards totalitarianism.
We have known for some time now that tech companies extract massive amounts of data from us in order to run (and ostensibly improve) their services. But increasingly, Zuboff contends, these companies are now using our data for much more than that: to shape and modify and predict our behavior – “‘treatments’ or ‘data pellets’ that select good behaviors,” as one ed-tech executive described it to Zuboff. She calls this “behavioral surplus,” a concept that is fundamental to surveillance capitalism, which she argues is a new form of political, economic, and social power that has emerged from the “internet of everything.”
Zuboff draws in part on the work of B. F. Skinner to make her case – his work on behavioral modification of animals, obviously, but also his larger theories about behavioral and social engineering, best articulated perhaps in his novel Walden Two and in his most controversial book Beyond Freedom and Dignity. By shaping our behaviors – through nudges and rewards, “data pellets,” and the like – technologies circumscribe our ability to make decisions. They impede our “right to the future tense,” Zuboff contends.
Google and Facebook are paradigmatic here, and Zuboff argues that the former was instrumental in discovering the value of behavioral surplus when it began, circa 2003, using user data to fine-tune ad targeting and to make predictions about which ads users would click on. More clicks, of course, led to more revenue, and behavioral surplus became a new and dominant business model, at first for digital advertisers like Google and Facebook but shortly thereafter for all sorts of companies in all sorts of industries.
And that includes ed-tech, of course – most obviously in predictive analytics software that promises to identify struggling students (such as Civitas Learning) and in behavior management software that’s aimed at fostering “a positive school culture” (like ClassDojo).
Google and Facebook, whose executives are clearly the villains of Zuboff’s book, have keen interests in the education market too. The former is much more overt, no doubt, with its Google Suite product offerings and its ubiquitous corporate evangelism. But the latter shouldn’t be ignored, even if it’s seen as simply a consumer-facing product. Mark Zuckerberg is an active education technology investor; Facebook has “learning communities” called Facebook Education; and the company’s engineers helped to build the personalized learning platform for the charter school chain Summit Schools. The kinds of data extraction and behavioral modification that Zuboff identifies as central to surveillance capitalism are part of Google and Facebook’s education efforts, even if laws like COPPA prevent these firms from monetizing the products directly through advertising.
Despite these companies’ influence in education, despite Zuboff’s reliance on B. F. Skinner’s behaviorist theories, and despite her insistence that surveillance capitalists are poised to dominate the future of work – not as a division of labor but as a division of learning – Zuboff has nothing much to say about how education technologies specifically might operate as a key lever in this new form of social and political power that she has identified. (The quotation above from the “data pellet” fellow notwithstanding.)
Of course, I never expect people to write about ed-tech, despite the importance of the field historically to the development of computing and Internet technologies or the theories underpinning them. (B. F. Skinner is certainly a case in point.) Intertwined with the notion that “the future of education is necessarily technological” is the idea that the past and present of education are utterly pre-industrial, and that digital technologies must be used to reshape education (and education technologies) – this rather than recognizing the long, long history of education technologies and the ways in which these have shaped what today’s digital technologies generally have become.
As Zuboff relates the history of surveillance capitalism, she contends that it constitutes a break from previous forms of capitalism (forms that Zuboff seems to suggest were actually quite benign). I don’t buy it. She claims she can pinpoint this break to a specific moment and a particular set of actors, positing that the origin of this new system was Google’s development of AdSense. She does describe a number of other factors at play in the early 2000s that led to the rise of surveillance capitalism: notably, a post–9/11 climate in which the US government was willing to overlook growing privacy concerns about digital technologies and to use them instead to surveil the population in order to predict and prevent terrorism. And there are other threads she traces as well: neoliberalism and the pressures to privatize public institutions and deregulate private ones; individualization and the demands (socially and economically) of consumerism; and behaviorism and Skinner’s theories of operant conditioning and social engineering. While Zuboff does talk at length about how we got here, the “here” of surveillance capitalism, she argues, is a radically new place with new markets and new socioeconomic arrangements:
the competitive dynamics of these new markets drive surveillance capitalists to acquire ever-more-predictive sources of behavioral surplus: our voices, personalities, and emotions. Eventually, surveillance capitalists discovered that the most-predictive behavioral data come from intervening in the state of play in order to nudge, coax, tune, and herd behavior toward profitable outcomes. Competitive pressures produced this shift, in which automated machine processes not only know our behavior but also shape our behavior at scale. With this reorientation from knowledge to power, it is no longer enough to automate information flows about us; the goal now is to automate us. In this phase of surveillance capitalism’s evolution, the means of production are subordinated to an increasingly complex and comprehensive ‘means of behavioral modification.’ In this way, surveillance capitalism births a new species of power that I call instrumentarianism. Instrumentarian power knows and shapes human behavior toward others’ ends. Instead of armaments and armies, it works its will through the automated medium of an increasingly ubiquitous computational architecture of ‘smart’ networked devices, things, and spaces.
As this passage indicates, Zuboff believes (but never states outright) that a Marxist analysis of capitalism is no longer sufficient. And this is incredibly important as it means, for example, that her framework does not address how labor has changed under surveillance capitalism. Because even with the centrality of data extraction and analysis to this new system, there is still work. There are still workers. There is still class and plenty of room for an analysis of class, digital work, and high tech consumerism. Labor – digital or otherwise – remains in conflict with capital. The Age of Surveillance Capitalism, as Evgeny Morozov’s lengthy review in The Baffler puts it, might succeed as “a warning against ‘surveillance dataism,’” but largely fails as a theory of capitalism.
Yet the book, while ignoring education technology, might be at its most useful in helping further a criticism of education technology in just those terms: as surveillance technologies, relying on data extraction and behavior modification. (That’s not to say that education technology criticism shouldn’t develop a much more rigorous analysis of labor. Good grief.)
As Zuboff points out, B. F. Skinner “imagined a pervasive ‘technology of behavior’” that would transform all of society but that, at the very least he hoped, would transform education. Today’s corporations might be better equipped to deliver technologies of behavior at scale, but this was already a big business in the 1950s and 1960s. Skinner’s ideas did not only exist in the fantasy of Walden Two. Nor did they operate solely in the psych lab. Behavioral engineering was central to the development of teaching machines; and despite the story that somehow, after Chomsky denounced Skinner in the pages of The New York Review of Books, no one “did behaviorism” any longer, it remained integral to much of educational computing on into the 1970s and 1980s.
And on and on and on – a more solid through line than the all-of-a-suddenness that Zuboff narrates for the birth of surveillance capitalism. Personalized learning – the kind hyped these days by Mark Zuckerberg and many others in Silicon Valley – is just the latest version of Skinner’s behavioral technology. Personalized learning relies on data extraction and analysis; it urges and rewards students and promises everyone will reach “mastery.” It gives the illusion of freedom and autonomy perhaps – at least in its name; but personalized learning is fundamentally about conditioning and control.
“I suggest that we now face the moment in history,” Zuboff writes, “when the elemental right to the future tense is endangered by a panvasive digital architecture of behavior modification owned and operated by surveillance capital, necessitated by its economic imperatives, and driven by its laws of motion, all for the sake of its guaranteed outcomes.” I’m not so sure that surveillance capitalists are assured of guaranteed outcomes. The manipulation of platforms like Google and Facebook by white supremacists demonstrates that it’s not just the tech companies who are wielding this architecture to their own ends.
Nevertheless, those who work in and work with education technology need to confront and resist this architecture – the “surveillance dataism,” to borrow Morozov’s phrase – even if (especially if) the outcomes promised are purportedly “for the good of the student.”
The supposition [is] that higher education and schooling in general serve a democratic society by nourishing hearty citizenship.
– Richard Ohmann (2003)
What are the risks of writing in public in this digital age? Of being a “speaking” subject in the world of public cyberspace? Physical and legal risks are discussed in work such as Nancy Welch’s (2005) recounting of her student’s encounter with the police for literally posting her poems where bills or poems were not meant to be posted. Weisser recounts a “hallway conversation” about public writing as “shared work, shared successes, and, occasionally, shared commiseration” (2002, xii). Likewise, in writing about blogging in the classroom, Charles Tryon writes about the way blogging with interactions from the public provokes “conversations” about the “relationship between writing and audience,” one that can, at times, be uncomfortable (2006, 130). There is an assumption that when discussing the “risks” of writing in public here in the United States, we instructors are discussing the risks of exercising the rights of citizenship, of first amendment disagreement and discord. Yet the assumption that the speaking subject has first amendment rights, that they possess or can express citizenship, is one which nullifies the risks some students face when they write in public, especially in digital spaces where the audience can be a vast everyone. What is the position of one who writes in public literally without the possibility of citizenship? In the absence of US citizenship, taking the position of subject, offering testimony about their situation, and protesting it as unjust can provoke not simply abuse, which is disturbing enough, but threats of legal action. Public writing opens them and their families up to threats of reporting, detainment, and possible deportation by the government. Given these very real risks, I question whether, from a Chicanx studies pedagogy, we should be advocating for and instructing our students to express their thoughts on their positions, on their lives, in public.[1] This question feels especially urgent when, given the digital turn, to write in “public” can mean a single tweet results in huge consequences, from public humiliation to the horror of doxxing. To paraphrase Eileen Medeiros, who writes about these risks in another context, “was it all worth a few articles and essays” or, to make it more contemporary, is the risk worth a few blog posts or ‘zines? (2007, 4).
This said, I was and am convinced about the power and efficacy of having students write in public, especially for Chicanx studies classrooms. Faced with the possibilities offered by the Internet and their effects on the Chicanx studies classroom, my response has been enthusiasm for the electronic, for electronic writing, for making our discourse public. Chicanx pedagogy is, in part, based on a repudiation of top-down instruction. As a pedagogy, public writing instead advocates bringing the community into the classroom and the classroom into the community. Blogging is an effective way to do this. Especially given the relative lack of Chicanx digital voices on the ‘net, I yearn for my students to own part of the Internet, to be seen and heard. This enthusiasm for having my Chicanx studies students write for the Internet came first out of my final year of dissertation research, when I “discovered” that, online, terms from the Chicano Movement like “Aztlán,” “La Raza,” and so on were being used by reactionary racists to (re)define and revise the history of the Chicano Movement as racist and anti-Semitic and to wildly distort the goals, philosophies, and accomplishments of revolutionary movements. More disturbing, these mis-definitions were well enough linked to appear on the first few pages of search results, inflating their importance and giving them a sense of being “truth” merely by virtue of their being oft repeated. My students’ writings, my thinking went, would change this. Their work, I imagined, would be interventions in the false discourse, changing, via Google hits, what people would find when they entered Chicanx studies terms into their browsers. Besides, I did my undergraduate degree at a university in the midwest without a Chicanx or Latinx studies department. My independent study classes in Chicanx literature were constructed from syllabi for courses I found online. I was, therefore, imagining our public writing being used by people without access to a Chicanx studies classroom to further their own educations.
Public writing, generally defined as writing for an audience beyond the professor and classroom, can be figured in a variety of ways, but descriptions, especially those in the form of learning objectives and outcomes, tend toward a focus on writing centered on social change and the fostering of citizenship. This concept of “citizenship” is often repeated in composition studies as public writing is discussed as advocacy, as service, as an expression of active citizenship. Indeed, the public writer has been figured by theorists as an expression of “citizenship” and an exercise in and demonstration of first amendment rights. Public writing is presented as being, as Christian Weisser wrote, the “discourse of public life”; Weisser further writes of his pride in being “a citizen in a self-reforming constitutional democracy” (xiv). Public writing is presented as nurturing citizenship, and therefore we are encouraged to foster it in our classrooms, especially in the teaching of writing. Weisser also writes of the teaching of public writing as a “shared” classroom experience, sometimes including hardships, between students and instructors.
However, this discussion of “citizenship” and the idea of creating it through teaching rather disturbingly echoes, to me, the idea of assimilation to the dominant culture, an idea that Chicana/o studies pedagogy resists (Perez 1993, 276). Rather than the somewhat nationalistic goal of creating and fostering “citizenship,” Chicana/o studies, especially since the 1980s publication and adoption of Gloria Anzaldúa’s Borderlands, has worked toward a discourse that “explains the social conditions of subjects with hybrid identities” (Elenes 1997, 359). These hybrid identities, and the assumption of the position of subjecthood by those who resist the idea of nation, are fraught, especially when combined with public writing. As Anzaldúa writes, “[w]ild tongues can’t be tamed, they can only be cut out” (1987, 76). The responses to Chicanx and Latinx students speaking or writing their truth can be demands for their silence.
My students’ and my use of public writing via blogging and Twitter had been productive in the upper-division classes I taught on Latina coming-of-age stories, Chicana feminisms, and Chicana/o gothic literature. After four courses taught using blogs created on my WordPress multisite installation with author accounts created for each student, I felt that I had the blogging with students / writing in public / student archiving thing down. My students had always had the option to write pseudonymously, but most had chosen to write under their own names, wanting to create a public digital identity. The blogs were on my domain and identified with their specific university and course. We had been contacted by authors (and, in one case, an author’s agent), filmmakers, and artists, and other bloggers had linked to our work. My students and I could see we had a small but steady traffic of people accessing student writing; their work was being read and seen, and, on a few topics, our class pages were on the first pages of a Google search. Therefore, when I was scheduled to teach a broader “Introduction to Chicana/o Studies” course, I decided to apply the same structure to this one-hundred-level survey course: students publicly blogging their writings on a common course blog on issues related to Chicanx studies. Although, in keeping with my specialization, the course was humanities-heavy, with a focus on history, literature, and visual art, the syllabus also included a significant amount of work in social science, especially sociology and political science, forming the foundations of Chicanx studies theory. The course engaged a number of themes related to Chicanx social and political identity, focusing a significant amount of work on communities and migrations. The demographics of the course were mixed. In the thirty-student class, about half identified as Latina/o. The rest were largely white American, with several European international students.
As we read about migrations, studying and discussing the politics behind both the immigrant rights May Day marches in Los Angeles and the generations of migrations back and forth across the border, movements of people which long pre-dated the border wall, we also discussed the contemporary protest writings and art creations by the UndocuQueer Movement. In the course of class discussion, sometimes in response to comments their classmates were making that left them feeling that undocumented people were being stereotyped, several students self-disclosed that they were either the children of undocumented migrants or were undocumented themselves. These students discussed their experience of not being citizens of the country they had lived in since young childhood, the fear of deportation they felt for themselves or their parents, and its effect on them. The students also spoke of their hopes for a future in which they, and / or their families, could apply for and receive legal status, become citizens. This self-disclosure and recounting of personal stories had, as had been my experience in previous courses, a significant effect on the other students in the class, especially for those who had never considered the privileges their legal status afforded them. In the process the undocumented students became witnesses and experts, giving testimony. They made it clear they felt empowered by giving voice to their experience and seeing that their disclosures changed the minds of some of their classmates on who was undocumented and what they looked like.
After seeing the effect the testimonies had in changing the attitudes of their classmates, my undocumented students, especially one who had strongly identified with the UndocuQueer movement (in one case, the student had already participated in demonstrations), began to blog about their experiences, taking issue with stereotypes of migrants and discussing the pain reading or hearing a term like “illegals” could cause. Drawing on the course-assigned examples of writers Anzaldúa and Cherríe Moraga, they used their experiences, their physical bodies, as both evidence and metaphor of the undocumented state of being in-between, not belonging fully to any country or nation. They also wrote of their feelings of invisibility on a majority white campus where equal rights of citizenship were assumed and taken for granted. Their writing was raw and powerful, challenging, passionate and, at times, angry. These student blog posts seemed the classic case of students finding their voices. As an instructor, I was pleased with their work and gave them positive feedback, as did many of their classmates. Yet as their instructor, I was focused on the pedagogy and their learning outcomes relative to the course and had not fully considered the risk they were taking writing their truth in public.
As part of being instructor and WordPress administrator, I was also moderating comments to the blog. The settings had the blog open to public comments, with the first comment from any email address being hand moderated in order to prevent spamming. However, for the most part, unless an author we were reading had been alerted via Twitter, comments were between and among students in the course, which gave the course blog the feeling of being an extension of the classroom community, an illusion of privacy and intimacy. Due to this closeness, the fact that the posts and comments were all coming from class members, the students and I largely lost sight of the fact that we were writing in public, as the space came to feel private. This illusion of privacy was shattered when I got a comment for moderation from what turned out to be a troll demanding “illegals” be deported. Although it was not posted, what I read was an attack on one of my students, hinting that the poster had done (or would do) enough digging to identify the student and their family. Not only was the comment abusive, the commenter claimed to have reported my student to ICE.
I was reminded of the comment and the violent anger directed at undocumented students, however worthy they might try and prove themselves, again in June 2016 when Mayte Lara Ibarra, an honors high school student in Texas, tweeted her celebration of her status as valedictorian, her high GPA, her honors at graduation, her scholarship to the University of Texas and her undocumented status. While she received many messages of support, she felt forced to remove her tweet due to abuse and threats of deportation by users who claimed to have reported her and her family to Immigration and Customs Enforcement (ICE).
When I received this comment for moderation, my first response was to go through and change the status of the blog posts testifying about being undocumented to “drafts” and then to contact the students who had self-disclosed their status to let them know about the comment and the threat. I feared for my students and their families. Had I encouraged behavior–public writing–that made them vulnerable? I wondered whether I should go to my chair for advice. Guilty self-interest was also present. At the time I was an adjunct instructor at this university, hired semester-to-semester to teach individual classes. How would my chair, the department, the university feel about my having put my students at risk to write for a blog on my own domain? Suddenly the “walls” set up by Blackboard, the university’s learning management software, that I had dismissed for being “closed,” looked appealing as I wondered how to manage this threat. Much of the discourse around public writing for the classroom discusses “risk,” but whose risk are we talking about, how much of it can students take, and, as their instructor, what sort of risks can I be responsible for allowing them to take? Nancy Welch discusses the “attention toward working with students on public writing” as an expression of our belief as instructors that this writing “can matter in tangible ways” (2005, 474), but I questioned whether it could matter enough to be worth tangible risk to my students’ and their families’ physical bodies at the hands of a nation-state that has detained and deported more than 2.5 million people since 2009 (Gonzalez-Barrera and Krogstad 2014). While some of the students in this class qualified for Deferred Action for Childhood Arrivals (DACA), giving them something of a shield, their parents and other members of their families did not all have this protection.
By contrast, and perhaps not surprisingly, my students, all of them first- and second-year students, felt no risk, or at least were sure they were willing to take the risk associated with the public writing. They did not want their writing taken down or hidden. My students felt they were part of a movement, a moment, to expressly leave the shadows. One even argued that the abusive comment should be posted so they could engage with its author. We discussed the risks. Initially I wanted them to be able to make the choice themselves; I did not want to take their voice or power from them. Yet that was not true—what I wanted was for them to choose to take the writing down and absolve me of the responsibility for the danger in which my assignments had placed them. On the other hand, though, as I explained to them, the power and responsibility rested with me. I could not conscience putting them at risk on a domain I owned, for doing work I had assigned. They agreed, albeit reluctantly. What I find most shameful in this is that it was not their own risk, but their understanding of mine, of my position in the department and university, that made them agree I needed to take their writing down. We made their posts password protected, shared the password with the other students for the duration of the class, and the course ended uneasily in our mutual discomfort. Nothing was comfortably resolved at this meeting of immigration law with my students’ bodies and their public writing. At the end of the course, after notifying them so they could save their writing if they wished, I did something I had never done before. I removed the students’—all of the students’—blogging from the Internet by archiving the course blog and removing it from public view.
As I began to process and analyze what had happened, I wondered what could be done differently. Was there a way to allow my students to write in public yet somehow shield them from these risks? After I presented and discussed this experience at HASTAC in 2015, I was approached with a number of possible solutions, some of which would help. One, offered very generously, was to host my next course blog on the HASTAC site, where commenting requires registration. This was a partial solution that would protect against trolling, but I questioned whether it could protect my students from identification, from them and their families being reported to the authorities. The answer was no, it could not.
In 2011, Amy Wan examined, traced, and problematized the idea of creating citizens and expressing citizenship through the teaching of literacy, a concept which she follows through composition pedagogy, especially as it is expressed on syllabi and through learning objectives. The pedagogy of public writing is imbued with the assumption of citizenship, with the production of citizens as a goal of public writing. Taken this way, public writing becomes a duty. Yet there is a problem with this objective of producing citizens and this desire for citizenship when it comes to students in our classes who lack legal citizenship. Anthropology in the 1990s tried to work around and give dignity to those without “full” citizenship by presenting the idea of “cultural citizenship” as a way to refer to shared values of belonging among people without legal citizenship. This was done as a way of trying to de-marginalize the marginalized and reimagine citizenship so no one’s status was second class (Rosaldo 1994, 402). But the situation of undocumented people belies this distinction, however noble and well rooted in social justice its intention. To be undocumented in the United States is to be dispossessed not only of the rights of citizenship, but also to have the exercise of either the rights or responsibilities of citizenship through public speaking or writing be taken as incitement against the nation-state, with some citizens viewing it as a personal assault.
This problem of the exercise of rights being seen as incitement is demonstrated by the way the display of the Mexican flag at protests for immigrant rights is read as a rejection of the United States and a refusal of US citizenship, despite the protests themselves being demands or pleas for the creation of a citizenship path. The mere display of Mexico’s flag is read as a provocation, an action which, even when done by citizens, destabilizes citizenship, seems to remove protesters’ first amendment rights, and prompts cries that they should “Go back to Mexico,” or, more recently, for the government to “Build a wall.” Latinxs displaying national flags are accused of wanting to conquer (or reconquer) the southwest, reclaiming it from the United States for Mexico. This anxiety about being “conquered” by the growing Latinx population is, perhaps, itself a display of anxiety that the southwestern states (what Chicanxs call Aztlán) are not so much a stable part of the conquered body as an expression of how the idea of “nation” is itself unstable within the US borders. When a non-citizen, a subject sin papeles, writes about the experience of being undocumented, they are faced with a backlash from those who believe their existence, if they are allowed existence in the United States at all, is one without rights, without voice. Any attempt to give voice to their position brings overt threats of government action against their tenuous existence in the US, however strong their cultural ties to the United States. My students, writing in public about their undocumented status, are reminded that their bodies are not citizens and that the right to free speech, the right to write one’s truth in public, is one given to citizen subjects.
This has left me with a paradox. My students should write in public. Part of what they are learning in Chicanx studies is the importance of their voices and their experiences, and that their stories are ones that should be told. Yet, given the risks in discussing migration and immigration through the use of public writing, I wonder whether I as an instructor should encourage or discourage students from writing their lives, their experiences as undocumented migrants, experiences which have touched every aspect of their lives. From a practical point of view, I could set up stricter anonymity so their identities are better shielded. I could have them create their own blogs, thus passing the responsibility to them to protect themselves. Or I could make the writing “public” only in the sense of it being public in the space of the classroom, by using learning management software to keep it, and them, behind a protective wall.
_____
Annemarie Perez is an Assistant Professor of Interdisciplinary Studies at California State University Dominguez Hills. Her area specialty is Latina/o literature and culture, with a focus on Chicana feminist writer-editors from 1965 to the present, and on digital humanities and digital pedagogy and their intersections and divisions within ethnic and cultural studies. She is writing a book on Chicana feminist editorship, using digital research to perform close readings across multiple editions and versions of articles and essays.
[*]This article is an outgrowth of a paper presented at HASTAC 2015 for a session titled: DH: Affordances and Limits of Post/Anti/Decolonial and Indigenous Digital Humanities. The other panel presenters were: Roopika Risam (moderator), Siobhan Senier, Micha Cárdenas and Dhanashree Thorat.
_____
Notes
[1] “Chicanx” is a gender neutral, sometimes contested, term of self-identification. I use it to mean someone of Mexican origin, raised in the United States, identifying with a politic of resistance to mainstream US hegemony and an identification with indigenous American cultures.
_____
Works Cited
Anzaldúa, Gloria. 1987. Borderlands/La Frontera: The New Mestiza. San Francisco: Aunt Lute Books.
Elenes, C. Alejandra. 1997. “Reclaiming the Borderlands: Chicana/o Identity, Difference, and Critical Pedagogy.” Educational Theory 47:3. 359-75.
Moraga, Cherríe. 1983. Loving in the War Years: Lo Que Nunca Pasó Por Sus Labios. Boston, MA: South End Press.
Ohmann, Richard. 2003. Politics of Knowledge: The Commercialization of the University, the Professions, and Print Culture. Middletown, CT: Wesleyan University Press.
Perez, Laura. 1993. “Opposition and the Education of Chicana/os.” In Race, Identity, and Representation in Education, edited by Cameron McCarthy and Warren Crichlow. New York: Routledge.
Rosaldo, Renato. 1994. “Cultural Citizenship and Educational Democracy.” Cultural Anthropology 9:3. 402-411.
Tryon, Charles. 2006. “Writing and Citizenship: Using Blogs to Teach First-Year Composition.” Pedagogy 6:1. 128-132.
Wan, Amy J. 2011. “In the Name of Citizenship: The Writing Classroom and the Promise of Citizenship.” College English 74. 28-49.
Weisser, Christian R. 2002. Moving Beyond Academic Discourse: Composition Studies and the Public Sphere. Carbondale: Southern Illinois University Press.
Welch, Nancy. 2005. “Living Room: Teaching Public Writing in a Post-Publicity Era.” College Composition and Communication 56:3. 470-492.
“Human creativity and human capacity is limitless,” said the Bangladeshi economist Muhammad Yunus to a darkened room full of rapt Austrian elites. The setting was TEDx Vienna, and Yunus’s address bore all the trademark features of TED’s missionary version of technocratic idealism. “We believe passionately in the power of ideas to change attitudes, lives and, ultimately, the world,” goes the TED mission statement, and this philosophy is manifest in the familiar form of Yunus’s talk (TED.com). The lighting was dramatic, the stage sparse, and the speaker alone on stage, with only his transformative ideas for company. The speech ends with the zealous technophilia that, along with the minimalist stagecraft and quaint faith in the old-fashioned power of lectures, defines this peculiar genre. “This is the age where we all have this capacity of technology,” Yunus declares: “The question is, do we have the methodology to use these capacities to address these problems?… The creativity of human beings has to be challenged to address the problems we have made for ourselves. If we do that, we can create a whole new world—we can create a whole new civilization” (Yunus 2012). Yunus’s conviction that now, finally and for the first time, we can solve the world’s most intractable problems, is not itself new. Instead, what TED Talks like this offer is a new twist on the idea of progress we have inherited from the nineteenth century. And with his particular focus on the global South, Yunus riffs on a form of that old faith, which might seem like a relic of the twentieth: “development.” What is new, then, about Yunus’s articulation of these old faiths? It comes from the TED Talk’s combination of prophetic individualism and technophilia: this is the ideology of “innovation.”
“Innovation”: a ubiquitous word with a slippery meaning. “An innovation is a novelty that sticks,” writes Michael North in Novelty: A History of the New, pointing out the basic ontological problem of the word: if it sticks, it ceases to be a novelty. “Innovation, defined as a widely accepted change,” he writes, “thus turns out to be the enemy of the new, even as it stands for the necessity of the new” (North 2013, 4). Originally a pejorative term for religious heresy, in its common use today “innovation” is used as a synonym for what would have once been called, especially in America, “futurity” or “progress.” In a policy paper entitled “A Strategy for American Innovation,” then-President Barack Obama described innovation as an American quality, in which the blessings of Providence are revealed no longer by the acquisition of territory, but rather by the accumulation of knowledge and technologies: “America has long been a nation of innovators. American scientists, engineers and entrepreneurs invented the microchip, created the Internet, invented the smartphone, started the revolution in biotechnology, and sent astronauts to the Moon. And America is just getting started” (National Economic Council and Office of Science and Technology Policy 2015, 10).
In the Obama administration’s usage, we can see several of the common features of innovation as an economic ideology, some of which are familiar to students of American exceptionalism. First, it is benevolent. Second, it is always “just getting started,” a character of newness constantly being renewed. Third, like “progress” and “development” have been, innovation is a universal, benevolent abstraction made manifest through material, economic accomplishments. But even more than “progress,” which could refer to political and social accomplishments like universal suffrage or the polio vaccine, or “development,” which has had communist and social democratic variants, innovation is inextricable from the privatized market that animates it. For this reason, Obama can treat the state-sponsored moon landing and the iPhone as equivalent achievements. Finally, even if it belongs to the nation, the capacity for “innovation” really resides in the self. Hence Yunus’s faith in “creativity,” and Obama’s emphasis on “innovators,” the protagonists of this heroic drama, rather than the drama itself.
This essay explores the individualistic, market-based ideology of “innovation” as it circulates from the English-speaking first world to the so-called third world, where it supplements, when it does not replace, what was once more exclusively called “development.” I am referring principally to projects that often go under the name of “social innovation” (or, relatedly, “social entrepreneurship”), which Stanford University’s Business School defines as “a novel solution to a social problem that is more effective, efficient, sustainable, or just than current solutions” (Stanford Graduate School of Business). “Social innovation” often advertises itself as “market-based solutions to poverty,” proceeding from the conviction that it is exclusion from the market, rather than the opposite, that causes poverty. The practices grouped under this broad umbrella include projects as different as the micro-lending banks, for which Yunus shared the 2006 Nobel Peace Prize; smokeless, cell-phone-charging cookstoves for South Asia’s rural peasantry; latrines that turn urine into electricity, for use in rural villages without running water; and the edtech academic and TED honoree Sugata Mitra’s “self-organized learning environment” (SOLE), which appears to consist mostly of giving internet-enabled laptops to poor children and calling it a day.
The discourse of social innovation is a theory about economic process and also a story of the (first-world) self. The ideal innovator that emerges from the examples to follow is a flexible, socially autonomous individual, whose creativity and prophetic vision, nurtured by the market, can refashion social inequalities as discrete “problems” that simply await new solutions. Guided by a faith in the market but also shaped by the austerity that has slashed the budgets of humanitarian and development institutions worldwide, social innovation ideology marks a retreat from the social vision of development. Crucially, the ideologues of innovation also answer a post-development critique of Western arrogance with a generous, even democratic spirit. That is, one of the reasons that “innovation” has come to supersede “development” in the vocabulary of many humanitarian and foreign aid agencies is that innovation ideology’s emphasis on individual agency serves as a response to the legitimate charges of condescension and elitism long directed at Euro-American development agencies. But compromising the social vision of development also means jettisoning the ideal of global equality that, however deluded, dishonest, or self-serving it was, also came with it. This brings us to a critical feature of innovation thinking that is often disguised by the enthusiasm of its tech-economy evangelizers: it is in fact a pessimistic ideal of social change. The ideology of innovation, with its emphasis on processes rather than outcomes, and individual brilliance over social structures, asks us to accommodate global inequality, rather than challenge it. It is a kind of idealism, therefore, well suited to our dispiriting neoliberal moment, where the sense of possibility seems to have shrunk.
My objective is not to evaluate these efforts individually, nor even to criticize their practical usefulness as solution-oriented projects (not all of them, anyway). Indeed, in response to the difficult, persistent question, “What is the alternative?” it is easy, and not terribly helpful, to simply answer “world socialism,” or at least “import-substitution industrialization.” My objective is perhaps more modest: to define the ideology of “innovation” that undergirds these projects, and to dissect the Anglo-American ego-ideal that it circulates. As an ideology, innovation is driven by a powerful belief, not only in technology and its benevolence, but in a vision of the innovator: the autonomous visionary whose creativity allows him to anticipate and shape capitalist markets.
An Orthodoxy of Unorthodoxy: Innovation, Revolution, and Salvation
Given the immodesty of the innovator archetype, it may seem odd that innovation ideology could be considered pessimistic. On its own terms, of course, it is not; but when measured against the utopian ambitions and rhetoric of many “social innovators” and technology evangelists, their actual prescriptions appear comparatively paltry. Human creativity is boundless, and everyone can be an innovator, says Yunus; this is the good news. The bad news, unfortunately, is that not everyone can have indoor plumbing or public lighting. Consider the “pee-powered toilet” sponsored by the Gates Foundation. The outcome of inadequate sewerage in the underdeveloped world has not been changed; only the process of its provision has been innovated (Smithers 2015). This combination of evangelical enthusiasm and piecemeal accommodation becomes clearer, however, when we excavate innovation’s tangled history, which, by necessity, the word seems at first glance to lack entirely.
Figure 1. A demonstration toilet, capable of powering a light, or even a mobile phone, at the University of the West of England (photograph: UWE Bristol)
For most of its history, the word has been synonymous with false prophecy and dissent: initially, it was linked to deceitful promises of deliverance, either from divine judgment or more temporal forms of punishment. For centuries, this was the most common usage of this term. The charge of innovation warned against either the possibility or the wisdom of remaking the world, and disciplined those “fickle changelings and poor discontents,” as the King says in Shakespeare’s Henry IV, grasping at “hurly-burly innovation.” Religious and political leaders tarred self-styled prophets or rebels as heretical “innovators.” In his 1634 Institution of the Christian Religion, for example, John Calvin warned that “a desire to innovate all things without punishment moveth troublesome men” (Calvin 1763, 716). Calvin’s notion that innovation was both a political and theological error reflected, of course, his own jealously kept share of temporal and spiritual authority. For Thomas Hobbes, “innovators” were venal conspirators, and innovation a “trumpet of war and sedition.” Distinguishing men from bees—which Aristotle, Hobbes says, wrongly considers a political animal like humans—Hobbes laments the “contestation of honour and preferment” that plagues non-apiary forms of sociality. Bees only “talk” when and how they have to; men and women, by contrast, chatter away in their vanity and ambition (Hobbes 1949, 65-67). The “innovators” of revolutionary Paris, Edmund Burke thundered later, “leave nothing unrent, unrifled, unravaged, or unpolluted with the slime of their filthy offal” (1798, 316-17). Innovation, like its close relative “revolution,” was upheaval, destruction, the reversal of the right order of things.
Figure 2: The Innovation Tango, in The Evening World
As Godin (2015) shows in his history of the concept in Europe, in the late nineteenth century “innovation” began to be recuperated as an instrumental force in the world, which was key to its transformation into the affirmative concept we know now. Francis Bacon, the philosopher and Lord Chancellor under King James I, was what we might call an “early adopter” of this new positive instrumental meaning. How, he asked, could Britons be so reverent of custom and so suspicious of “innovation,” when their Anglican faith was itself an innovation? (Bacon 1844, 32). Instead of being an act of sudden renting, rifling, and heretical ravaging, “innovation” became a process of patient material improvement. By the turn of the last century, the word had mostly lost its heretical associations. In fact, “innovation” was far enough removed from wickedness or malice in 1914 that the dance instructor Vernon Castle invented a modest American version of the tango that year and named it “the Innovation.” The partners never touched each other in this chaste improvement upon the Argentine dance. “It is the ideal dance for icebergs, surgeons in antiseptic raiment and militant moralists,” wrote Marguerite Marshall (1914), a thoroughly unimpressed dance critic in the New York Evening World. “Innovation” was then beginning to assume its common contemporary form in commercial advertising and economics, as a synonym for a broadly appealing, unthreatening modification of an existing product.
Two years earlier, the Austrian-born economist Joseph Schumpeter published his landmark text The Theory of Economic Development, where he first used “innovation” to describe the function of the “entrepreneur” in economic history (1934, 74). For Schumpeter, it was in the innovation process that capitalism’s tendency towards tumult and creative transformation could be seen. He understood innovation historically, as a process of economic transformation, but he also singled out an innovator responsible for driving the process. In his 1942 book Capitalism, Socialism, and Democracy, Schumpeter returned to the idea in the midst of war and the threat of socialism, which gave the concept a new urgency. To innovate, he wrote, was “to reform or revolutionize the pattern of production by exploiting an invention or, more generally, an untried technological possibility for producing a new commodity or producing an old one in a new way, by opening up a new source of supply of materials or a new outlet for products, by reorganizing an industry and so on” (Schumpeter 2003, 132). As Schumpeter goes on to acknowledge, this transformative process is hard to quantify or professionalize. The elusiveness of his theory of innovation comes from a central paradox in his own definition of the word: it is both a world-historical force and a quality of personal agency, both a material process and a moral characteristic. It was a historical process embodied in heroic individuals he called “New Men,” and exemplified in non-commercial examples, like the “expressionist liquidation of the object” in painting (126). To innovate was also to do, at the local level of the production process, what Marx and Engels credit the bourgeoisie as a class for accomplishing historically: revolutionizing the means of production, sweeping away what is old before it can ossify. Schumpeter told a different version of this story, though. For Marx, capitalist accumulation is a dialectical historical process, but what Schumpeter called innovation was a drama driven by a particular protagonist: the entrepreneur.
In a sympathetic 1943 essay about Schumpeter’s theory of innovation, the Marxist economist Paul Sweezy criticized the centrality Schumpeter gave to individual agency. Sweezy’s interest in the concept is unsurprising, given how Schumpeter’s treatment of capitalism as a dynamic but destructive historical force draws upon Marx’s own. It is therefore not “innovation” as a process to which Sweezy objects, but the mythologized figure of the entrepreneurial “innovator,” the social type driving the process. Rather than a free agent powering the economy’s inexorable progress, “we may instead regard the typical innovator as the tool of the social relations in which he is enmeshed and which force him to innovate on pain of elimination,” he writes (Sweezy 1943, 96). In other words, it is capital accumulation, not the entrepreneurial function, and certainly not some transcendent ideal of creativity and genius, that drives innovation. And while the innovator (the successful one, anyway) might achieve a pantomime of freedom within the market, for Sweezy this agency is always provisional, since innovation is a conditional economic practice of historically constituted subjects in a volatile and pitiless market, not a moral quality of human beings. Of course, Sweezy’s critique has not won the day. Instead, a particularly heroic version of the Schumpeterian sense of innovation as a human, moral quality liberated by the turbulence of capitalist markets is a mainstream feature of institutional life. An entire genre of business literature exists to teach the techniques of “managing creativity and innovation in the workplace” (The Institute of Leadership and Management 2007), to uncover the “map of innovation” (O’Connor and Brown 2003), to nurture the “art of innovation” (Kelley 2001), to close the “circle of innovation” (Peters 1999), to collect the recipes in “the innovator’s cookbook” (Johnson 2011), to give you the secrets of “the sorcerers and their apprentices” (Moss 2011)—business writers leave virtually no hackneyed metaphor for entrepreneurial creativity, from the domestic to the occult, untouched.
As its contemporary proliferation shows, innovation has never quite lost its association with redemption and salvation, even if it is no longer used to signify their false promises. As Lepore (2014) has argued about its close cousin, “disruption,” innovation can be thought of as a secular discourse of economic and personal deliverance. Even as the concept became rehabilitated as procedural, its deviant and heretical connotations were common well into the twentieth century, when Emma Goldman (2000) proudly and defiantly described anarchy as an “uncompromising innovator” that enraged the princes and oligarchs of the world. Its seeming optimism, which is inseparable from the disasters from which it promises to deliver us, is thus best considered as a response to a host of persistent anxieties of twenty-first-century life: economic crisis, violence and war, political polarization, and ecological collapse. Yet the word has come to describe the reinvention or recalibration of processes, whether algorithmic, manufacturing, marketing, or otherwise. Indeed, even Schumpeter regarded the entrepreneurial function as basically technocratic. As he put it in one essay, “it consists in getting things done” (Schumpeter 1941, 151).[1] However, as the book titles above make clear, the entrepreneurial function is also a romance. If capitalism was to survive and thrive, Schumpeter suggested, it needed to do more than produce great fortunes: it had to excite the imagination. Otherwise, it would simply calcify into the very routines it was charged with overthrowing. Innovation discourse today remains, paradoxically, both procedural and prophetic. The former meaning lends innovation discourse its piecemeal, solution-oriented accommodation to inequality. In this latter sense, though, the word retains some of the heretical rebelliousness of its origins. We are familiar with the lionization of the tech CEO as a non-conforming or “disruptive” visionary, who sets out to “move fast and break things,” as the famous Facebook motto went. The archetypal Silicon Valley innovator is forward-looking and rebellious, regardless of how we might characterize the results of his or her innovation—a social network, a data mining scheme, or Uber-for-whatever. The dissenting meaning of innovation is at play in the case of social innovation, as well, given its aim to address social inequalities in significant new ways. So, in spite of innovation’s implicit bias towards the new, the history and present-day use of the word remind us that its current meaning is seeded with its older ones. Innovation’s new secular, instrumental meaning is therefore not a break with its older, prohibited, religious connotation, but an embellishment of it: what is described here is a spirit, an ideal, an ideological rescrambling of the word’s older heterodox meaning to suit a new orthodoxy.
The Innovation of Underdevelopment: From Exploitation to Exclusion
In his 1949 inaugural address, which is often credited with popularizing the concept of “development,” Harry Truman called for “a bold new program for making the benefits of our scientific advances and industrial progress available for the improvement and growth of underdeveloped areas” (Truman 1949).[2] “Development” in U.S. modernization theory was defined, writes Nils Gilman, by “progress in technology, military and bureaucratic institutions, and the political and social structure” (2003, 3). It was a post-colonial version of progress that defined itself as universal and placeless; all underdeveloped societies could follow a similar path. As Kristin Ross argues, development in the vein of post-war modernization theory anticipated a future “spatial and temporal convergence” (1996, 11-12). Emerging in the collapse of European colonialism, the concept’s positive value was that it positioned the whole world, south and north, as capable of the same level of social and technical achievement. As Ross suggests, however, the future “convergence” that development anticipates is a kind of Euro-American ego-ideal—the rest of the world’s brightest possible future resembled the present of the United States or western Europe. As Gilman puts it, the modernity development looked forward to was “an abstract version of what postwar American liberals wished their country to be.”
Emerging as it did in the decline, and then in the wake, of Europe’s African, Asian, and American empires, mainstream mid-century writing on development tread carefully around the issue of exploitation. Gunnar Myrdal, for example, was careful to distinguish the “dynamic” term “underdeveloped” from its predecessor, “backwards” (1957, 7). Rather than view the underdeveloped as static wards of more “advanced” metropolitan countries, in other words, the preference was to view all peoples as capable of historical dynamism, even if they occupied different stages on a singular timeline. Popularizers of modernization theory like Walter Rostow described development as a historical stage that could be measured by certain material benchmarks, like per-capita car ownership. But it also required immaterial, subjective cultural achievements, as Josefina Saldaña-Portillo, Jorge Larrain, and Molly Geidel have pointed out. In his well-known Stages of Economic Growth, Rostow emphasized how achieving modernity required the acquisition of what he called “attitudes,” such as a “Newtonian” worldview and an acclimation to “a life of change and specialized function” (1965, 26). His emphasis on cultural attributes—prerequisites for starting development that are also consequences of achieving it—is an example of the development concept’s circular, often self-contradictory meanings. “Development” was both a process and its end point—a nation undergoes development in order to achieve development, something Cowen and Shenton call the “old utilitarian tautology of development” (1996, 4), in which a precondition for achieving development would appear to be its presence at the outset.
This tautology eventually circles back to what Nustad (2007, 40) calls the lingering colonial relationship of trusteeship, the original implication of colonial “development.” For post-colonial critics of developmentalism, the very notion of “development” as a process unfolding in time is inseparable from this colonial relation, given the explicit or implicit Euro-American telos of most, if not all, development models. Where modernization theorists “naturalized development’s emergence into a series of discrete stages,” Saldaña-Portillo (2003, 27) writes, the Marxist economists and historians grouped loosely under the heading of “dependency theory” spatialized global inequality, using a model of “core” and “periphery” economies to counter the model of “traditional” and “modern” ones. Two such theorists, Andre Gunder Frank and Walter Rodney, framed their critiques of development with the grammar of the word itself. Like “innovation,” “development” is a progressive noun, which indicates an ongoing process in time. Its temporal and agential imprecision—when will the process ever end? Can it? Who is in charge?—helps to lend development a sense of moral and political neutrality, which it shares with “innovation.” Frank titled his most famous book on the subject The Development of Underdevelopment, a title that emphasizes that underdevelopment was not a mere absence of development, but capitalist development’s necessary product. Rodney’s book How Europe Underdeveloped Africa did something similar, by making “underdevelop” into a transitive verb, rather than treating “underdevelopment” as a neutral condition.[3]
As Luc Boltanski and Eve Chiapello argue, this language of neutrality became a hallmark of European accounts of global poverty and underdevelopment after the 1960s. Their survey of the economics and development literature tracks the rise of the category of “exclusion” (and its opposite number, “empowerment”) and the gradual disappearance of “exploitation” from economic and humanitarian writing about poverty. No single person, firm, institution, party, or class is responsible for “exclusion,” Boltanski and Chiapello explain. Reframing exploitation as exclusion therefore “permits identification of something negative without proceeding to level accusations. The excluded are no one’s victims” (2007, 347, 354). Exploitation is a circumstance that enriches the exploiter; the poverty that results from exclusion, however, is a misfortune profiting no one. Consider, as an example, the mission statement of the Grameen Foundation, which grew out of Yunus’s Grameen Bank. It remains one of the leading microlenders in the world, devoted to bringing impoverished people in the global South, especially women, into the financial system through the provision of small, low-collateral loans. “Empowerment” and “innovation” are two of its core values. “We champion innovation that makes a difference in the lives of the poor,” runs one plank of the Foundation’s mission statement (Grameen Foundation India nd). “We seek to empower the world’s poor, especially the poorest women.” “Innovation” is often not defined in such statements, but rather treated as self-evidently meaningful. Like “development,” innovation is a perpetually ongoing process, with no clear beginning or end. One undergoes development to achieve development; innovation, in turn, is the pursuit of innovation, and as soon as one innovates, the innovation thus created soon ceases to be an innovation. This wearying semantic circle helps evacuate the process of its power dynamics, of its winners and losers. As Evgeny Morozov (2014, 5) has argued about what he calls “solutionism,” the celebration of technological and design fixes approaches social problems like inequality, infrastructural collapse, inadequate housing, etc.—which might be regarded as results of “exploitation”—as intellectual puzzles for which we simply have to discover the solutions. The problems are not political; rather, they are conceptual: we either haven’t had the right ideas, or else we haven’t applied them right.[4] Grameen’s mission, to bring the world’s poorest into financial markets that currently do not include them, relies on a fundamental presumption: that the global financial system is something you should definitely want to be a part of.[5] But as Banerjee et al. (2015, 23) have argued, to the extent that microcredit programs offer benefits, they mostly accrue to already profitable businesses. The broader social benefits touted by the programs—women’s “empowerment,” more regular school attendance, and so on—were either negligible or non-existent. And as a local government official in the Indian state of Andhra Pradesh told the New York Times in 2010, microloan programs in his district had not proven to be less exploitative than their predecessors, only more remote. “The money lender lives in the community,” he said. “At least you can burn down his house” (Polgreen and Bajaj 2010).
Humanitarian Innovation and the Idea of “The Poor”
Yunus’s TED Talk and the Grameen Foundation’s mission statement draw on the twinned ideal of innovation as procedure and salvation, and in so doing they recapitulate development’s modernist faith in the leveling possibilities of technology, albeit with the individualist, market-based zeal that is particular to neoliberal innovation thinking. “Humanitarian innovation” is a growing subfield of international development theory, which, like “social innovation,” encourages market-based solutions to poverty. Most scholars date the concept to the 2009 fair held by ALNAP (Active Learning Network for Accountability and Performance in Humanitarian Action), an international humanitarian aid agency that measures and evaluates aid programs. Two of its leading academic proponents, Alexander Betts and Louise Bloom of the Oxford Humanitarian Innovation Project (HIP), define it as follows:
“Innovation is the way in which individuals or organizations solve problems and create change by introducing new solutions to existing problems. Contrary to popular belief, these solutions do not have to be technological and they do not have to be transformative; they simply involve the adaptation of a product or process to context. ‘Humanitarian’ innovation may be understood, in turn, as ‘using the resources and opportunities around you in a particular context, to do something different to what has been done before’ to solve humanitarian challenges” (Betts and Bloom 2015, 4).[6]
Here and elsewhere, the HIP hews closely to conventional Schumpeterian definitions of the term, which indeed inform most uses of “innovation” in the private sector and elsewhere: as a means of “solving problems.” Read in this light, “innovation” might seem rather innocuous, even banal: a handy way of naming a human capacity for adaptation, improvisation, and organization. But elsewhere, the authors describe humanitarian innovation as an urgent response to very specific contemporary problems that are political and ecological in nature. “Over the past decade, faced with growing resource constraints, humanitarian agencies have held high hopes for contributions from the private sector, particularly the business community,” they write. Compounding this climate of economic austerity that derives from “growing resource constraints” is an environmental and geopolitical crisis that means “record numbers of people are displaced for longer periods by natural disasters and escalating conflicts.” But despite this combination of violence, ecological degradation, and austerity, there is hope in technology: “new technologies, partners, and concepts allow humanitarian actors to understand and address problems quickly and effectively” (Betts and Bloom 2014, 5-6).
The trope of “exclusion,” and its reliance on a rather anodyne vision of the global financial system as a fair sorter of opportunities and rewards, is crucial to a field that counsels collaboration with the private sector. Indeed, humanitarian innovators adopt a financial vocabulary of “scaling,” “stakeholders,” and “risk” in assessing the dangers and effectiveness (the “costs” and “benefits”) of particular tactics or technologies. In one paper on entrepreneurial activity in refugee camps, de la Chaux and Haugh make an argument in keeping with innovation discourse’s combination of technocratic proceduralism and utopian grandiosity: “Refugee camp entrepreneurs reduce aid dependency and in so doing help to give life meaning for, and confer dignity on, the entrepreneurs,” they write, emphasizing in their first clause the political and economic austerity that conditions the “entrepreneurial” response (2014, 2). Relying on an exclusion paradigm, the authors point to a “lack of functioning markets” as a cause of poverty in the camps. By “lack of functioning markets,” de la Chaux and Haugh mean lack of capital—but “market,” in this framework, becomes simply an institutional apparatus that one enters and within which one is judged on one’s merits, rather than a field of conflict in which one labors in a globalized class society. At the same time, “innovation” that “empowers” the world’s “poorest” also inherits an enduring faith in technology as a universal instrument of progress. One of the preferred terms for this faith is “design”: a form of techne that, two of its most famous advocates argue, “addresses the needs of the people who will consume a product or service and the infrastructure that enables it” (Brown and Wyatt 2010).[7] The optimism of design proceeds from the conviction that systems—water safety, nutrition, etc.—fail because they are designed improperly, without input from their users. De la Chaux addresses how ostensibly temporary camps grow into permanent settlements, using Jordan’s Za’atari refugee camp near the Syrian border as an example. Her elegant solution to the infrastructural problems these under-resourced and overpopulated communities experience? “Include urban planners in the early phases of the humanitarian emergency to design out future infrastructure problems,” as if the political question of resources is merely secondary to technical questions of design and expertise (de la Chaux and Haugh 2014, 19; de la Chaux 2015).
In these examples, we can see once again how the ideal type of the “innovator” or entrepreneur emerges as the protagonist of the historical and economic drama unfolding in the peripheral spaces of the world economy. The humanitarian innovator is a flexible, versatile, pliant, and autonomous individual, whose potential is realized in the struggle for wealth accumulation, but whose private zeal for accumulation is thought to benefit society as a whole.[8] Humanitarian or social innovation discourse emphasizes the agency and creativity of “the poor,” by discursively centering the authority of the “user” or entrepreneur rather than that of the aid agency or the consumer. Individual qualities like purpose, passion, creativity, and serendipity are mobilized in the service of broad social goals. Yet while this sort of individualism is central in the literature of social and humanitarian innovation, it is not itself a radically new “innovation.” It instead recalls a pattern that Molly Geidel has recently traced in the literature and philosophy of the Peace Corps. In Peace Corps memoirs and in the agency’s own literature, she writes, the “romantic desire” for salvation and identification with the excluded “poor” was channeled into the “technocratic language of development” (2015, 64).
Innovation’s emphasis on the intellectual, spiritual, and creative faculties of the individual entrepreneur as historically decisive recapitulates in these especially individualistic terms a persistent thread in Cold War development thinking: its emphasis on cultural transformations as prerequisites for economic ones. At the same time, humanitarian innovation’s anti-bureaucratic ethos of autonomy and creativity is often framed as a critique of “developmentalism” as a practice and an industry. It is a response to criticisms of twentieth-century development as a form of neocolonialism, as too growth-dependent, too detached from local needs, too fixated on big projects, too hierarchical. Consider the development agency UNICEF, whose 2014 “Innovation Annual Report” embraces a vocabulary and funding model borrowed from venture capital. “We knew that we needed to help solve concrete problems experienced by real people,” reads the report, “not just building imagined solutions at our New York headquarters and then deploy them” (UNICEF 2014, 2). Rejecting a hierarchical model of modernization, in which an American developmentalist elite “deploys” its models elsewhere, UNICEF proposes “empowerment” from within. And in place of “development,” as a technical process of improvement from a belated historical and economic position of premodernity, there is “innovation,” the creative capacity responsive to the desires and talents of the underdeveloped.
As in the social innovation model promoted by the Stanford Business School and the ideal of “empowerment” advanced by Grameen, the literature of humanitarian innovation sees “the market” as a neutral field. The conflict between the private sector, the military, and other non-humanitarian actors in the process of humanitarian innovation is mitigated by considering each as an equivalent “stakeholder,” with a shared “investment” in the enterprise and its success; abuse of the humanitarian mission by profit-seeking and military “stakeholders” can be prevented via the fabrication of “best practices” and “voluntary codes of conduct” (Betts and Bloom 2015, 24). One report, produced for ALNAP along with the Humanitarian Innovation Fund, draws on Everett Rogers’s canonical theory of innovation diffusion. Rogers taxonomizes and explains the ways innovative products or methods circulate, from the most forward-thinking “early adopters” to the “laggards” (1983, 247-250). The ALNAP report does grapple with the problems of importing profit-seeking models into humanitarian work, however. “In general,” write Obrecht and Warner (2014, 80-81), “it is important to bear in mind that the objective for humanitarian scaling is improvement to humanitarian assistance, not profit.” Here, the problem is explained as one of “diffusion” and institutional biases in non-profit organizations, not a conflict of interest or a failing of the private market. In the humanitarian sector, they write, “early adopters” of innovations developed elsewhere are comparatively rare, since non-profit workers tend to be biased towards techniques and products they develop themselves. However, as Wendy Brown (2015, 129) has recently argued about the concepts of “best practices” and “benchmarking,” the problem is not necessarily that the goals being set or practices being emulated are intrinsically bad. The problem lies in “the separation of practices from products,” or in other words, the notion that organizational practices translate seamlessly across business, political, and knowledge enterprises, and that different products—market dominance, massive profits, reliable electricity in a rural hamlet, basic literacy—can be accomplished via practices imported from the business world.
Again, my objective here is not to evaluate the success of individual initiatives pursued under this rubric, nor to castigate individual humanitarian aid projects as irredeemably “neoliberal” and therefore beyond the pale. To do so basks a bit too easily in the comfort of condemnation that the pejorative “neoliberal” offers the social critic, and it runs the risk, as Ferguson (2009, 169) writes, of nostalgia for the era of “old-style developmental states,” which were mostly capitalist as well, after all.[9] Instead, my point is to emphasize the political work that “innovation” as a concept does: it depoliticizes the resource scarcity that makes it seem necessary in the first place by treating the private market as a neutral arbiter or helpful partner rather than an exploiter, and it does so by disavowing the power of a Western subject through the supposed humility and democratic patina of its rhetoric. For example, USAID’s Development Innovation Ventures program, which seeds projects that will win support from private lenders later, stipulates that “applicants must explain how they will use DIV funds in a catalytic fashion so that they can raise needed resources from sources other than DIV” (USAID 2017). The hoped-for innovation here, it would seem, is the skill with which the applicants accommodate the scarcity of resources, and the facility with which they commercialize their project. One funded project, an initiative to encourage bicycle-helmet use in Cambodia, “has the potential to save the Cambodian government millions of dollars over the next 10 years,” the description proclaims. But obviously, just because something saves the Cambodian government millions doesn’t mean there is a net gain for the health and safety of Cambodians. It could simply allow the Cambodian government to give more money away to private industry or buy $10 million worth of new weapons to police the Laotian border. “Innovation,” here, requires an adjustment to austerity.
Adjustment, often reframed positively as “resilience,” is a key concept in this literature. In another report, Betts, Bloom, and Weaver (2015, 8) single out a few exemplary innovators from the informal economy of the displaced person’s camp. They include tailors in a Syrian camp’s outdoor market; the Somali owner of an internet café in a Kenyan refugee camp; an Ethiopian man who repairs refrigerators with salvaged air conditioners and fans; and a man who built a video-game arcade in a Ugandan settlement near the Rwandan border. This man, identified only as Abdi, has amassed a collection of second-hand televisions and game consoles he acquired in Kampala, the Ugandan capital. “Instead of waiting for donors I wanted to make a living,” says Abdi in the report, exemplifying the values of what Betts, Bloom, and Weaver call “bottom-up innovation” by the refugee entrepreneur. Their assessment is a generous one that embraces the ingenuity and knowledge of displaced and impoverished people affected by crisis. Top-down or “sector-wide” development aid, they write, “disregards the capabilities and adaptive resourcefulness that people and communities affected by conflict and disaster often demonstrate” (2015, 2). In this report, refugees are people of “great resilience,” whose “creativity” makes them “change makers.” As Julian Reid and Brad Evans write, we apply the word “resilient” to a population “insofar as it adapts to rather than resists the conditions of its suffering in the world” (2014, 81). The discourse of humanitarian innovation makes the same concession to the inevitability of the structural conditions that make such resilience necessary in the first place. Nowhere is it suggested that refugee capitalists might be other than benevolent, or that inclusion in circuits of national and transnational capital might exacerbate existing inequalities, rather than transcend them. Furthermore, humanitarian innovation advocates never argue that market-based product and service “innovation” is, in a refugee context, beneficial to the whole; given the paucity of employment and services in affected communities, this would at least be an arguable point. The problem is that the question is never even asked. The market is like oxygen.
Conclusion: The TED Talk and the Innovation Romance
In 2003, I visited a recently settled barrio—one could call it a “shantytown”—perched on a hillside high above the east side of Caracas. I remember vividly a wooden, handmade press, ringed with barbed wire scavenged from a nearby business, that its owner, a middle-aged woman newly arrived in the capital, used to crush sugar cane into juice. It was certainly an innovation, by any reasonable definition: a novel, creative solution to a problem of scarcity, a new process for doing something. I remember being deeply impressed by the device, which I found brilliantly ingenious. What I never thought to call it, though, was a “solution” to its owner’s poverty. Nor, I am sure, did she; she lived in a hard-core chavista neighborhood, where dispossessing the country’s “oligarchs” would have been offered as a better innovation—in the old Emma Goldman sense. The point, then, is not that individual ingenuity, creativity, fearlessness, hard work, and resistance to the impossible demands that transnational capital has placed on people like the video-game entrepreneur in Uganda, or that woman in Caracas, are disreputable things to single out and praise. Quite the contrary: my objection is to the capitulation to their exploitation that is smuggled in with this admiration.
I have argued that “innovation” is, at best, a vague concept asked to accommodate far too much in its combination of heroic and technocratic meanings. Innovation, in its modern meaning, is about revolutionizing “process” and technique: this often leaves outcomes unexamined and unquestioned. The outcome of that innovative sugar cane press in Caracas is still a meager income selling juice in a perilous informal marketplace. The promiscuity of innovation’s use also makes it highly mobile and subject to abuse, as even enthusiastic users of the concept, like Betts and Bloom at the Oxford Humanitarian Innovation Project, acknowledge. As they caution, “use of the term in the humanitarian system has lacked conceptual clarity, leading to misuse, overuse, and the risk that it may become hollow rhetoric” (2014, 5). I have also argued that innovation, especially in the context of neoliberal development, must be understood in moral terms, as it makes a virtue of private accumulation and accommodation to scarcity, and it circulates an ego-ideal of the first-world self to an audience of its admirers. It is also an ideological celebration of what Harvey calls the neoliberal alignment of individual well-being with unregulated markets, and what Brown calls “the economization of the self” (2015, 33). Finally, as a response to the enduring crises of third-world poverty, exacerbated by the economic and ecological dangers of the twenty-first century, the language of innovation beats a pessimistic retreat from the ideal of global equality that, in theory at least, development in its manifold forms always held out as its horizon.
Innovation discourse draws on deep wells—its moral claim is not new, as a reader of The Protestant Ethic and the Spirit of Capitalism will observe. Inspired in part by the example of Benjamin Franklin’s autobiography, Max Weber argued that capitalism in its ascendancy reimagined profit-seeking activities, which might once have been described as avaricious or vulgar, as a virtuous “ethos” (2001, 16-17). Capitalism’s challenge to tradition, Weber argued, demanded some justification; reframing business as a calling or a vocation could help provide one. Capitalism in our time still demands validation not only as a virtuous discipline, but as an enterprise devoted to serving the “common good,” write Boltanski and Chiapello. As they say, “an existence attuned to the requirements of accumulation must be marked out for a large number of actors to deem it worth the effort of being lived” (2007, 10-11). “Innovation” as an ideology marks out this sphere of purposeful living for the contemporary managerial classes. Here, again, the word’s close association with “creativity” is instrumental, since creativity is often thought to be an intrinsic, instinctual human behavior. “Innovating” is therefore not only a business practice that will, as Franklin argued about his own industriousness, improve oneself in the eyes of both man and God. It is also a secular expression of the most fundamental individual and social features of the self—the impulse to understand and to improve the world. This is particularly evident in the discourse of social innovation, which the Social Innovation Lab at Stanford defines as a practice that aims to leverage the private market to solve modern society’s most intractable “problems”: housing, pollution, hunger, education, and so on. When something like world hunger is described as a “problem” in this way, though, international food systems, agribusiness, international trade, land ownership, and other sources of malnutrition disappear. Structures of oppression and inequality simply become discrete “problems” for which no one has yet invented the fix. They are individual nails in search of a hammer, and the social innovator is quite confident that a hammer exists for hunger.
Microfinance is another one of these hammers. As one economist critical of the microcredit system notes at the beginning of his own book on the subject, “most accounts of microfinance—the large-scale, businesslike provision of financial services to poor people—begin with a story” (Roodman 2012, 1). These are usually narratives of an encounter with a sympathetic third-world subject. For Roodman, the microfinancial stories of hardship and transcendence have a seductive power over their first-world audiences, of which he is legitimately suspicious. As we saw above, Schumpeter’s procedural “entrepreneurial function” is itself also a story of a creative entrepreneur navigating the tempests of modern capitalism. In the postmodern romance of social innovation in the “underdeveloped” world, the Western subject of the drama is both ever-present and constantly disavowed. The TED Talk, with which we began, is in its crude way the most expressive genre of this contemporary version of the entrepreneurial romance.
Rhetorically transformative but formally archaic—what could be less innovative than a lecture?—the genre of the social innovation TED Talk models innovation ideology’s combination of grandiosity and proceduralism, even as its strict generic conventions—so often and easily parodied—repeatedly undermine the speakers’ regular claims to transcendent breakthroughs. For example, in his TEDx Montreal address, Ethan Kay (2012) begins in the conventional way: with a dire assessment of a monumental, yet easily overlooked, social problem in a third-world country. “If we were to think about the biggest problems affecting our world,” Kay begins, “any socially conscious person would have to include poverty, disease, and climate change. And yet there is one thing that causes all three of these simultaneously, that we pay almost no attention to, even though a very good solution exists.” Once the scope of the problem is established, next comes the sentimental identification. The knowledge of this social problem is only possible because of the hospitality and insight of some poor person abroad, something familiar from Geidel’s reading of Peace Corps memoirs and Roodman’s microcredit stories: in Kay’s case, it is in the unelectrified “hut” of a rural Indian woman where, choking on cooking smoke, he realizes the need for a clean-burning indoor cookstove. Then comes the self-deprecating joke, in which the speaker acknowledges his early naivete and establishes his humble capacity for self-reflection. (“I’m just a guy from Cleveland, Ohio, who has trouble cooking a grilled-cheese sandwich,” says Kay, winning a few reluctant laughs.) And then comes the technocratic solution: when the insight thus acquired is subjected to the speaker’s reason and empathy, the deceptively simple and yet world-making “solution” emerges. Despite the prominent formal place of the underdeveloped character in this genre, the teller of the innovation story inevitably ends up the hero. The throat-clearing self-seriousness, the ritualistic gestures of humility, the promise to the audience of transformative change without inconvenient political consequences, and the faith in technology as a social leveler all perform the TED Talk’s ego-ideal of social “innovation.”
One of the most successful social innovation TED Talks is Mitra’s tale of the “self-organized learning environment” (SOLE). Mitra won a $1 million prize from TED in 2013 for a talk based on his “hole-in-the-wall” experiment in New Delhi, which tests poor children’s ability to learn autonomously, guided only by internet-enabled laptops and cloud-based adult mentors abroad (Ted.com 2013). Mitra’s idea was an excellent example of innovation discourse’s combination of the procedural and the prophetic. In the case of the latter, he begins: “There was a time when Stone Age men and women used to sit and look up at the sky and say, ‘What are those twinkling lights?’ They built the first curriculum, but we’ve lost sight of those wondrous questions” (Mitra 2013). What gets us to this lofty goal, however, is a comparatively simple process. True to genre, Mitra describes the SOLE as the fruit of a serendipitous discovery. After cutting a hole in the wall that separated his technology firm’s offices from an adjoining New Delhi slum, Mitra and his colleagues placed an internet-enabled computer in the new common area. When he returned weeks later, Mitra found local children using it expertly. Leaving unsupervised children in a room with a laptop, it turns out, activates innate capacities for self-directed learning stifled by conventional schooling. Mitra promises a cost-effective solution to the problem of primary and secondary education in the developing world—do virtually nothing. “This is done by children without the help of any teacher,” Mitra confidently concludes, sharing a PowerPoint slide of the students’ work. “The teacher only raises the question, and then stands back and admires the answer.”
Given innovation’s religious origins in false prophecy, its current orthodoxy in the discourse of technological evangelism—and, more broadly, in analog versions of social innovation—often reads as a nearly literal example of Rayvon Fouché’s argument that the formerly colonized, “once attended to by bibles and missionaries, now receive the proselytizing efforts of computer scientists wielding integrated circuits in the digital age” (2012, 62). One of the additional ironies of contemporary innovation ideology, though, is that the populations exploited by global capitalism are increasingly charged with redeeming it—the comfortable denizens of the West need only “stand back and admire” the process driven by the entrepreneurial labor of the newly digital underdeveloped subject. To the pain of unemployment, the selfishness of material pursuits, the exploitation of most of humanity by a fraction, the specter of environmental cataclysm that stalks our future and haunts our imagination, and the scandal of illiteracy, market-driven innovation projects like Mitra’s “hole in the wall” offer next to nothing, while claiming to offer almost everything.
_____
John Patrick Leary is associate professor of English at Wayne State University in Detroit and a visiting scholar in the Program in Literary Theory at the Universidade de Lisboa in Portugal in 2019. He is the author of A Cultural History of Underdevelopment: Latin America in the U.S. Imagination (Virginia 2016) and Keywords: The New Language of Capitalism, forthcoming in 2019 from Haymarket Books. He blogs about the language and culture of contemporary capitalism at theageofausterity.wordpress.com.
[1] “The entrepreneur and his function are not difficult to conceptualize,” Schumpeter writes: “the defining characteristic is simply the doing of new things or the doing of things that are already being done in a new way (innovation).”
[2] The term “underdeveloped” was only a bit older: it first appeared in “The Economic Advancement of Under-developed Areas,” a 1942 pamphlet on colonial economic planning by a British economist, Wilfrid Benson.
[3] I explore this semantic and intellectual history in more detail in my book, A Cultural History of Underdevelopment (Leary, 4-10).
[4] Morozov describes solutionism as an ideology that sanctions the following delusion: “Recasting all complex social situations either as neatly defined problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized—if only the right algorithms are in place!”
[5] “Although the number of unbanked people globally dropped by half a billion from 2011 to 2014,” reads a Foundation web site’s entry under the tab “financial services”, “two billion people are still locked out of formal financial services.” One solution to this problem focuses on Filipino convenience stores, called “sari-sari” stores: “In a project funded by the JPMorgan Chase Foundation, Grameen Foundation is empowering sari-sari store operators to serve as digital financial service agents to their customers.” Clearly, the project must result not only in connecting customers to financial services, but in opening up new markets to JP Morgan Chase. See “Alternative Channels.”
[6] This quoted definition of “humanitarian innovation” is attributed to an interview with an unnamed international aid worker.
[7] Erickson (2015, 113-14) writes that “design thinking” in public education “offers the illusion that structural and institutional problems can be solved through a series of cognitive actions…” She calls it “magic, the only alchemy that matters.”
[8] A management-studies article on the growth of so-called “innovation prizes” for global development claimed sunnily that at a recent conference devoted to such incentives, “there was a sense that society is on the brink of something new, something big, and something that has the power to change the world for the better” (Everett, Wagner, and Barnett 2012, 108).
[9] “It is here that we have to look more carefully at the ‘arts of government’ that have so radically reconfigured the world in the last few decades,” writes Ferguson, “and I think we have to come up with something more interesting to say about them than just that we’re against them.” Ferguson points out that neoliberalism in Africa—the violent disruption of national markets by imperial capital—looks much different than it does in western Europe, where it is usually treated as a form of political rationality or an “art of government” modeled on markets. It is the political rationality, as it is formed through an encounter with the “third world” object of imperial neoliberal capital, that is my concern here.
_____
Works Cited
Bacon, Francis. 1844. The Works of Francis Bacon, Lord Chancellor of England. Vol. 1. London: Carey and Hart.
Banerjee, Abhijit, et al. 2015. “The Miracle of Microfinance? Evidence from a Randomized Evaluation.” American Economic Journal: Applied Economics 7:1.
Betts, Alexander, Louise Bloom, and Nina Weaver. 2015. “Refugee Innovation: Humanitarian Innovation That Starts with Communities.” Humanitarian Innovation Project, University of Oxford.
Betts, Alexander and Louise Bloom. 2014. “Humanitarian Innovation: The State of the Art.” OCHA Policy and Studies Series.
Boltanski, Luc and Eve Chiapello. 2007. The New Spirit of Capitalism. Translated by Gregory Elliot. New York: Verso.
De la Chaux, Marlen and Helen Haugh. 2014. “Entrepreneurship and Innovation: How Institutional Voids Shape Economic Opportunities in Refugee Camps.” Judge Business School, University of Cambridge.
Erickson, Megan. 2015. Class War: The Privatization of Childhood. New York: Verso.
Everett, Bryony, Erika Wagner, and Christopher Barnett. 2012. “Using Innovation Prizes to Achieve the Millennium Development Goals.” Innovations: Technology, Governance, Globalization 7:1.
Ferguson, James. 2009. “The Uses of Neoliberalism.” Antipode 41:S1.
Fouché, Rayvon. 2012. “From Black Inventors to One Laptop Per Child: Exporting a Racial Politics of Technology.” In Race after the Internet, edited by Lisa Nakamura and Peter Chow-White. New York: Routledge. 61-84.
Frank, Andre Gunder. 1991. The Development of Underdevelopment. Stockholm, Sweden: Bethany Books.
Geidel, Molly. 2015. Peace Corps Fantasies: How Development Shaped the Global Sixties. Minneapolis: University of Minnesota Press.
Gilman, Nils. 2003. Mandarins of the Future: Modernization Theory in Cold War America. Baltimore: Johns Hopkins University Press.
Godin, Benoit. 2015. Innovation Contested: The Idea of Innovation Over the Centuries. New York: Routledge.
Morozov, Evgeny. 2014. To Save Everything, Click Here: The Folly of Technological Solutionism. New York: Public Affairs.
Moss, Frank. 2011. The Sorcerers and Their Apprentices: How the Digital Magicians of the MIT Media Lab Are Creating the Innovative Technologies that Will Transform Our Lives. New York: Crown Business.
National Economic Council and Office of Science and Technology Policy. 2015. “A Strategy for American Innovation.” Washington, DC: The White House.
North, Michael. 2013. Novelty: A History of the New. Chicago: University of Chicago Press.
Nustad, Knut G. 2007. “Development: The Devil We Know?” In Exploring Post-Development: Theory and Practice, Problems and Perspectives, edited by Aram Ziai. London: Routledge. 35-46.
Obrecht, Alice and Alexandra T. Warner. 2014. “More than Just Luck: Innovation in Humanitarian Action.” London: ALNAP/ODI.
O’Connor, Kevin and Paul B. Brown. 2003. The Map of Innovation: Creating Something Out of Nothing. New York: Crown.
Peters, Tom. 1999. The Circle of Innovation: You Can’t Shrink Your Way to Greatness. New York: Vintage.
Polgreen, Lydia and Vikas Bajaj. 2010. “India Microcredit Faces Collapse From Defaults.” The New York Times (Nov 17).
Reid, Julian and Brad Evans. 2014. Resilient Life: The Art of Living Dangerously. New York: John Wiley and Sons.
Rodney, Walter. 1981. How Europe Underdeveloped Africa. Washington, DC: Howard University Press.
Rogers, Everett M. 1983. Diffusion of Innovations. Third edition. New York: The Free Press.
Roodman, David. 2012. Due Diligence: An Impertinent Inquiry into Microfinance. Washington, DC: Center for Global Development.
Ross, Kristin. 1996. Fast Cars, Clean Bodies: Decolonization and the Reordering of French Culture. Cambridge, MA: The MIT Press.
Rostow, Walter. 1965. The Stages of Economic Growth: A Non-Communist Manifesto. New York: Cambridge University Press.
Saldaña-Portillo, Josefina. 2003. The Revolutionary Imagination in the Americas and the Age of Development. Durham, NC: Duke University Press.
Schumpeter, Joseph. 1934. The Theory of Economic Development. Cambridge, MA: Harvard University Press.
Schumpeter, Joseph. 1941. “The Creative Response in Economic History.” The Journal of Economic History 7:2.
Schumpeter, Joseph. 2003. Capitalism, Socialism, and Democracy. London: Routledge.
Seiter, Ellen. 2005. The Internet Playground: Children’s Access, Entertainment, and Miseducation. New York: Peter Lang.
Shakespeare, William. 2005. Henry IV. New York: Bantam Classics.
By Audrey Watters
~
After decades of explosive growth, the future of for-profit higher education might not be so bright. Or, depending on where you look, it just might be…
In recent years, there have been a number of investigations – in the media, by the government – into the for-profit college sector and questions about these schools’ ability to effectively and affordably educate their students. Sure, advertising for for-profits is still plastered all over the Web, the airwaves, and public transportation, but as a result of journalistic and legal pressures, the lure of these schools may well be a lot less powerful. If nothing else, enrollment and profits at many for-profit institutions are down.
Despite the massive amounts of money spent by the industry to prop it up – not just on ads but on lobbying and legal efforts – the Obama Administration has made cracking down on for-profits a centerpiece of its higher education policy, accusing these schools of luring students with misleading and overblown promises, often leaving them with low-status degrees sneered at by employers and with loans they can’t afford to pay back.
But the Obama Administration has also just launched an initiative that will make federal financial aid available to newcomers in the for-profit education sector: ed-tech experiments like “coding bootcamps” and MOOCs. Why are these particular for-profit experiments deemed acceptable? What do they do differently from the much-maligned for-profit universities?
School as “Skills Training”
In many ways, coding bootcamps do share the justification for their existence with for-profit universities. That is, they were founded in order to help meet the (purported) demands of the job market: training people in certain technical skills, particularly those that meet the short-term needs of employers. Whether they meet students’ long-term goals remains to be seen.
I write “purported” here even though it’s quite common to hear claims that the economy is facing a “STEM crisis” – that too few people have studied science, technology, engineering, or math and employers cannot find enough skilled workers to fill jobs in those fields. But claims about a shortage of technical workers are debatable, and lots of data would indicate otherwise: wages in STEM fields have remained flat, for example, and many who graduate with STEM degrees cannot find work in their field. In other words, the crisis may be “a myth.”
But it’s a powerful myth, and one that isn’t terribly new, dating back at least to the launch of the Sputnik satellite in 1957 and subsequent hand-wringing over the Soviets’ technological capabilities and technical education as compared to the US system.
There are actually a number of narratives – some of them competing narratives – at play here in the recent push for coding bootcamps, MOOCs, and other ed-tech initiatives: that everyone should go to college; that college is too expensive – “a bubble” in the Silicon Valley lexicon; that alternate forms of credentialing will be developed (by the technology sector, naturally); that the tech sector is itself a meritocracy, and college degrees do not really matter; that earning a degree in the humanities will leave you unemployed and burdened by student loan debt; that everyone should learn to code. Much like that supposed STEM crisis and skill shortage, these narratives might be powerful, but they too are hardly provable.
Nor is the promotion of a more business-focused education all that new.
Foster’s Commercial School of Boston, founded in 1832 by Benjamin Franklin Foster, is often recognized as the first school established in the United States for the specific purpose of teaching “commerce.” Many other commercial schools opened on its heels, most located in the Atlantic region in major trading centers like Philadelphia, Boston, New York, and Charleston. As the country expanded westward, so did these schools. Bryant & Stratton College was founded in Cleveland in 1854, for example, and it established a chain of schools, promising to open a branch in every American city with a population of more than 10,000. By 1864, it had opened more than 50, and the chain is still in operation today with 18 campuses in New York, Ohio, Virginia, and Wisconsin.
The curriculum of these commercial colleges was largely shaped by the demands of local employers and by an economy being transformed by the Industrial Revolution. Schools offered courses in bookkeeping, accounting, penmanship, surveying, and stenography. This was in marked contrast to those universities built on a European model, which tended to teach topics like theology, philosophy, and classical language and literature. If those universities were “elitist,” the commercial colleges were “popular” – there were over 70,000 students enrolled in them in 1897, compared to just 5800 in colleges and universities – a contrast that highlights a refrain still familiar today: that traditional higher ed institutions do not meet everyone’s needs.
The commercial colleges became intertwined with many of the success stories of the nineteenth century: Andrew Carnegie attended night school in Pittsburgh to learn bookkeeping, and John D. Rockefeller studied banking and accounting at Folsom’s Commercial College in Cleveland. The type of education offered at these schools was promoted as a path to becoming a “self-made man.”
That’s the story that still gets told: these sorts of classes open up opportunities for anyone to gain the skills (and perhaps the certification) that will enable upward mobility.
It’s a story echoed in the ones told about (and by) John Sperling as well. Born into a working class family, Sperling worked as a merchant mariner, then attended community college during the day and worked as a gas station attendant at night. He later transferred to Reed College, went on to UC Berkeley, and completed his doctorate at Cambridge University. But Sperling felt as though these prestigious colleges catered to privileged students; he wanted a better way for working adults to be able to complete their degrees. In 1976, he founded the University of Phoenix, one of the largest for-profit colleges in the US, which at its peak in 2010 enrolled almost 600,000 students.
Other well-known names in the business of for-profit higher education: Walden University (founded in 1970), Capella University (founded in 1993), Laureate Education (founded in 1999), DeVry University (founded in 1931), Education Management Corporation (founded in 1962), Strayer University (founded in 1892), Kaplan University (founded in 1937 as The American Institute of Commerce), and Corinthian Colleges (founded in 1995 and defunct in 2015).
It’s important to recognize the connection of these for-profit universities to older career colleges, and it would be a mistake to see these organizations as distinct from the more recent development of MOOCs and coding bootcamps. Kaplan, for example, acquired the code school Dev Bootcamp in 2014. Laureate Education is an investor in the MOOC provider Coursera. The Apollo Education Group, the University of Phoenix’s parent company, is an investor in the coding bootcamp The Iron Yard.
Much as with today’s for-profit universities, even the earliest commercial colleges were frequently accused of being “purely business speculations” – “diploma mills” – mishandled by administrators who put the bottom line over the needs of students. There were concerns about the quality of instruction and about the value of the education students were receiving.
That’s part of the apprehension about for-profit universities’ most recent manifestations too: that these schools are charging a lot of money for a certification that, at the end of the day, means little. But at least the nineteenth-century commercial colleges were affordable, UC Berkeley history professor Caitlin Rosenthal argues in a 2012 op-ed in Bloomberg:
The most common form of tuition at these early schools was the “life scholarship.” Students paid a lump sum in exchange for unlimited instruction at any of the college’s branches – $40 for men and $30 for women in 1864. This was a considerable fee, but much less than tuition at most universities. And it was within reach of most workers – common laborers earned about $1 per day and clerks’ wages averaged $50 per month.
Many of these “life scholarships” promised that students who enrolled would land a job – and if they didn’t, they could always continue their studies. That’s quite different from the tuition at today’s colleges – for-profit or not-for-profit – which comes with no such guarantee.
Interestingly, several coding bootcamps do make this promise. A 48-week online program at Bloc will run you $24,000, for example. But if you don’t find a job that pays $60,000 after four months, your tuition will be refunded, the startup has pledged.
According to a recent survey of coding bootcamp alumni, 66% of graduates do say they’ve found employment (63% of them full-time) in a job that requires the skills they learned in the program. 89% of respondents say they found a job within 120 days of completing the bootcamp. Yet 21% say they’re unemployed – a number that seems quite high, particularly in light of that supposed shortage of programming talent.
For-Profit Higher Ed: Who’s Being Served?
The gulf between for-profit higher ed’s promise of improved job prospects and the realities of graduates’ employment, along with the price of its tuition, is one of the reasons that the Obama Administration has advocated for “gainful employment” rules. These would measure and monitor the debt-to-earnings ratio of graduates from career colleges and in turn penalize those schools whose graduates had annual loan payments of more than 8% of their wages or 20% of their discretionary earnings. (The gainful employment rules only apply to those schools that are eligible for Title IV federal financial aid.)
The data is still murky about how much debt attendees at coding bootcamps accrue and how “worth it” these programs really might be. According to the aforementioned survey, the average tuition at these programs is $11,852. That figure might be a bit misleading, as the price tag and the length of bootcamps vary greatly. Moreover, many programs, such as App Academy, offer their program for free (well, plus a $5000 deposit) but then require that graduates pay up to 20% of their first year’s salary back to the school. So while the tuition might appear to be low in some cases, the indebtedness might actually be quite high.
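A rough, back-of-the-envelope illustration suggests how. The $60,000 salary below is purely an assumption for the sake of the arithmetic (it is the threshold Bloc uses in its refund pledge, not a figure App Academy reports):

$$0.20 \times \$60{,}000 = \$12{,}000 > \$11{,}852 \ \text{(the survey’s average up-front tuition)}$$

On those assumed numbers, a deferred-tuition graduate could end up paying the school more than the average bootcamp charges up front, before even counting the $5000 deposit.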
According to Course Report’s survey, 49% of graduates say that they paid tuition out of their own pockets, 21% say they received help from family, and just 1.7% say that their employer paid (or helped with) the tuition bill. Almost 25% took out a loan.
That percentage – those going into debt for a coding bootcamp program – has increased quite dramatically over the last few years. (Less than 4% of graduates in the 2013 survey said that they had taken out a loan.) In part, that’s due to the rapid expansion of the private loan industry geared towards serving this particular student population. (Incidentally, the two ed-tech companies which have raised the most money in 2015 are both loan providers: SoFi and Earnest. The former has raised $1.2 billion in venture capital this year; the latter $245 million.)
The Obama Administration’s newly proposed “EQUIP” experiment will open up federal financial aid to some coding bootcamps and other ed-tech providers (like MOOC platforms), but it’s important to underscore some of the key differences here between federal loans and private-sector loans: federal student loans don’t have to be repaid until you graduate or leave school; federal student loans offer forbearance and deferment if you’re struggling to make payments; federal student loans have a fixed interest rate, often lower than private loans; federal student loans can be forgiven if you work in public service; federal student loans (with the exception of PLUS loans) do not require a credit check. The latter in particular might help to explain the demographics of those who are currently attending coding bootcamps: if they’re having to pay out-of-pocket or take loans, students are much less likely to be low-income. Indeed, according to Course Report’s survey, the cost of the bootcamps and whether or not they offered a scholarship was one of the least important factors when students chose a program.
[Figure: coding bootcamp graduates’ demographic data, as self-reported]
It’s worth considering how the demographics of students in MOOCs and coding bootcamps may (or may not) be similar to those enrolled at other for-profit post-secondary institutions, particularly since all of these programs tend to invoke the rhetoric about “democratizing education” and “expanding access.” Access for whom?
Some two million students were enrolled in for-profit colleges in 2010, up from 400,000 a decade earlier. These students are disproportionately older, African American, and female when compared to the entire higher ed student population. While one in 20 of all students is enrolled in a for-profit college, one in 10 African American students, one in 14 Latino students, and one in 14 first-generation college students are enrolled at a for-profit. Students at for-profits are more likely to be single parents. They’re less likely to enter with a high school diploma. Dependent students in for-profits have about half as much family income as students in not-for-profit schools. (This demographic data is drawn from the NCES and from Harvard University researchers David Deming, Claudia Goldin, and Lawrence Katz in their 2013 study on for-profit colleges.)
Deming, Goldin, and Katz argue that
The snippets of available evidence suggest that the economic returns to students who attend for-profit colleges are lower than those for public and nonprofit colleges. Moreover, default rates on student loans for proprietary schools far exceed those of other higher-education institutions.
According to one 2010 report, just 22% of first-time, full-time students pursuing Bachelor’s degrees at for-profit colleges in 2008 graduated, compared to 55% and 65% of students at public and private non-profit universities respectively. Of the more than 5000 career programs that the Department of Education tracks, 72% of those offered by for-profit institutions produce graduates who earn less than high school dropouts.
For their part, today’s MOOCs and coding bootcamps also boast that their students will find great success on the job market. Coursera, for example, recently surveyed students who’d completed one of its online courses, and 72% of those who responded said they had experienced “career benefits.” But without the mandated reporting that comes with federal financial aid, a lot of what we know about their student population and student outcomes remains pretty speculative.
What kind of students benefit from coding bootcamps and MOOC programs, the new for-profit education? We don’t really know… although based on the history of higher education and employment, we can guess.
EQUIP and the New For-Profit Higher Ed
On October 14, the Obama Administration announced a new initiative, the Educational Quality through Innovative Partnerships (EQUIP) program, which will provide a pathway for unaccredited education programs like coding bootcamps and MOOCs to become eligible for federal financial aid. According to the Department of Education, EQUIP is meant to open up “new models of education and training” to low income students. In a press release, it argues that “Some of these new models may provide more flexible and more affordable credentials and educational options than those offered by traditional higher institutions, and are showing promise in preparing students with the training and education needed for better, in-demand jobs.”
The EQUIP initiative will partner accredited institutions with third-party providers, loosening the “50% rule” that prohibits accredited schools from outsourcing more than 50% of an accredited program. Since bootcamps and MOOC providers “are not within the purview of traditional accrediting agencies,” the Department of Education says, “we have no generally accepted means of gauging their quality.” So those organizations that apply for the experiment will have to provide an outside “quality assurance entity,” which will help assess “student outcomes” like learning and employment.
With financial aid becoming available for bootcamps and MOOCs, one does have to wonder whether the Obama Administration isn’t simply opening the doors for more of precisely the sort of practices that the for-profit education industry has long been accused of: expanding rapidly, lowering the quality of instruction, focusing on marketing to certain populations (such as veterans), and profiting off of taxpayer dollars.
Who benefits from the availability of aid? And who benefits from its absence? (“Who” here refers to students and to schools.)
Shawna Scott argues in “The Code School-Industrial Complex” that without oversight, coding bootcamps re-inscribe the dominant beliefs and practices of the tech industry. Despite all the talk of “democratization,” this is a new form of gatekeeping.
Before students are even accepted, school admission officers often select for easily marketable students, which often translates to students with the most privileged characteristics. Whether through intentionally targeting those traits because it’s easier to ensure graduates will be hired, or because of unconscious bias, is difficult to discern. Because schools’ graduation and employment rates are their main marketing tool, they have a financial stake in only admitting students who are at low risk of long-term unemployment. In addition, many schools take cues from their professional developer founders and run admissions like they hire for their startups. Students may be subjected to long and intensive questionnaires, phone or in-person interviews, or be required to submit a ‘creative’ application, such as a video. These requirements are often onerous for anyone working at a paid job or as a caretaker for others. Rarely do schools proactively provide information on alternative application processes for people of disparate ability. The stereotypical programmer is once again the assumed default.
And so, despite the recent moves to sanction certain ed-tech experiments, some in the tech sector have been quite vocal in their opposition to more regulations governing coding schools. It’s not just EQUIP either; there was much outcry last year after several states, including California, “cracked down” on bootcamps. Many others have framed the entire accreditation system as a “cabal” that stifles innovation. “Innovation” in this case implies alternate certificate programs – not simply Associate’s or Bachelor’s degrees – in timely, technical topics demanded by local/industry employers.
Of course, there is an institution that’s long offered alternate certificate programs in timely, technical topics demanded by local/industry employers, and that’s the community college system.
Like much of public higher education, community colleges have seen their funding shrink in recent decades and have been tasked to do more with less. For community colleges, it’s a lot more with a lot less. Open enrollment, for example, means that these schools educate students who require more remediation. Yet despite many community college students being “high need,” community colleges spend far less per pupil than do four-year institutions. Deep budget cuts have also meant that even with their open enrollment policies, community colleges are having to restrict admissions. In 2012, some 470,000 students in California were on waiting lists, unable to get into the courses they need.
This is what we know from history: as the funding for public higher ed decreased – for two- and four-year schools alike – for-profit higher ed expanded, promising precisely what today’s MOOCs and coding bootcamps now insist they’re the first and the only schools to do: to offer innovative programs, training students in the kinds of skills that will lead to good jobs. History tells us otherwise…
_____
Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this essay first appeared, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.
When I first started to think about what I wanted to say here today, I thought I’d talk about innovation and how confused if not backwards the ed-tech industry’s obsession with that term is. I thought I’d tie in Jon Udell’s notion of “trailing edge innovations,” this idea that some of the most creative and interesting things don’t happen on the bleeding edge; they’re at a different perpendicular, if you will. Scratch – and before Scratch, LOGO – work there, tinkering from that angle.
So I started to think about movements from margin to center, about cultural, social, political, pedagogical change and why, from my vantage point at least, ed-tech is stuck – stuck chasing the wrong sorts of change.
We’ve been stuck there a while.
This is me and my brother, circa Christmas 1984. (I know it’s Christmas because that’s when we got the computer, and in this photo it hasn’t yet been moved to the basement.) We found this photo when we were cleaning out our dad’s house this summer. Yes, that’s us and the LOGO turtle. My thoughts about this photo are pretty complicated: going through family photo albums, you can see – sometimes quite starkly – when things change or when things get stuck. This photo was from “the good times”; later images, not so much. And this photo reminds me too of a missing piece: somehow my interest in computers then never really went anywhere. I didn’t have programming opportunities at school, and other than what I could tinker with on my own, I didn’t get much farther than BASIC.
Stuck.
So I want to talk to you today about how we – ed-tech – get unstuck.
Someone asked me the other day why I’d been invited to speak at a conference on Scratch. “What are you going to say?!” they asked, (I think) a little apprehensively. Their fear, I have to imagine, was that I was going to come here and unload a keynote equivalent of 1984’s “Two Minutes Hate” on an unsuspecting European audience, that I would shake my fist angrily and loudly condemn the Scratch Cat or something. Or something.
I get this a lot: demands that I answer the question “why do you hate education technology so much, Audrey?” – to which I usually refrain from responding with the question “why do you hate reading comprehension so much, Internet stranger?”
I’d contend that this nervous, sometimes hostile reaction to my work highlights a trap that education technology finds itself in – a ridiculous belief that there can be only two possible responses to computers in education (or to computers in general): worship or hatred, adulation or acquiescence. “You’re either with us or against us”; you’re either for computers or against computers. You have to choose: technological progress or Luddism.
It’s a false choice, of course, and it mostly misses the point of what I try to do in my work as an education technology writer. Often what I’m trying to analyze is not so much about the actual technology at all: it’s about the ideology in which the technology is embedded, encased and from which it emerges; and it’s about what shape technologies seem to think teaching and learning, and the institutions that influence if not control those, should take.
To fixate solely on the technology is a symptom of what Seymour Papert has called “technocentric thinking,” something that he posited as quite different from what technology criticism should do. Technocentrism is something that technologists fall prey to, Papert contended; but it’s something that, just as likely, humanists are guilty of (admittedly, that’s another unhelpful divide, no doubt: technologists versus humanists).
“Combating technocentrism involves more than thinking about technology,” Papert wrote. And surely this is what education technology desperately needs right now. Why, for example, is there all the current excitement about ed-tech? Surely we can do better than an answer that accepts “because computers really matter now.” Why are venture capitalists investing in ed-tech at record levels? Why are schools now buying new hardware and software? Try again if your answer is “because the tech is so good.” A technocentric response points our attention to the technology itself – new tools, data, devices, apps, broadband, the cloud – as though these are context-free. Computer criticism, as outlined by Papert, demands we look more closely instead at policies, profits, politics, practices, power. Because it’s not “technological progress” that demands schools use computers. Indeed, rarely are computers used there for progressive means or ends at all.
Challenging technocentrism “leads to fundamental re-examination of assumptions about the area of application of technology with which one is concerned,” Papert wrote. “If we are interested in eliminating technocentrism from thinking about computers in education, we may find ourselves having to re-examine assumptions about education that were made long before the advent of computers.”
These passages come from a 1987 essay, “Computer Criticism vs. Technocentric Thinking,” in which Papert posited that education technology – or rather, the LOGO community specifically – needed to better develop its voice so that it could weigh in on the public dialogue about the burgeoning adoption of computers in schools. But what should that voice sound like? It had to offer more than a simple “pro-computers in the classroom” stance. And some three decades later, I think this is even more crucial. Uncritical techno-fascination and ed-tech fetishization – honestly, what purpose do those serve?
“There is no shortage of models” in trying to come up with a robust framework for computer criticism, Papert wrote back then. “The education establishment offers the notion of evaluation. Educational psychologists offer the notion of controlled experiment. The computer magazines have developed the idiom of product review. Philosophical tradition suggests inquiry into the essential nature of computation.” We can still see (mostly) these models applied to ed-tech today: “does it raise standardized test scores?” is one common way to analyze a product or service. “What new features does it boast?” is another. These approaches are insufficient, Papert argued, when it comes to thinking about ed-tech’s influence on learning, because they do little to help us think broadly about – to rethink – our education system.
Papert suggested we turn to literary and social criticism as a model for computer criticism. Indeed, the computer is a medium of human expression, its development and its use a reflection of human culture; the computer is also a tool with a particular history, and although not circumscribed by its past, the computer is not entirely free of it either. I think we recognize history, legacy, systems in literary and social criticism; funny, folks get pretty irate when I point those out about ed-tech. “The name [computer criticism] does not imply that such writing would condemn computers any more than literary criticism condemns literature or social criticism condemns society,” Papert wrote. “The purpose of computer criticism is not to condemn but to understand, to explicate, to place in perspective. Of course, understanding does not exclude harsh (perhaps even captious) judgment. The result of understanding may well be to debunk. But critical judgment may also open our eyes to previously unnoticed virtue. And in the end, the critical and the creative processes need each other.”
I am, admittedly, quite partial to this framing of “computer criticism,” since it dovetails neatly with my own academic background. I’m not an engineer or an entrepreneur or (any longer) a classroom educator. I see myself as a cultural critic, formally trained in the study of literature, language, folklore. I’m interested in our stories and in our practices and in our cultures.
One of the flaws Papert identifies in “technocentrism” is that it gives centrality to the technology itself, reducing people and culture to a secondary level. Instead “computer criticism” should look at context, at systems, at politics, at power.
I would add to Papert’s ideas about “computer criticism,” those of other theorists. Consider Kant: criticism is self-knowledge, reflection, a counter to dogma, to those ideas that powerful systems demand we believe in. Ed-tech, once at the margins, is surely now dogma. Consider Hegel; consider Marx: criticism as antagonism, as dialectic, as intervention – stake a claim; stake a position; identify ideology. Consider Freire and criticism as pedagogy and pedagogy as criticism: change the system of schooling, and change the world.
It’s an odd response to my work, but a common one too, that criticism does not enable or effect change. (I suppose it does not fall into the business school model of “disruptive innovation.”) Or rather, that criticism stands as an indulgent, intellectual, purely academic pursuit – as though criticism involves theory but not action. Or if there is action, criticism implies “tearing down”; it has this negative connotation. Ed-tech entrepreneurs, to the contrary, actually “build things.”
Here’s another distinction I’ve heard: that criticism (in the form of writing an essay) is “just words” but writing software is “actually doing something.” Again, such a contrast reveals much about the role of intellectual activity that some see in “coding.”
That is a knotty problem, I think, for a group like this one to wrestle with (and why we need ed-tech criticism!). If we believe in “coding to learn” then what does it mean if we see “code” as distinct from or as absent of criticism? And here I don’t simply mean that a criticism-free code is stripped of knowledge, context, and politics; I mean that that framework in some ways conceptualizes code as the opposite of thinking deeply or thinking critically – that is, coding as (only) programmatic, mechanical, inflexible, rules-based. What are the implications of that in schools?
Technocentrism won’t help with thinking through that question. Technocentrism would be happier talking about “learning to code,” with the emphasis on “code” – “code” largely a signifier for technological know-how, an inherent and unexamined good.
As I was rereading Papert’s 1987 essay in preparation for this talk, I was struck – as I often am by his work – by how stuck ed-tech is. I mean, here he is, some 30 years ago, calling for the LOGO community to develop a better critique, frankly an activist critique about thinking and learning. “Do Not Ask What LOGO Can Do To People, But What People Can Do With LOGO.” Papert’s argument is not “why everyone should learn to code.”
Papert offers an activist critique. Criticism is activism. Criticism is a necessary tactic for this community, the Scratch community specifically and for the ed-tech community in general. It was necessary in 1987. It’s still necessary today – we might consider why we’re still at the point of having to make a case for ed-tech criticism too. It’s particularly necessary as we see funding flood into ed-tech, as we see policies about testing dictate the rationale for adopting devices, as we see the technology industry shape a conversation about “code” – a conversation that focuses on money and prestige but not on thinking, learning. Computer criticism can – and must – be about analysis and action. Critical thinking must work alongside critical pedagogical and technological practices. “Coding to learn” if you want to start there; or more simply, “learn by making.” But then too: making to reflect; making to think critically; making to engage with the world; it is from there, and only there, that we can get to making and coding to change the world.
Without ed-tech criticism, we’ll still be stuck – stuck without these critical practices, stuck without critical making or coding or design in school, stuck without critical (digital) pedagogy. And likely we’ll be stuck with a technocentrism that masks rather than uncovers let alone challenges power.
_____
Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this essay first appeared, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.
Late last year, I gave a similarly titled talk—“Men Explain Technology to Me”—at the University of Mary Washington. (I should note here that the slides for that talk were based on a couple of blog posts by Mallory Ortberg that I found particularly funny, “Women Listening to Men in Art History” and “Western Art History: 500 Years of Women Ignoring Men.” I wanted to do something similar with my slides today: find historical photos of men explaining computers to women. Mostly I found pictures of men or women working separately, working in isolation. Mostly pictures of men and computers.)
So that University of Mary Washington talk: It was the last talk I delivered in 2014, and I did so with a sigh of relief, but also more than a twinge of frightened nausea—nausea that wasn’t nerves from speaking in public. I’d had more than a year full of public speaking under my belt—exhausting enough as I always try to write new talks for each event, but a year that had become complicated quite frighteningly in part by an ongoing campaign of harassment against women on the Internet, particularly those who worked in video game development.
Known as “GamerGate,” this campaign had reached a crescendo of sorts in the lead-up to my talk at UMW, some of its hate aimed at me because I’d written about the subject, demanding that those in ed-tech pay attention and speak out. So no surprise, all this colored how I shaped that talk about gender and education technology, because, of course, my gender shapes how I experience working in and working with education technology. As I discussed then at the University of Mary Washington, I have been on the receiving end of threats and harassment for stories I’ve written about ed-tech—almost all the women I know who have a significant online profile have in some form or another experienced something similar. According to a Pew Research survey last year, one in 5 Internet users reports being harassed online. But GamerGate felt—feels—particularly unhinged. The death threats to Anita Sarkeesian, Zoe Quinn, Brianna Wu, and others were—are—particularly real.
I don’t really want to rehash all of that here today, particularly my experiences being on the receiving end of the harassment; I really don’t. You can read a copy of that talk from last November on my website. I will say this: GamerGate supporters continue to argue that their efforts are really about “ethics in journalism” not about misogyny, but it’s quite apparent that they have sought to terrorize feminists and chase women game developers out of the industry. Insisting that video games and video game culture retain a certain puerile machismo, GamerGate supporters often chastise those who seek to change the content of video games, change the culture to reflect the actual demographics of video game players. After all, a recent industry survey found women 18 and older represent a significantly greater portion of the game-playing population (36%) than boys age 18 or younger (17%). Just over half of all game players are men (52%); that means just under half are women. Yet those who want video games to reflect these demographics are dismissed by GamerGate as “social justice warriors.” Dismissed. Harassed. Shouted down. Chased out.
And yes, more mildly perhaps, the verb that grew out of Rebecca Solnit’s wonderful essay “Men Explain Things to Me” and the inspiration for the title to this talk, mansplained.
Solnit first wrote that essay back in 2008 to describe her experiences as an author—and as such, an expert on certain subjects—whereby men would presume she was in need of their enlightenment and information—in her words “in some sort of obscene impregnation metaphor, an empty vessel to be filled with their wisdom and knowledge.” She related several incidents in which men explained to her topics on which she’d published books. She knew things, but the presumption was that she was uninformed. Since her essay was first published the term “mansplaining” has become quite ubiquitous, used to describe the particular online version of this—of men explaining things to women.
I experience this a lot. And while the threats and harassment in my case are rare but debilitating, the mansplaining is more insidious. It is overpowering in a different way. “Mansplaining” is a micro-aggression, a practice of undermining women’s intelligence, their contributions, their voice, their experiences, their knowledge, their expertise; and frankly once these pile up, these mansplaining micro-aggressions, they undermine women’s feelings of self-worth. Women begin to doubt what they know, doubt what they’ve experienced. And then, in turn, women decide not to say anything, not to speak.
I speak from experience. On Twitter, I have almost 28,000 followers, most of whom follow me, I’d wager, because from time to time I say smart things about education technology. Yet regularly, men—strangers, typically, but not always—jump into my “@-mentions” to explain education technology to me. To explain open source licenses or open data or open education or MOOCs to me. Men explain learning management systems to me. Men explain the history of education technology to me. Men explain privacy and education data to me. Men explain venture capital funding of education startups to me. Men explain the business of education technology to me. Men explain blogging and journalism and writing to me. Men explain online harassment to me.
The problem isn’t just that men explain technology to me. It isn’t just that a handful of men explain technology to the rest of us. It’s that this explanation tends to foreclose questions we might have about the shape of things. We can’t ask because if we show the slightest intellectual vulnerability, our questions—we ourselves—lose a sort of validity.
Yet we are living in a moment, I would contend, when we must ask better questions of technology. We neglect to do so at our own peril.
Last year when I gave my talk on gender and education technology, I was particularly frustrated by the mansplaining to be sure, but I was also frustrated that those of us who work in the field had remained silent about GamerGate, and more broadly about all sorts of issues relating to equity and social justice. Of course, I do know firsthand that it can be difficult if not dangerous to speak out, to talk critically and write critically about GamerGate, for example. But refusing to look at some of the most egregious acts often means ignoring some of the more subtle ways in which marginalized voices are made to feel uncomfortable, unwelcome online. Because GamerGate is really just one manifestation of deeper issues—structural issues—with society, culture, technology. It’s wrong to focus on just a few individual bad actors or on a terrible Twitter hashtag and ignore the systemic problems. We must consider who else is being chased out and silenced, not simply from the video game industry but from the technology industry and a technological world writ large.
I know I have to come right out and say it, because very few people in education technology will: there is a problem with computers. Culturally. Ideologically. There’s a problem with the internet. Largely designed by men from the developed world, it is built for men of the developed world. Men of science. Men of industry. Military men. Venture capitalists. Despite all the hype and hope about revolution and access and opportunity that these new technologies will provide us, they do not negate hierarchy, history, privilege, power. They reflect those. They channel it. They concentrate it, in new ways and in old.
I want us to consider these bodies, their ideologies and how all of this shapes not only how we experience technology but how it gets designed and developed as well.
There’s that very famous New Yorker cartoon: “On the internet, nobody knows you’re a dog.” The cartoon was first published in 1993, and it demonstrates this sense that we have long had that the Internet offers privacy and anonymity, that we can experiment with identities online in ways that are severed from our bodies, from our material selves and that, potentially at least, the internet can allow online participation for those denied it offline.
Perhaps, yes.
But sometimes when folks on the internet discover “you’re a dog,” they do everything in their power to put you back in your place, to remind you of your body. To punish you for being there. To hurt you. To threaten you. To destroy you. Online and offline.
Neither the internet nor computer technology writ large is a place where we can escape the materiality of our physical worlds—bodies, institutions, systems—as much as that New Yorker cartoon joked that we might. In fact, I want to argue quite the opposite: that computer and Internet technologies actually re-inscribe our material bodies, the power and the ideology of gender and race and sexual identity and national identity. They purport to be ideology-free and identity-less, but they are not. If identity is unmarked it’s because there’s a presumption of maleness, whiteness, and perhaps even a certain California-ness. As my friend Tressie McMillan Cottom writes, in ed-tech we’re all supposed to be “roaming autodidacts”: happy with school, happy with learning, happy and capable and motivated and well-networked, with functioning computers and WiFi that works.
By and large, all of this reflects who is driving the conversation about, if not the development of, these technologies. Who is seen as building technologies. Who some think should build them; who some think have always built them.
And that right there is already a process of erasure, a different sort of mansplaining one might say.
Last year, when Walter Isaacson was doing the publicity circuit for his latest book, The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution (2014), he’d often relate how his teenage daughter had written an essay about Ada Lovelace, a figure whom Isaacson admitted he’d never heard of before. Sure, he’d written biographies of Steve Jobs and Albert Einstein and Benjamin Franklin and other important male figures in science and technology, but the name and the contributions of this woman were entirely unknown to him. Ada Lovelace, daughter of Lord Byron and the woman whose notes on Charles Babbage’s proto-computer the Analytical Engine are now recognized as making her the world’s first computer programmer. Ada Lovelace, the author of the world’s first computer algorithm. Ada Lovelace, the person at the very beginning of the field of computer science.
Augusta Ada King, Countess of Lovelace, now popularly known as Ada Lovelace, in a painting by Alfred Edward Chalon (image source: Wikipedia)
“Ada Lovelace defined the digital age,” Isaacson said in an interview with The New York Times. “Yet she, along with all these other women, was ignored or forgotten.” (Actually, the world has been celebrating Ada Lovelace Day since 2009.)
Isaacson’s book describes Lovelace like this: “Ada was never the great mathematician that her canonizers claim…” and “Ada believed she possessed special, even supernatural abilities, what she called ‘an intuitive perception of hidden things.’ Her exalted view of her talents led her to pursue aspirations that were unusual for an aristocratic woman and mother in the early Victorian age.” The implication: she was a bit of an interloper.
A few other women populate Isaacson’s The Innovators: Grace Hopper, who wrote the first computer compiler and helped develop the programming language COBOL. Isaacson describes her as “spunky,” not an adjective that I imagine would be applied to a male engineer. He also talks about the six women who helped program the ENIAC computer, the first electronic general-purpose computer. Their names, because we need to say these things out loud more often: Jean Jennings, Marilyn Wescoff, Ruth Lichterman, Betty Snyder, Frances Bilas, Kay McNulty. (I say that having visited Bletchley Park, where civilian women’s involvement has been erased, as they were forbidden, thanks to classified government secrets, from talking about their involvement in the cryptography and computing efforts there.)
In the end, it’s hard to read Isaacson’s book without coming away thinking that, other than a few notable exceptions, the history of computing is the history of men, white men. The book mentions educator Seymour Papert in passing, for example, but assigns the development of Logo, a programming language for children, to him alone. No mention of the others involved: Daniel Bobrow, Wally Feurzeig, and Cynthia Solomon.
Even a book that purports to reintroduce the contributions of those forgotten “innovators,” that says it wants to complicate the story of a few male inventors of technology by looking at collaborators and groups, still in the end tells a story that ignores if not undermines women. Men explain the history of computing, if you will. As such it tells a story too that depicts and reflects a culture that doesn’t simply forget but systematically alienates women. Women are a rediscovery project, always having to be reintroduced, found, rescued. There’s been very little reflection upon that fact—in Isaacson’s book or in the tech industry writ large.
This matters not just for the history of technology but for technology today. And it matters for ed-tech as well. (Unless otherwise noted, the following data comes from diversity self-reports issued by the companies in 2014.)
Currently, fewer than 20% of computer science degrees in the US are awarded to women. (I don’t know if it’s different in the UK.) It’s a number that’s actually fallen over the past few decades from a high in 1983 of 37%. Computer science is the only field in science, engineering, and mathematics in which the number of women receiving bachelor’s degrees has fallen in recent years. And when it comes to the employment not just the education of women in the tech sector, the statistics are not much better. (source: NPR)
70% of Google employees are male. 61% are white and 30% Asian. Of Google’s “technical” employees, 83% are male. 60% of those are white and 34% are Asian.
70% of Apple employees are male. 55% are white and 15% are Asian. 80% of Apple’s “technical” employees are male.
69% of Facebook employees are male. 57% are white and 34% are Asian. 85% of Facebook’s “technical” employees are male.
70% of Twitter employees are male. 59% are white and 29% are Asian. 90% of Twitter’s “technical” employees are male.
Only 2.7% of startups that received venture capital funding between 2011 and 2013 had women CEOs, according to one survey.
And of course, Silicon Valley was recently embroiled in the middle of a sexual discrimination trial involving the storied VC firm Kleiner Perkins Caufield & Byers, filed by former executive Ellen Pao, who claimed that men at the firm were paid more and promoted more easily than women. Welcome neither as investors nor entrepreneurs nor engineers, it’s hardly a surprise that, as The Los Angeles Times recently reported, women are leaving the tech industry “in droves.”
This doesn’t just matter because computer science leads to “good jobs” or that tech startups lead to “good money.” It matters because the tech sector has an increasingly powerful reach in how we live and work and communicate and learn. It matters ideologically. If the tech sector drives out women, if it excludes people of color, that matters for jobs, sure. But it matters in terms of the projects undertaken, the problems tackled, the “solutions” designed and developed.
So it’s probably worth asking what the demographics look like for education technology companies. What percentage of those building ed-tech software are men, for example? What percentage are white? What percentage of ed-tech startup engineers are men? Across the field, what percentage of education technologists—instructional designers, campus IT, sysadmins, CTOs, CIOs—are men? What percentage of “education technology leaders” are men? What percentage of education technology consultants? What percentage of those on the education technology speaking circuit? What percentage of those developing not just implementing these tools?
And how do these bodies shape what gets built? How do they shape how the “problem” of education gets “fixed”? How do privileges, ideologies, expectations, values get hard-coded into ed-tech? I’d argue that they do in ways that are both subtle and overt.
That word “privilege,” for example, has an interesting dual meaning. We use it to refer to the advantages that are afforded to some people and not to others: male privilege, white privilege. But when it comes to tech, we make that advantage explicit. We actually embed that status into the software’s processes. “Privileges” in tech refer to whoever has the ability to use or control certain features of a piece of software. Administrator privileges. Teacher privileges. (Students rarely have privileges in ed-tech. Food for thought.)
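To make that dual meaning concrete, here is a minimal sketch of how such privileges typically get written down in software. The role names and capabilities are entirely hypothetical, invented for illustration rather than drawn from any particular product, but the sketch shows how an assumption like “students don’t get privileges” becomes a literal line of configuration:

```python
# A hypothetical role-to-privilege mapping for an imagined ed-tech application.
# Whoever writes this dictionary decides, in advance, who gets to act
# and who only gets acted upon.
ROLE_PRIVILEGES = {
    "administrator": {"create_course", "delete_course", "view_analytics",
                      "export_student_data", "moderate_forum", "post"},
    "teacher":       {"create_course", "view_analytics", "moderate_forum", "post"},
    "student":       {"post"},   # students rarely get more than this
}

def can(role, action):
    """Return True if the given role has been granted the given action."""
    return action in ROLE_PRIVILEGES.get(role, set())

print(can("teacher", "view_analytics"))   # True
print(can("student", "view_analytics"))   # False: the hierarchy is hard-coded
```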
Or take how discussion forums operate. Discussion forums, now quite common in ed-tech tools—in learning management systems (VLEs as you call them), in MOOCs, for example—often trace their history back to the earliest Internet bulletin boards. But even before then, education technologies like PLATO, a programmed instruction system developed at the University of Illinois beginning in 1960, offered chat and messaging functionality by the early 1970s. (How education technology’s contributions to tech are erased from tech history is, alas, a different talk.)
One of the new features that many discussion forums boast: the ability to vote up or vote down certain topics. Ostensibly this means that “the best” ideas surface to the top—the best ideas, the best questions, the best answers. What it means in practice often is something else entirely. In part this is because the voting power on these sites is concentrated in the hands of the few, the most active, the most engaged. And no surprise, “the few” here is overwhelmingly male. Reddit, which calls itself “the front page of the Internet” and is the model for this sort of voting process, is roughly 84% male. I’m not sure that MOOCs, which have adopted Reddit’s model of voting on comments, can boast a much better ratio of male to female participation.
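A toy simulation, with entirely made-up numbers meant only to illustrate the dynamic rather than to model Reddit or any MOOC, shows how quickly a vote-ranked “front page” comes to mirror its most active voters rather than its audience:

```python
# A toy illustration, with hypothetical numbers, of how a vote-ranked forum's
# "best" content reflects its most active voters rather than its audience.
TOPICS = ["topic A", "topic B"]

# 90 casual users each cast one vote for topic A;
# 10 highly active users each cast twenty votes for topic B.
voters = [{"favorite": "topic A", "votes_cast": 1}] * 90 + \
         [{"favorite": "topic B", "votes_cast": 20}] * 10

scores = {topic: 0 for topic in TOPICS}
for voter in voters:
    scores[voter["favorite"]] += voter["votes_cast"]

# Sorting by score decides the "front page": the active minority wins.
front_page = sorted(TOPICS, key=scores.get, reverse=True)
print(scores)       # {'topic A': 90, 'topic B': 200}
print(front_page)   # ['topic B', 'topic A']
```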
What happens when the most important topics—based on up-voting—are decided by a small group? As D. A. Banks has written about this issue,
Sites like Reddit will remain structurally incapable of producing non-hegemonic content because the “crowd” is still subject to structural oppression. You might choose to stay within the safe confines of your familiar subreddit, but the site as a whole will never feel like yours. The site promotes mundanity and repetition over experimentation and diversity by presenting the user with a too-accurate picture of what appeals to the entrenched user base. As long as the “wisdom of the crowds” is treated as colorblind and gender neutral, the white guy is always going to be the loudest.
How much does education technology treat its users similarly? Whose questions surface to the top of discussion forums in the LMS (the VLE), in the MOOC? Who is the loudest? Who is explaining things in MOOC forums?
Ironically—bitterly ironically, I’d say—many pieces of software today increasingly promise “personalization,” but in reality, they present us with a very restricted, restrictive set of choices of who we “can be” and how we can interact, both with our own data and content and with other people. Gender, for example, is often a drop-down menu where one can choose either “male” or “female.” Software might ask for a first and last name, something that is complicated if you have multiple family names (as some Spanish-speaking people do) or your family name is your first name (as names in China are ordered). Your name is presented however the software engineers and designers deemed fit: sometimes first name, sometimes title and last name, typically with a profile picture. Changing your username—after marriage or divorce, for example—is often incredibly challenging, if not impossible.
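Those assumptions are not abstract; they are literally written into form definitions and validation code. Here is a hypothetical sketch, with field names and choices invented for illustration rather than taken from any real product, of how they get hard-coded:

```python
# A hypothetical signup form of the kind described above. Field names and
# choices are invented for illustration; the point is how much of a person
# the template simply cannot represent.
SIGNUP_FORM = {
    "first_name": {"required": True},               # assumes a first/last name order
    "last_name":  {"required": True},
    "gender":     {"required": True,
                   "choices": ["male", "female"]},   # a drop-down with two options
}

def validate(submission):
    """Return the ways a person fails to fit the template."""
    errors = []
    for field, rules in SIGNUP_FORM.items():
        value = submission.get(field)
        if rules.get("required") and not value:
            errors.append(f"{field} is required")
        elif "choices" in rules and value not in rules["choices"]:
            errors.append(f"{field} must be one of {rules['choices']}")
    return errors

# Someone whose name or gender doesn't fit the template is, literally, an error.
print(validate({"first_name": "Chimamanda", "gender": "non-binary"}))
# ['last_name is required', "gender must be one of ['male', 'female']"]
```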
You get to interact with others, similarly, based on the processes that the engineers have determined and designed. On Twitter, for example, you cannot direct message people who do not follow you. All interactions must be 140 characters or less.
This restriction of the presentation and performance of one’s identity online is what “cyborg anthropologist” Amber Case calls the “templated self.” She defines this as “a self or identity that is produced through various participation architectures, the act of producing a virtual or digital representation of self by filling out a user interface with personal information.”
Case provides some examples of templated selves:
Facebook and Twitter are examples of the templated self. The shape of a space affects how one can move, what one does and how one interacts with someone else. It also defines how influential and what constraints there are to that identity. A more flexible, but still templated space is WordPress. A hand-built site is much less templated, as one is free to fully create their digital self in any way possible. Those in Second Life play with and modify templated selves into increasingly unique online identities. MySpace pages are templates, but the lack of constraints can lead to spaces that are considered irritating to others.
As we—all of us, but particularly teachers and students—move to spend more and more time and effort performing our identities online, being forced to use preordained templates constrains us, rather than—as we have often been told about the Internet—lets us be anyone or say anything online. On the Internet no one knows you’re a dog unless the signup process demanded you give proof of your breed. This seems particularly important to keep in mind when we think about students’ identity development. How are their identities being templated?
While Case’s examples point to mostly “social” technologies, education technologies are also “participation architectures.” Similarly they produce and restrict a digital representation of the learner’s self.
Who is building the template? Who is engineering the template? Who is there to demand the template be cracked open? What will the template look like if we’ve chased women and people of color out of programming?
It’s far too simplistic to say “everyone learn to code” is the best response to the questions I’ve raised here. “Change the ratio.” “Fix the leaky pipeline.” Nonetheless, I’m speaking to a group of educators here. I’m probably supposed to say something about what we can do, right, to make ed-tech more just, not just condemn the narratives that lead us down a path that makes ed-tech less so. What can we do to resist all this hard-coding? What can we do to subvert that hard-coding? What can we do to make technologies that our students—all our students, all of us—can wield? What can we do to make sure that when we say “your assignment involves the Internet” we haven’t triggered half the class with fears of abuse, harassment, exposure, rape, death? What can we do to make sure that when we ask our students to discuss things online, the very infrastructure of the technology we use doesn’t privilege certain voices in certain ways?
The answer can’t simply be to tell women to not use their real name online, although as someone who started her career blogging under a pseudonym, I do sometimes miss those days. But if part of the argument for participating in the open Web is that students and educators are building a digital portfolio, are building a professional network, are contributing to scholarship, then we have to really think about whether or not promoting pseudonyms is a sufficient or an equitable solution.
The answer can’t simply be “don’t blog on the open Web.” Or “keep everything inside the ‘safety’ of the walled garden, the learning management system.” If nothing else, this presumes that what happens inside siloed, online spaces is necessarily “safe.” I know I’ve seen plenty of horrible behavior on closed forums, for example, from professors and students alike. I’ve seen heavy-handed moderation, where marginalized voices find their contributions deleted. I’ve seen zero-moderation, where marginalized voices are mobbed. We recently learned, for example, that Walter Lewin, emeritus professor at MIT and one of the original rockstar professors of YouTube—millions have watched the demonstrations from his physics lectures—has been accused of sexually harassing women in his edX MOOC.
The answer can’t simply be “just don’t read the comments.” I would say that it might be worth rethinking “comments” on student blogs altogether—or rather the expectation that they host them, moderate them, respond to them. See, if we give students the opportunity to “own their own domain,” to have their own websites, their own space on the Web, we really shouldn’t require them to let anyone who can create a user account into that space. It’s perfectly acceptable to say to someone who wants to comment on a blog post, “Respond on your own site. Link to me. But I am under no obligation to host your thoughts in my domain.”
And see, that starts to hint at what I think the answer is to this question about the unpleasantness—by design—of technology. It starts to get at what any sort of “solution” or “alternative” has to look like: it has to be both social and technical. It also needs to recognize there’s a history that might help us understand what’s done now and why. If, as I’ve argued, the current shape of education technologies has been shaped by certain ideologies and certain bodies, we should recognize that we aren’t stuck with those. We don’t have to “do” tech as it’s been done in the last few years or decades. We can design differently. We can design around. We can use differently. We can use around.
One interesting example of this dual approach that combines both social and technical—outside the realm of ed-tech, I recognize—is the set of tools that Twitter users have built in order to address harassment on the platform. Having grown weary of Twitter’s refusal to address the ways in which it is utilized to harass people (remember, its engineering team is 90% male), a group of feminist developers wrote The Block Bot, an application that lets you block, en masse, a large list of Twitter accounts who are known for being serial harassers. That list of blocked accounts is updated and maintained collaboratively. Similarly, Block Together lets users subscribe to others’ block lists. And Good Game Autoblocker is a tool that blocks the “ringleaders” of GamerGate.
That gets, just a bit, at what I think we can do in order to make education technology habitable, sustainable, and healthy. We have to rethink the technology. And not simply as some nostalgia for a “Web we lost,” for example, but as a move forward to a Web we’ve yet to ever see. It isn’t simply, as Isaacson would posit it, rediscovering innovators who have been erased; it’s about rethinking how these erasures happen all throughout technology’s history and continue today—not just in storytelling, but in code.
Educators should want ed-tech that is inclusive and equitable. Perhaps education needs reminding of this: we don’t have to adopt tools that serve business goals or administrative purposes, particularly when they are to the detriment of scholarship and/or student agency—technologies that surveil and control and restrict, for example, under the guise of a “safety” that gets trotted out from time to time but that has never ever been about students’ needs at all. We don’t have to accept that technology needs to extract value from us. We don’t have to accept that technology puts us at risk. We don’t have to accept that the architecture, the infrastructure of these tools makes it easy for harassment to occur without any consequences. We can build different and better technologies. And we can build them with and for communities, communities of scholars and communities of learners. We don’t have to be paternalistic as we do so. We don’t have to “protect students from the Internet,” and rehash all the arguments about stranger danger and predators and pedophiles. But we should recognize that if we want education to be online, if we want education to be immersed in technologies, information, and networks, then we can’t really throw students out there alone. We need to be braver and more compassionate and we need to build that into ed-tech. Like The Block Bot or Block Together, this should be a collaborative effort, one that blends our cultural values with the technology we build.
Because here’s the thing. The answer to all of this—to harassment online, to the male domination of the technology industry, the Silicon Valley domination of ed-tech—is not silence. And the answer is not to let our concerns be explained away. That is, after all, as Rebecca Solnit reminds us, one of the goals of mansplaining: to get us to cower, to hesitate, to doubt ourselves and our stories and our needs, to step back, to shut up. Now more than ever, I think we need to be louder and clearer about what we want education technology to do—for us and with us, not simply to us.
_____
Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this review first appeared, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.
“For a number of years the writer has had it in mind that a simple machine for automatic testing of intelligence or information was entirely within the realm of possibility. The modern objective test, with its definite systemization of procedure and objectivity of scoring, naturally suggests such a development. Further, even with the modern objective test the burden of scoring (with the present very extensive use of such tests) is nevertheless great enough to make insistent the need for labor-saving devices in such work” – Sidney Pressey, “A Simple Apparatus Which Gives Tests and Scores – And Teaches,” School and Society, 1926
Ohio State University professor Sidney Pressey first displayed the prototype of his “automatic intelligence testing machine” at the 1924 American Psychological Association meeting. Two years later, he submitted a patent for the device and spent the next decade or so trying to market it (to manufacturers and investors, as well as to schools).
It wasn’t Pressey’s first commercial move. In 1922 he and his wife Luella Cole published Introduction to the Use of Standard Tests, a “practical” and “non-technical” guide meant “as an introductory handbook in the use of tests” aimed to meet the needs of “the busy teacher, principal or superintendent.” By the mid–1920s, the two had over a dozen different proprietary standardized tests on the market, selling a couple of hundred thousand copies a year, along with some two million test blanks.
Although standardized tests had become commonplace in the classroom by the 1920s, they were already placing a significant burden upon those teachers and clerks tasked with scoring them. Hoping to capitalize yet again on the test-taking industry, Pressey argued that automation could “free the teacher from much of the present-day drudgery of paper-grading drill, and information-fixing – should free her for real teaching of the inspirational.”
The Automatic Teacher
Here’s how Pressey described the machine, which he branded as the Automatic Teacher in his 1926 School and Society article:
The apparatus is about the size of an ordinary portable typewriter – though much simpler. …The person who is using the machine finds presented to him in a little window a typewritten or mimeographed question of the ordinary selective-answer type – for instance:
To help the poor debtors of England, James Oglethorpe founded the colony of (1) Connecticut, (2) Delaware, (3) Maryland, (4) Georgia.
To one side of the apparatus are four keys. Suppose now that the person taking the test considers Answer 4 to be the correct answer. He then presses Key 4 and so indicates his reply to the question. The pressing of the key operates to turn up a new question, to which the subject responds in the same fashion. The apparatus counts the number of his correct responses on a little counter to the back of the machine…. All the person taking the test has to do, then, is to read each question as it appears and press a key to indicate his answer. And the labor of the person giving and scoring the test is confined simply to slipping the test sheet into the device at the beginning (this is done exactly as one slips a sheet of paper into a typewriter), and noting on the counter the total score, after the subject has finished.
The above paragraph describes the operation of the apparatus if it is being used simply to test. If it is to be used also to teach then a little lever to the back is raised. This automatically shifts the mechanism so that a new question is not rolled up until the correct answer to the question to which the subject is responding is found. However, the counter counts all tries.
It should be emphasized that, for most purposes, this second set is by all odds the most valuable and interesting. With this second set the device is exceptionally valuable for testing, since it is possible for the subject to make more than one mistake on a question – a feature which is, so far as the writer knows, entirely unique and which appears decidedly to increase the significance of the score. However, in the way in which it functions at the same time as an ‘automatic teacher’ the device is still more unusual. It tells the subject at once when he makes a mistake (there is no waiting several days, until a corrected paper is returned, before he knows where he is right and where wrong). It keeps each question on which he makes an error before him until he finds the right answer; he must get the correct answer to each question before he can go on to the next. When he does give the right answer, the apparatus informs him immediately to that effect. If he runs the material through the little machine again, it measures for him his progress in mastery of the topics dealt with. In short the apparatus provides in very interesting ways for efficient learning.
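For readers who find the mechanism easier to see in code than in Pressey’s prose, here is a small sketch – my own reconstruction of the passage above, not anything Pressey specified – of the two modes he describes: a test mode that advances on any keypress, and a teach mode that holds the question in the window until the correct key is pressed, with the counter recording every try:

```python
# A reconstruction, from the passage above, of the Automatic Teacher's two modes.
# Nothing here comes from Pressey's patent; it simply restates his description.
QUESTIONS = [
    # (question prompt, number of the correct key)
    ("James Oglethorpe founded the colony of (1) Connecticut, (2) Delaware, "
     "(3) Maryland, (4) Georgia.", 4),
]

def run(key_presses, teach_mode=False):
    """Feed a sequence of key presses to the 'machine'; return (tries, correct).

    In test mode the drum turns up a new question on every press; in teach mode
    it turns only when the correct key is pressed. The counter counts all tries.
    """
    presses = iter(key_presses)
    counter = 0    # "the counter counts all tries"
    correct = 0
    for _prompt, answer in QUESTIONS:
        while True:
            key = next(presses)
            counter += 1
            if key == answer:
                correct += 1
                break              # right answer: roll up the next question
            if not teach_mode:
                break              # test mode: advance even after a wrong answer
            # teach mode: keep the same question in the window and try again
    return counter, correct

print(run([3, 4], teach_mode=True))    # (2, 1): two tries before the right key
print(run([3], teach_mode=False))      # (1, 0): test mode simply moves on
```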
A video from 1964 shows Pressey demonstrating his “teaching machine,” including the “reward dial” feature that could be set to dispense a candy once a certain number of correct answers were given.
UBC’s Stephen Petrina documents the commercial failure of the Automatic Teacher in his 2004 article “Sidney Pressey and the Automation of Education, 1924–1934.” According to Petrina, Pressey started looking for investors for his machine in December 1925, “first among publishers and manufacturers of typewriters, adding machines, and mimeograph machines, and later, in the spring of 1926, extending his search to scientific instrument makers.” He approached at least six Midwestern manufacturers in 1926, but no one was interested.
In 1929, Pressey finally signed a contract with the W. M. Welch Manufacturing Company, a Chicago-based company that produced scientific instruments.
Petrina writes that,
After so many disappointments, Pressey was impatient: he offered to forgo royalties on two hundred machines if Welch could keep the price per copy at five dollars, and he himself submitted an order for thirty machines to be used in a summer course he taught school administrators. A few months later he offered to put up twelve hundred dollars to cover tooling costs. Medard W. Welch, sales manager of Welch Manufacturing, however, advised a “slower, more conservative approach.” Fifteen dollars per machine was a more realistic price, he thought, and he offered to refund Pressey fifteen dollars per machine sold until Pressey recouped his twelve-hundred-dollar investment. Drawing on nearly fifty years experience selling to schools, Welch was reluctant to rush into any project that depended on classroom reforms. He preferred to send out circulars advertising the Automatic Teacher, solicit orders, and then proceed with production if a demand materialized.
The demand never really materialized, and even if it had, the manufacturing process – getting the device to market – was plagued with problems, caused in part by Pressey’s constant demands to redefine and retool the machines.
The stress from the development of the Automatic Teacher took an enormous toll on Pressey’s health, and he had a breakdown in late 1929. (He was still teaching, supervising courses, and advising graduate students at Ohio State University.)
The devices did finally ship in April 1930. But that original sales price was cost-prohibitive. $15 was, as Petrina notes, “more than half the annual cost ($29.27) of educating a student in the United States in 1930.” Welch could not sell the machines and ceased production with 69 of the original run of 250 devices still in stock.
Pressey admitted defeat. In a 1932 School and Society article, he wrote “The writer is regretfully dropping further work on these problems. But he hopes that enough has been done to stimulate other workers.”
But Pressey didn’t really abandon the teaching machine. He continued to present on his research at APA meetings. But he did write in a 1964 article “Teaching Machines (And Learning Theory) Crisis” that “Much seems very wrong about current attempts at auto-instruction.”
Indeed.
Automation and Individualization
In his article “Toward the Coming ‘Industrial Revolution’ in Education” (1932), Pressey wrote that
“Education is the one major activity in this country which is still in a crude handicraft stage. But the economic depression may here work beneficially, in that it may force the consideration of efficiency and the need for laborsaving devices in education. Education is a large-scale industry; it should use quantity production methods. This does not mean, in any unfortunate sense, the mechanization of education. It does mean freeing the teacher from the drudgeries of her work so that she may do more real teaching, giving the pupil more adequate guidance in his learning. There may well be an ‘industrial revolution’ in education. The ultimate results should be highly beneficial. Perhaps only by such means can universal education be made effective.”
Pressey intended for his automated teaching and testing machines to individualize education. It’s an argument that’s made about teaching machines today too. These devices will allow students to move at their own pace through the curriculum. They will free up teachers’ time to work more closely with individual students.
But as Petrina argues, “the effect of automation was control and standardization.”
The Automatic Teacher was a technology of normalization, but it was at the same time a product of liberality. The Automatic Teacher provided for self-instruction and self-regulated, therapeutic treatment. It was designed to provide the right kind and amount of treatment for individual, scholastic deficiencies; thus, it was individualizing. Pressey articulated this liberal rationale during the 1920s and 1930s, and again in the 1950s and 1960s. Although intended as an act of freedom, the self-instruction provided by an Automatic Teacher also habituated learners to the authoritative norms underwriting self-regulation and self-governance. They not only learned to think in and about school subjects (arithmetic, geography, history), but also how to discipline themselves within this imposed structure. They were regulated not only through the knowledge and power embedded in the school subjects but also through the self-governance of their moral conduct. Both knowledge and personality were normalized in the minutiae of individualization and in the machinations of mass education. Freedom from the confines of mass education proved to be a contradictory project and, if Pressey’s case is representative, one more easily automated than commercialized.
Those behind the massive influx of venture capital into today’s teaching machines, of course, would like to see otherwise…
_____
Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this review first appeared.
a review of Dana Goldstein, The Teacher Wars: A History of America’s Most Embattled Profession (Doubleday, 2014)
by Audrey Watters
~
Teaching is, according to the subtitle of education journalist Dana Goldstein’s new book, “America’s Most Embattled Profession.” “No other profession,” she argues, “operates under this level of political scrutiny, not even those, like policing or social work, that are also tasked with public welfare and are paid for with public funds.”
That political scrutiny is not new. Goldstein’s book The Teacher Wars chronicles the history of teaching at (what has become) the K–12 level, from the early nineteenth century and “common schools” — that is, before compulsory education and public school as we know it today — through the latest Obama Administration education policies. It’s an incredibly well-researched book that moves from the feminization of the teaching profession to the recent push for more data-driven teacher evaluation, observing how all along the way, teachers have been deemed ineffectual in some way or another — failing to fulfill whatever (political) goals the public education system has demanded be met, be those goals economic, civic, or academic.
As Goldstein describes it, public education is a labor issue; and it has been, it’s important to note, since well before the advent of teacher unions.
The Teacher Wars and Teaching Machines
Framing education this way — around teachers and, by extension, around labor — has important implications for ed-tech. What happens if we examine the history of teaching alongside the history of teaching machines? As I’ve argued before, the history of public education in the US, particularly in the 20th century, is deeply intertwined with various education technologies – film, TV, radio, computers, the Internet – devices that are often promoted as improving access or as making an outmoded system more “modern.” But ed-tech is frequently touted too as “labor-saving” and as a corrective to teachers’ inadequacies and inefficiencies.
It’s hardly surprising, in this light, that teachers have long looked with suspicion at new education technologies. With their profession constantly under attack, many teachers are worried, no doubt, that new tools are poised to replace them. Much is said to quiet these fears, with education reformers and technologists insisting again and again that replacing teachers with tech is not the intention.
And yet the sentiment of science fiction writer Arthur C. Clarke probably does resonate with a lot of people, as a line from his 1980 Omni Magazine article on computer-assisted instruction is echoed by all sorts of pundits and politicians: “Any teacher who can be replaced by a machine should be.”
Of course, you do find people like former Washington DC mayor Adrian Fenty – best known, arguably, via his school chancellor Michelle Rhee – who’ll come right out and say to a crowd of entrepreneurs and investors, “If we fire more teachers, we can use that money for more technology.”
So it’s hard to ignore the role that technology increasingly plays in contemporary education (labor) policies – as Goldstein describes them, the weakening of teachers’ tenure protections alongside an expansion of standardized testing to measure “student learning,” all in the service of finding and firing “bad teachers.” The growing data collection and analysis enabled by schools’ adoption of ed-tech feeds into the politics and practices of employee surveillance.
Just as Goldstein discovered in the course of writing her book that the current “teacher wars” have a lengthy history, so too does ed-tech’s role in the fight.
As Sidney Pressey, the man often credited with developing the first teaching machine, wrote in 1933 (from a period Goldstein links to “patriotic moral panics” and concerns about teachers’ political leanings),
There must be an “industrial revolution” in education, in which educational science and the ingenuity of educational technology combine to modernize the grossly inefficient and clumsy procedures of conventional education. Work in the schools of the future will be marvelously though simply organized, so as to adjust almost automatically to individual differences and the characteristics of the learning process. There will be many labor-saving schemes and devices, and even machines — not at all for the mechanizing of education but for the freeing of teacher and pupil from the educational drudgery and incompetence.
Or as B. F. Skinner, the man most associated with the development of teaching machines, wrote in 1953 (one year before the landmark Brown v Board of Education),
Will machines replace teachers? On the contrary, they are capital equipment to be used by teachers to save time and labor. In assigning certain mechanizable functions to machines, the teacher emerges in his proper role as an indispensable human being. He may teach more students than heretofore — this is probably inevitable if the world-wide demand for education is to be satisfied — but he will do so in fewer hours and with fewer burdensome chores.
These quotations highlight the longstanding hopes and fears about teaching labor and teaching machines; they hint too at some of the ways in which the work of Pressey and Skinner and others coincides with what Goldstein’s book describes: the ongoing concerns about teachers’ politics and competencies.
The Drudgery of School
One of the things that’s striking about Skinner and Pressey’s remarks on teaching machines, I think, is that they recognize the “drudgery” of much of teachers’ work. But rather than fundamentally change school – rather than ask why so much of the job of teaching entails “burdensome chores” – education technology seems more likely to offload that drudgery to machines. (One of the best contemporary examples of this perhaps: automated essay grading.)
This has powerful implications for students, who – let’s be honest – suffer through this drudgery as well.
Goldstein’s book doesn’t really address students’ experiences. Her history of public education is focused on teacher labor more than on student learning. As a result, student labor is missing from her analysis. This isn’t a criticism of the book, and it isn’t just Goldstein who does this. Student labor in the history of public education remains largely under-theorized and certainly underrepresented. Cue AFT president Al Shanker’s famous statement: “Listen, I don’t represent children. I represent the teachers.”
But this question of student labor seems to be incredibly important to consider, particularly with the growing adoption of education technologies. Students’ labor – students’ test results, students’ content, students’ data – feeds the measurements used to reward or punish teachers. Students’ labor feeds the algorithms – algorithms that further this larger narrative about teacher inadequacies, sure, and that serve to financially benefit technology, testing, and textbook companies, the makers of today’s “teaching machines.”
Teaching Machines and the Future of Collective Action
The promise of teaching machines has long been to allow students to move “at their own pace” through the curriculum. “Personalized learning,” it’s often called today (although the phrase often refers only to “personalization” in terms of the pace, not in terms of the topics of inquiry). This means, supposedly, that instead of whole class instruction, the “work” of teaching changes: in the words of one education reformer, “with the software taking up chores like grading math quizzes and flagging bad grammar, teachers are freed to do what they do best: guide, engage, and inspire.”
Again, it’s not clear how this changes the work of students.
So what are the implications – not just pedagogically but politically – of students, headphones on, staring at their individual computer screens, working alone through various exercises? Because let’s remember: teaching machines and all education technologies are ideological. What are the implications – not just pedagogically but politically – of these technologies’ emphasis on individualism, self-management, personal responsibility, and autonomy?
What happens to discussion and debate, for example, in a classroom of teaching machines and “personalized learning”? What happens, in a world of schools geared toward individual student achievement, to the community development that schools (at their best, at least) are also tasked to support?
What happens to organizing? What happens to collective action? And by collectivity here, let’s be clear, I don’t mean simply “what happens to teachers’ unions?” If we think about The Teacher Wars and teaching machines side by side, we should recognize that our analysis of (and our actions surrounding) the labor issues of school needs to go much deeper and farther than that.
_____
Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this review first appeared.