Even if they haven’t seen the movie, people above a certain age will remember Jack Nicholson’s final speech in A Few Good Men: “You don’t want the truth, because deep down in places you don’t talk about at parties, you want me on that wall. You need me on that wall.” Nicholson’s character, a Marine colonel, is confessing his guilt for having had one of his men beaten to death. He confesses because he believes he was right, and he believes that, deep down in places they don’t talk about at parties, his fellow Americans know he was right. Sometimes defending the nation will require breaking the rules. It will require getting your hands dirty.
In the midst of America’s many high-energy debates about immigration and the building and manning of walls, there is a simple moral truth that has been overlooked. It’s that truth, I think, that has made this speech, from Aaron Sorkin’s maiden effort, one of the most quoted in Hollywood history. It’s the same truth that gives such emotional sizzle to the formula “thank you for your service,” and does so even when those words sound, as they often do, and not just to veterans, shallow, ignorant, and insufficient. The truth is that we depend on people far away over the horizon, doing and suffering unspeakable things so that we can live our more or less ordinary, more or less comfortable lives. We are the beneficiaries of their labors. And we know it.
This is clear enough where the subject is the uniformed men and women who are placed, as the saying goes, “in harm’s way.” As an Air Force pilot told journalist David Wood in 2014, “There are two kinds of people: those who serve, and those who expect to be served.” The thing is, this division of humanity doesn’t only apply to civilians thinking about what is done and suffered by soldiers. As the pilot’s words involuntarily suggest, it also applies to patrons being served in a restaurant–very likely by people who have also come from somewhere beyond the horizon. It applies to anyone who has a cup of coffee or checks her iPhone. We are also the beneficiaries of the people who cultivated the coffee beans and put the chips in the iPhone, many of whom have to deal with as much harm and unpleasantness as the soldiers who serve the country overseas.
They too get their hands dirty. Perhaps dirtier. And again, we know it. The rash of suicides at Foxconn, where iPhones are assembled, became common knowledge in 2010, as did the installation of suicide nets to stop more workers from throwing themselves off the roof, and further threats of mass suicide in 2016. Brazil, the world’s largest coffee producer, has been accused of exploiting its workers under conditions “analogous to slavery.” When we pronounce the innocent-sounding words “global economic inequality,” what we’re talking about is violence on the other side of the wall.
In spite of this knowledge, little is being done about global economic inequality. Why not? It’s not enough to say that poor foreigners don’t vote in American elections. They don’t, but neither do many poor Americans. Where Americans feel responsible, they are often willing to take some sort of action. The problem is that most people don’t feel responsible–don’t feel personally responsible–for global economic inequality. And as Yascha Mounk argued in The Age of Responsibility: Luck, Choice, and the Welfare State, published by Harvard last year, we have been told again and again that the only real responsibility is personal responsibility.
That’s why it’s good to remember “thank you for your service.”
Anyone who pronounces those words of heartfelt gratitude or resonates to them when they are pronounced by others is offering evidence that they do, after all, believe in collective responsibility. Collective responsibility: our responsibility as beneficiaries of the system to feel the weight of what is done on our behalf beyond the horizon and to make sure that those who do it are justly rewarded for it. If we are capable of feeling collectively responsible for the actions of the military, then we should be able to expand the geographical and social scale of our gratitude. Why should it not extend to those who serve not with arms but with their work? Why should it not pass from Americans on the wall (whom you may still want to reserve the right to judge) to non-Americans in the fields, on the assembly lines, and sometimes trying to escape violence by passing over to our side of the wall? Deep down, in places you don’t talk about at parties, you know you owe them, too, a debt.
Bruce Robbins is the author of The Beneficiary, which came out from Duke University Press in December 2017.
With bright pink hair and a rainbow horn, the disembodied head of a unicorn bobs back and forth to the opening beats of Big Boi’s “All Night.” Moments later, a pile of poop appears and mouths the song’s opening words, and various animated animal heads appear nodding along in sequence. Soon the unicorn returns, lip-synching the song, and it is quickly joined by a woman whose movements, facial expressions, and exaggerated enunciations sync with those of the unicorn. As a pig, a robot, a chicken, and a cat appear to sing in turn it becomes clear that the singing emojis are actually mimicking the woman – the cat blinks when she blinks, it raises its brow when she does. The ad ends by encouraging users to “Animoji” themselves, something which is evidently doable with Apple’s iPhone X. It is a silly ad, with a catchy song, and unsurprisingly it tells the viewer nothing about where, how, or by whom the iPhone X was made. The ad may playfully feature the ever-popular “pile of poop” emoji, but the ad is not intended to make potential purchasers feel like excrement.
And yet there is much more to the iPhone X’s history than the words on the device’s back: “Designed by Apple in California. Assembled in China.” In Goodbye iSlave: A Manifesto for Digital Abolition, Jack Linchuan Qiu removes the phone’s shiny case to explore what “assembled in China” really means. As Qiu demonstrates in discomforting detail, this is a story that involves exploitative labor practices, enforced overtime, abusive managers, substandard living quarters, and wage theft, in a system that he argues is similar to slavery.
Launched by activists in 2010, the “iSlave” campaign aimed to raise awareness about the labor conditions that had led to a wave of suicides amongst Foxconn workers – workers whose labor is neatly summed up as “assembled in China.” Seizing upon the campaign’s key term, Qiu aims to expand it “figuratively and literally” to demonstrate that “iSlavery” is “a planetary system of domination, exploitation, and alienation…epitomized by the material and immaterial structures of capital accumulation” (9). This in turn underscores the “world system of gadgets” that Qiu refers to as “Appconn” (13); a system that encompasses those who “designed” the devices, those who “assembled” them, as well as those who use them. In engaging with the terminology of slavery, Qiu is consciously laying out a provocative argument, but it is a provocation that acknowledges that as smartphones have become commonplace many consumers have become inured to the injustices that allow them to “animoji” themselves. Indeed, it is a reminder that, “Technology does not guarantee progress. It is, instead, often abused to cause regress” (8).
Surveying the history of slavery, Qiu notes that it has appeared in a variety of forms across many regions and eras. Though he emphasizes that even today slavery “persists in its classic forms” (21), his focus remains on theoretically expanding the term. Qiu draws upon the League of Nations’ “1926 Slavery Convention,” which still acts as the foundation for much contemporary legal thinking on slavery, including the 2012 Bellagio-Harvard Guidelines on the Legal Parameters of Slavery (which Qiu includes in his book as an appendix). These legal guidelines expand the definition of what constitutes slavery to include “institutions and practices similar to slavery” (42). The key element for this updated definition is an understanding that it is no longer legal for a person to be “formally and legally ‘owned’ in any jurisdiction” and thus the concept of slavery requires rethinking (45). In considering which elements from the history of slavery are particularly relevant for the story of “iSlavery,” Qiu emphasizes: how the slave trade made use of advanced technologies of its time (guns, magnetic compasses, slave ships); how the slave trade was linked to creating and satisfying consumer desires (sugar); and how the narrative of resistance and revolt is a key aspect of the history of slavery. For Qiu, “iSlavery” is manifested in two forms: “manufacturing iSlaves” and “manufactured iSlaves.”
In the process of creating high-tech gadgets there are many types of “manufacturing iSlaves,” in conditions similar to slavery “in its classic forms” including “Congolese mine workers” and “Indonesian child labor,” but Qiu focuses primarily on those working for Foxconn in China. Drawing upon news reports, NGO findings, interviews with former workers, underground publications produced by factory workers, and his own experiences visiting these assembly plants, Qiu investigates the many ways in which “institutions and practices similar to slavery” shape the lives of Foxconn workers. Substandard living conditions, low wages that are often not even paid, forced overtime, “student interns” being used as an even cheaper labor force, violently abusive security guards, the arrangement of life so as to maximize disorientation and alienation – these represent some of the common experiences of Foxconn workers. Foxconn found itself uncomfortably in the news in 2010 due to a string of worker suicides, and Qiu sympathetically portrays the conditions that gave rise to such acts, particularly in his interview with Tian Yu, who survived her suicide attempt.
As Qiu makes clear, Foxconn workers often have great difficulty leaving the factories, but what exits these factories at a considerable rate are mountains of gadgets that go on to be eagerly purchased and used by the “manufactured iSlaves.” The transition to the “manufactured iSlave” entails “a conceptual leap” (91) that moves away from the “practices similar to slavery” that define the “manufacturing iSlave” to instead signify “those who are constantly attached to their gadgets” (91). Here the compulsion takes on the form of a vicious consumerism that has resulted in an “addiction” to these gadgets, and a sense in which these gadgets have come to govern the lives of their users. Drawing upon the work of Judy Wajcman, Qiu notes that “manufactured iSlaves” (Qiu’s term) live under the aegis of “iTime” (Wajcman’s term), a world of “consumerist enslavement” into which they’ve been drawn by “Net Slaves” (Steve Baldwin and Bill Lessard’s term of “accusation and ridicule” for those whose jobs fit under the heading “Designed in California”). While some companies have made fortunes off the material labor of “manufacturing iSlaves,” Qiu emphasizes that many companies have made their fortunes off the immaterial labor of legions of “manufactured iSlaves” dutifully clicking “like,” uploading photos, and hitting “tweet,” all without any expectation that they will be paid for their labor. Indeed, in Qiu’s analysis, what keeps many “manufactured iSlaves” unaware of their shackles is that they don’t see what they are doing on their devices as labor.
In his description of the history of slavery, Qiu emphasizes resistance, both in terms of acts of rebellion by enslaved peoples, and the broader abolition movement. This informs Qiu’s commentary on pushing back against the system of Appconn. While smartphones may be cast as the symbol of the exploitation of Foxconn workers, Qiu also notes that these devices allow for acts of resistance by these same workers “whose voices are increasingly heard online” (133). Foxconn factories may take great pains to remain closed off from prying eyes, but workers armed with smartphones are “breaching the lines of information lockdown” (148). Campaigns by national and international NGOs can also be important in raising awareness of the plight of Foxconn workers; after all, the term “iSlave” was originally coined as part of such a campaign. In bringing awareness of the “manufacturing iSlave” to the “manufactured iSlave,” Qiu points to “culture jamming” responses such as the “Phone Story” game, which allows people to “play” through their phone’s vainglorious tale (ironically, the game was banned from Apple’s App Store). Qiu also points to the attempt to create ethical gadgets, such as the Fairphone, which aims to responsibly source its minerals, pay those who assemble its phones a living wage, and push back against the drive of planned obsolescence. As Qiu makes clear, there are many working to fight against the oppression built into Appconn.
“For too long,” Qiu notes, “the underbellies of the digital industries have been obscured and tucked away; too often, new media is assumed to represent modernity, and modernity assumed to represent freedom” (172). Qiu highlights the coercion and misery that are lurking below the surface of every silly cat picture uploaded on Instagram, and he questions whether the person doing the picture taking and uploading is also being exploited. A tough and confrontational book, Goodbye iSlave nevertheless maintains hope for meaningful resistance.
Anyone who has used a smartphone, tablet, laptop computer, e-reader, video game console, or smart speaker would do well to read Goodbye iSlave. In tight, effective prose, Qiu presents a gripping portrait of the lives of Foxconn workers, and this description is made more confrontational by the uncompromising language Qiu deploys. And though Qiu begins his book by noting that “the outlook of manufacturing and manufactured iSlaves is rather bleak” (18), his focus on resistance gives his book the feeling of an activist manifesto as opposed to the bleak tonality of a woebegone dirge. By engaging with the exploitation of material labor and immaterial labor, Qiu is, furthermore, able to uncomfortably remind his readers not only that their digital freedom comes at a human cost, but that digital freedom may itself be a sort of shackle.
In the book’s concluding chapter, Qiu notes that he is “fully aware that slavery is a very severe critique” (172), and this represents one of the greatest challenges the book poses. Namely: what to make of Qiu’s use of the term slavery? As Qiu demonstrates, it is not a term that he arrived at simply for shock value; nevertheless, “slavery” is itself a complicated concept. Slavery carries a history of horrors that make one hesitant to deploy it in a simplistic fashion even as it remains a basic term of international law. By couching his discussion of “iSlavery” both in terms of history and contemporary legal thinking, Qiu demonstrates a breadth of sensitivity and understanding regarding its nuances. And given the focus of current laws on “institutions and practices similar to slavery” (42) it is hard to dispute that this is a fair description of many of the conditions to which Foxconn workers are subjected – even as Qiu’s comments on coltan miners demonstrate other forms of slavery that lurk behind the shining screens of high-tech society.
Nevertheless, there is frequently something about the use of the term “iSlavery” that diminishes the heft of Qiu’s argument. The term often serves as a stumbling block that pulls readers away from Qiu’s account, particularly when he makes the comparisons too direct, such as juxtaposing Foxconn’s (admittedly wretched) dormitories with conditions on slave ships crossing the Atlantic. It’s difficult not to find the comparison hyperbolic. Similarly, Qiu notes that ethnic and regional divisions are often found within Foxconn factories, but these do not truly seem comparable to the racist views that undergirded (and were used to justify) the Atlantic slave trade. Unfortunately, this is a problem that Qiu sets for himself: had he only used “slave” in a theoretical sense it would have opened him to charges of historical insensitivity, but by engaging with the history of slavery many of Qiu’s comparisons seem to miss the mark – and this is exacerbated by the fact that he repeatedly refers to ongoing conditions of “classic” slavery involved in the making of gadgets (such as coltan mining). Qiu provides an important and compelling window into the current legal framing of slavery, and yet, something about the “iSlave” prevents it from fitting into the history of slavery. It is, unfortunately, too easy to imagine someone countering Qiu’s arguments by saying “but this isn’t really slavery,” to which the retort of “current law defines slavery as…” will be unlikely to convince.
The matter of “slavery” only gets thornier as Qiu shifts his attention from “manufacturing iSlaves” to “manufactured iSlaves.” In recent years there has been a wealth of writing in the academic and popular spheres that critically asks what our gadgets are doing to us, such as Sherry Turkle’s Alone Together and Judy Wajcman’s Pressed for Time (which Qiu cites). And the fear that technology turns people into “cogs” is hardly new: in his 1956 book The Sane Society, Erich Fromm warned “the danger of the past was that men became slaves. The danger of the future is that men may become robots” (Fromm, 352). Fromm’s anxiety is what one more commonly encounters in discussions about what gadgets turn their users into, but these “robots” are not identical with “slaves.” When Qiu discusses “manufactured iSlaves” he notes that it represents a “conceptual leap,” but by continuing to use the term “slave” this “conceptual leap” unfortunately hampers his broader points about Foxconn workers. The danger is that a sort of false equivalency risks being created in which smartphone users shrug off their complicity in the exploitation of assembly workers by saying, “hey, I’m exploited too.”
Some of this challenge may ultimately simply be about word choice. The very term “iSlave,” despite its activist origins, risks sounding somewhat silly through its linkage to all things to which a lowercase “i” has been affixed. Furthermore, the use of the “i” risks placing all of the focus on Apple. True, Apple products are manufactured in the exploitative Foxconn factories, and Qiu may be on to something in referring to the “Apple cult,” but as Qiu himself notes, Foxconn manufactures products for a variety of companies. Just because a device isn’t an “i” gadget doesn’t mean that it wasn’t manufactured by an “iSlave.” And while Appconn is a nice shorthand for the world that is built upon the backs of both kinds of “iSlaves,” it risks being just another opaque neologism for computer-dominated society, one that is undercut by the need for it to be defined.
Given the grim focus of Qiu’s book, it is understandable why he should choose to emphasize rebellion and resistance, and these do allow readers to put down the book feeling energized. Yet some of these modes of resistance seem to risk more entanglement than escape. There is a risk that the argument that Foxconn workers can use smartphones to organize simply fits neatly back into the narrative that there is something “inherently liberating” about these devices. The “Phone Story” game may be a good teaching tool, but it seems to make a similar claim on the democratizing potential of the Internet. And while the Fairphone represents, perhaps, one of the more significant ways to get away from subsidizing Appconn, it risks being just an alternative for concerned consumers, not a legally mandated industry standard. At risk of an unfair comparison, a Fairphone seems like the technological equivalent of free-range eggs purchased at the farmers’ market – it may genuinely be ethically preferable, but it risks reducing a major problem (iSlavery) into yet another site for consumerism (just buy the right phone). In fairness, these are the challenges inherent in critiquing the dominant order; as Theodor Adorno once put it, “we live on the culture we criticize” (Adorno and Horkheimer, 105). It might be tempting to wish that Qiu had written an Appconn version of Jerry Mander’s Four Arguments for the Elimination of Television, but Qiu seems to recognize that simply telling people to turn it all off is probably just as efficacious as telling them not to do anything at all. After all, Mander’s “four arguments” may have convinced a few people – but not society as a whole. So, what then does “digital abolition” really mean?
In describing Goodbye iSlave, Qiu notes that it is “nothing more than an invitation—for everyone to reflect on the enslaving tendencies of Appconn and the world system of gadgets”; it is an opportunity for people to reflect on the ways in which “so many myths of liberation have been bundled with technological buzzwords, and they are often taken for granted” (173). It is a challenging book and an important one, and insofar as it forces readers to wrestle with Qiu’s choice of terminology it succeeds by making them seriously confront the regimes of material and immaterial labor that structure their lives. While the use of the term “slavery” may at times hamper Qiu’s larger argument, this unflinching look at the labor behind today’s gadgets should not be overlooked.
Goodbye iSlave frames itself as “a manifesto for digital abolition,” but what it makes clear is that this struggle ultimately isn’t about “i” but about “us.”
_____
Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently working towards a PhD in the History and Sociology of Science department at the University of Pennsylvania. His research areas include media refusal and resistance to technology, ideologies that develop in response to technological change, and the ways in which technology factors into ethical philosophy – particularly in regard to the ways in which Jewish philosophers have written about ethics and technology. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck, and is a frequent contributor to The b2 Review Digital Studies section.
By Audrey Watters
~
After decades of explosive growth, the future of for-profit higher education might not be so bright. Or, depending on where you look, it just might be…
In recent years, there have been a number of investigations – in the media, by the government – into the for-profit college sector and questions about these schools’ ability to effectively and affordably educate their students. Sure, advertising for for-profits is still plastered all over the Web, the airwaves, and public transportation, but as a result of journalistic and legal pressures, the lure of these schools may well be a lot less powerful. If nothing else, enrollment and profits at many for-profit institutions are down.
Despite the massive amounts of money spent by the industry to prop it up – not just on ads but on lobbying and legal efforts – the Obama Administration has made cracking down on for-profits a centerpiece of its higher education policy efforts, accusing these schools of luring students with misleading and overblown promises, often leaving them with low-status degrees sneered at by employers and with loans students can’t afford to pay back.
But the Obama Administration has also just launched an initiative that will make federal financial aid available to newcomers in the for-profit education sector: ed-tech experiments like “coding bootcamps” and MOOCs. Why are these particular for-profit experiments deemed acceptable? What do they do differently from the much-maligned for-profit universities?
School as “Skills Training”
In many ways, coding bootcamps do share the justification for their existence with for-profit universities. That is, they were founded in order to help to meet the (purported) demands of the job market: training people with certain technical skills, particularly those skills that meet the short-term needs of employers. Whether they meet students’ long-term goals remains to be seen.
I write “purported” here even though it’s quite common to hear claims that the economy is facing a “STEM crisis” – that too few people have studied science, technology, engineering, or math and employers cannot find enough skilled workers to fill jobs in those fields. But claims about a shortage of technical workers are debatable, and lots of data would indicate otherwise: wages in STEM fields have remained flat, for example, and many who graduate with STEM degrees cannot find work in their field. In other words, the crisis may be “a myth.”
But it’s a powerful myth, and one that isn’t terribly new, dating back at least to the launch of the Sputnik satellite in 1957 and subsequent hand-wringing over the Soviets’ technological capabilities and technical education as compared to the US system.
There are actually a number of narratives – some of them competing narratives – at play here in the recent push for coding bootcamps, MOOCs, and other ed-tech initiatives: that everyone should go to college; that college is too expensive – “a bubble” in the Silicon Valley lexicon; that alternate forms of credentialing will be developed (by the technology sector, naturally); that the tech sector is itself a meritocracy, and college degrees do not really matter; that earning a degree in the humanities will leave you unemployed and burdened by student loan debt; that everyone should learn to code. Much like that supposed STEM crisis and skill shortage, these narratives might be powerful, but they too are hardly provable.
Nor is the promotion of a more business-focused education that new either.
Foster’s Commercial School of Boston, founded in 1832 by Benjamin Franklin Foster, is often recognized as the first school established in the United States for the specific purpose of teaching “commerce.” Many other commercial schools opened on its heels, most located in the Atlantic region in major trading centers like Philadelphia, Boston, New York, and Charleston. As the country expanded westward, so did these schools. Bryant & Stratton College was founded in Cleveland in 1854, for example, and it established a chain of schools, promising to open a branch in every American city with a population of more than 10,000. By 1864, it had opened more than 50, and the chain is still in operation today with 18 campuses in New York, Ohio, Virginia, and Wisconsin.
The curriculum of these commercial colleges was largely based around the demands of local employers alongside an economy that was changing due to the Industrial Revolution. Schools offered courses in bookkeeping, accounting, penmanship, surveying, and stenography. This was in marked contrast to those universities built on a European model, which tended to teach topics like theology, philosophy, and classical language and literature. If these universities were “elitist,” the commercial colleges were “popular” – there were over 70,000 students enrolled in them in 1897, compared to just 5,800 in colleges and universities – something that highlights what’s a familiar refrain still today: that traditional higher ed institutions do not meet everyone’s needs.
The existence of the commercial colleges became intertwined with many success stories of the nineteenth century: Andrew Carnegie attended night school in Pittsburgh to learn bookkeeping, and John D. Rockefeller studied banking and accounting at Folsom’s Commercial College in Cleveland. The type of education offered at these schools was promoted as a path to becoming a “self-made man.”
That’s the story that still gets told: these sorts of classes open up opportunities for anyone to gain the skills (and perhaps the certification) that will enable upward mobility.
It’s a story echoed in the ones told about (and by) John Sperling as well. Born into a working class family, Sperling worked as a merchant marine, then attended community college during the day and worked as a gas station attendant at night. He later transferred to Reed College, went on to UC Berkeley, and completed his doctorate at Cambridge University. But Sperling felt as though these prestigious colleges catered to privileged students; he wanted a better way for working adults to be able to complete their degrees. In 1976, he founded the University of Phoenix, one of the largest for-profit colleges in the US, which at its peak in 2010 enrolled almost 600,000 students.
Other well-known names in the business of for-profit higher education: Walden University (founded in 1970), Capella University (founded in 1993), Laureate Education (founded in 1999), DeVry University (founded in 1931), Education Management Corporation (founded in 1962), Strayer University (founded in 1892), Kaplan University (founded in 1937 as The American Institute of Commerce), and Corinthian Colleges (founded in 1995 and defunct in 2015).
It’s important to recognize the connection of these for-profit universities to older career colleges, and it would be a mistake to see these organizations as distinct from the more recent development of MOOCs and coding bootcamps. Kaplan, for example, acquired the code school Dev Bootcamp in 2014. Laureate Education is an investor in the MOOC provider Coursera. The Apollo Education Group, the University of Phoenix’s parent company, is an investor in the coding bootcamp The Iron Yard.
Much like the worries about today’s for-profit universities, even the earliest commercial colleges were frequently accused of being “purely business speculations” – “diploma mills” – mishandled by administrators who put the bottom line over the needs of students. There were concerns about the quality of instruction and about the value of the education students were receiving.
That’s part of the apprehension about for-profit universities’ most recent manifestations too: that these schools are charging a lot of money for a certification that, at the end of the day, means little. But at least the nineteenth century commercial colleges were affordable, UC Berkeley history professor Caitlin Rosenthal argues in a 2012 op-ed in Bloomberg:
The most common form of tuition at these early schools was the “life scholarship.” Students paid a lump sum in exchange for unlimited instruction at any of the college’s branches – $40 for men and $30 for women in 1864. This was a considerable fee, but much less than tuition at most universities. And it was within reach of most workers – common laborers earned about $1 per day and clerks’ wages averaged $50 per month.
Many of these “life scholarships” promised that students who enrolled would land a job – and if they didn’t, they could always continue their studies. That’s quite different from the tuition at today’s colleges – for-profit or not-for-profit – which comes with no such guarantee.
Interestingly, several coding bootcamps do make this promise. A 48-week online program at Bloc will run you $24,000, for example. But if you don’t find a job that pays $60,000 after four months, your tuition will be refunded, the startup has pledged.
According to a recent survey of coding bootcamp alumni, 66% of graduates do say they’ve found employment (63% of them full-time) in a job that requires the skills they learned in the program. 89% of respondents say they found a job within 120 days of completing the bootcamp. Yet 21% say they’re unemployed – a number that seems quite high, particularly in light of that supposed shortage of programming talent.
For-Profit Higher Ed: Who’s Being Served?
The gulf between for-profit higher ed’s promise of improved job prospects and the realities of graduates’ employment, along with the price tag on its tuition rates, is one of the reasons that the Obama Administration has advocated for “gainful employment” rules. These would measure and monitor the debt-to-earnings ratio of graduates from career colleges and in turn penalize those schools whose graduates had annual loan payments of more than 8% of their wages or 20% of their discretionary earnings. (The gainful employment rules only apply to those schools that are eligible for Title IV federal financial aid.)
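The arithmetic behind those two thresholds can be sketched in a few lines. This is an illustration rather than the regulation’s official formula: the 8% and 20% figures come from the rule as described above, while the definition of discretionary earnings (income above 150% of the federal poverty guideline) and the guideline figure used here are assumptions made for the sake of the example.

```python
# Hedged sketch of the "gainful employment" debt-to-earnings test.
# The 8%/20% thresholds are from the rule as described; the discretionary-
# earnings definition (income above 150% of the poverty guideline) and the
# guideline value below are illustrative assumptions.

POVERTY_GUIDELINE = 11_770  # approximate one-person guideline, illustrative

def passes_gainful_employment(annual_loan_payment, annual_earnings):
    """Return True if graduates' debt burden falls under either threshold."""
    # Pass if payments are at most 8% of total earnings...
    if annual_loan_payment / annual_earnings <= 0.08:
        return True
    # ...or at most 20% of discretionary earnings (when any exist).
    discretionary = annual_earnings - 1.5 * POVERTY_GUIDELINE
    return discretionary > 0 and annual_loan_payment / discretionary <= 0.20

# A graduate earning $30,000 with $3,000/year in payments: 10% of wages and
# roughly 24% of discretionary earnings, so the program fails both tests.
print(passes_gainful_employment(3_000, 30_000))  # False
```

The "either/or" structure matters: a program with well-paid graduates can pass on the wage test alone, while one serving low earners must clear the tighter discretionary-income bar.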
The data is still murky about how much debt attendees at coding bootcamps accrue and how “worth it” these programs really might be. According to the aforementioned survey, the average tuition at these programs is $11,852. This figure might be a bit misleading, as the price tag and the length of bootcamps vary greatly. Moreover, many programs, such as App Academy, offer their program for free (well, plus a $5000 deposit) but then require that graduates pay up to 20% of their first year’s salary to the school. So while the tuition might appear to be low in some cases, the indebtedness might actually be quite high.
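A quick back-of-the-envelope comparison, using only the figures quoted above, shows how a nominally “free” program can end up costing more than the survey’s average sticker price. (Treating the deposit as a cost the graduate ultimately bears is my assumption; actual repayment terms vary by school.)

```python
AVG_SURVEY_TUITION = 11_852   # average bootcamp tuition, per the survey cited

def deferred_cost(first_year_salary, salary_share=0.20, deposit=5_000):
    """Effective cost of a nominally 'free' program that takes a share
    of the graduate's first-year salary, plus an upfront deposit."""
    return deposit + salary_share * first_year_salary

# For a graduate earning $60,000, the 'free' model costs $17,000,
# well above the average sticker price of the surveyed bootcamps.
cost = deferred_cost(60_000)
print(cost, cost > AVG_SURVEY_TUITION)  # 17000.0 True
```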
According to Course Report’s survey, 49% of graduates say that they paid tuition out of their own pockets, 21% say they received help from family, and just 1.7% say that their employer paid (or helped with) the tuition bill. Almost 25% took out a loan.
That percentage – those going into debt for a coding bootcamp program – has increased quite dramatically over the last few years. (Less than 4% of graduates in the 2013 survey said that they had taken out a loan.) In part, that’s due to the rapid expansion of the private loan industry geared towards serving this particular student population. (Incidentally, the two ed-tech companies that have raised the most money in 2015 are both loan providers: SoFi and Earnest. The former has raised $1.2 billion in venture capital this year; the latter, $245 million.)
The Obama Administration’s newly proposed “EQUIP” experiment will open up federal financial aid to some coding bootcamps and other ed-tech providers (like MOOC platforms), but it’s important to underscore some of the key differences here between federal loans and private-sector loans: federal student loans don’t have to be repaid until you graduate or leave school; federal student loans offer forbearance and deferment if you’re struggling to make payments; federal student loans have a fixed interest rate, often lower than private loans; federal student loans can be forgiven if you work in public service; federal student loans (with the exception of PLUS loans) do not require a credit check. The latter in particular might help to explain the demographics of those who are currently attending coding bootcamps: if they’re having to pay out-of-pocket or take loans, students are much less likely to be low-income. Indeed, according to Course Report’s survey, the cost of the bootcamps and whether or not they offered a scholarship was one of the least important factors when students chose a program.
Here’s a look at some coding bootcamp graduates’ demographic data (as self-reported):
It’s worth considering how the demographics of students in MOOCs and coding bootcamps may (or may not) be similar to those enrolled at other for-profit post-secondary institutions, particularly since all of these programs tend to invoke the rhetoric about “democratizing education” and “expanding access.” Access for whom?
Some two million students were enrolled in for-profit colleges in 2010, up from 400,000 a decade earlier. These students are disproportionately older, African American, and female when compared to the entire higher ed student population. While one in 20 of all students is enrolled in a for-profit college, one in 10 African American students, one in 14 Latino students, and one in 14 first-generation college students are enrolled at a for-profit. Students at for-profits are more likely to be single parents. They’re less likely to enter with a high school diploma. Dependent students in for-profits have about half as much family income as students in not-for-profit schools. (This demographic data is drawn from the NCES and from Harvard University researchers David Deming, Claudia Goldin, and Lawrence Katz in their 2013 study on for-profit colleges.)
Deming, Goldin, and Katz argue that
The snippets of available evidence suggest that the economic returns to students who attend for-profit colleges are lower than those for public and nonprofit colleges. Moreover, default rates on student loans for proprietary schools far exceed those of other higher-education institutions.
According to one 2010 report, just 22% of first-time, full-time students pursuing Bachelor’s degrees at for-profit colleges in 2008 graduated, compared to 55% and 65% of students at public and private non-profit universities respectively. Of the more than 5000 career programs that the Department of Education tracks, 72% of those offered by for-profit institutions produce graduates who earn less than high school dropouts.
For their part, today’s MOOCs and coding bootcamps also boast that their students will find great success on the job market. Coursera, for example, recently surveyed its students who’d completed one of its online courses and 72% who responded said they had experienced “career benefits.” But without the mandated reporting that comes with federal financial aid, a lot of what we know about their student population and student outcomes remains pretty speculative.
What kind of students benefit from coding bootcamps and MOOC programs, the new for-profit education? We don’t really know… although based on the history of higher education and employment, we can guess.
EQUIP and the New For-Profit Higher Ed
On October 14, the Obama Administration announced a new initiative, the Educational Quality through Innovative Partnerships (EQUIP) program, which will provide a pathway for unaccredited education programs like coding bootcamps and MOOCs to become eligible for federal financial aid. According to the Department of Education, EQUIP is meant to open up “new models of education and training” to low income students. In a press release, it argues that “Some of these new models may provide more flexible and more affordable credentials and educational options than those offered by traditional higher institutions, and are showing promise in preparing students with the training and education needed for better, in-demand jobs.”
The EQUIP initiative will partner accredited institutions with third-party providers, loosening the “50% rule” that prohibits accredited schools from outsourcing more than 50% of an accredited program. Since bootcamps and MOOC providers “are not within the purview of traditional accrediting agencies,” the Department of Education says, “we have no generally accepted means of gauging their quality.” So those organizations that apply for the experiment will have to provide an outside “quality assurance entity,” which will help assess “student outcomes” like learning and employment.
By making financial aid available for bootcamps and MOOCs, one does have to wonder if the Obama Administration is not simply opening the doors for more of precisely the sort of practices that the for-profit education industry has long been accused of: expanding rapidly, lowering the quality of instruction, focusing on marketing to certain populations (such as veterans), and profiting off of taxpayer dollars.
Who benefits from the availability of aid? And who benefits from its absence? (“Who” here refers to students and to schools.)
Shawna Scott argues in “The Code School-Industrial Complex” that without oversight, coding bootcamps re-inscribe the dominant beliefs and practices of the tech industry. Despite all the talk of “democratization,” this is a new form of gatekeeping.
Before students are even accepted, school admission officers often select for easily marketable students, which often translates to students with the most privileged characteristics. Whether through intentionally targeting those traits because it’s easier to ensure graduates will be hired, or because of unconscious bias, is difficult to discern. Because schools’ graduation and employment rates are their main marketing tool, they have a financial stake in only admitting students who are at low risk of long-term unemployment. In addition, many schools take cues from their professional developer founders and run admissions like they hire for their startups. Students may be subjected to long and intensive questionnaires, phone or in-person interviews, or be required to submit a ‘creative’ application, such as a video. These requirements are often onerous for anyone working at a paid job or as a caretaker for others. Rarely do schools proactively provide information on alternative application processes for people of disparate ability. The stereotypical programmer is once again the assumed default.
And so, despite the recent moves to sanction certain ed-tech experiments, some in the tech sector have been quite vocal in their opposition to more regulations governing coding schools. It’s not just EQUIP either; there was much outcry last year after several states, including California, “cracked down” on bootcamps. Many others have framed the entire accreditation system as a “cabal” that stifles innovation. “Innovation” in this case implies alternate certificate programs – not simply Associate’s or Bachelor’s degrees – in timely, technical topics demanded by local/industry employers.
Of course, there is an institution that’s long offered alternate certificate programs in timely, technical topics demanded by local/industry employers, and that’s the community college system.
Like much of public higher education, community colleges have seen their funding shrink in recent decades and have been tasked to do more with less. For community colleges, it’s a lot more with a lot less. Open enrollment, for example, means that these schools educate students who require more remediation. Yet despite many community college students being “high need,” community colleges spend far less per pupil than do four-year institutions. Deep budget cuts have also meant that even with their open enrollment policies, community colleges are having to restrict admissions. In 2012, some 470,000 students in California were on waiting lists, unable to get into the courses they need.
This is what we know from history: as funding for public higher ed decreased – for two- and four-year schools alike – for-profit higher ed expanded, promising precisely what today’s MOOCs and coding bootcamps now insist they’re the first and only ones to offer: innovative programs, training students in the kinds of skills that will lead to good jobs. History tells us otherwise…
_____
Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely read Hack Education blog, on which an earlier version of this essay first appeared, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.
a review of N. Katherine Hayles, How We Think: Digital Media and Contemporary Technogenesis (Chicago, 2012)
by R. Joshua Scannell
~
In How We Think, N. Katherine Hayles addresses a number of increasingly urgent problems facing both the humanities in general and scholars of digital culture in particular. In keeping with the research interests she has explored at least since 2002’s Writing Machines (MIT Press), Hayles examines the intersection of digital technologies and humanities practice to argue that contemporary transformations in the orientation of the University (and elsewhere) are attributable to shifts that ubiquitous digital culture has engendered in embodied cognition. She calls this process of mutual evolution between the computer and the human technogenesis (a term most widely associated with the work of Bernard Stiegler, although Hayles’s theories often aim in a different direction from Stiegler’s). Hayles argues that technogenesis is the basis for the reorientation of the academy, including students, away from established humanistic practices like close reading. Put another way, not only have we become posthuman (as Hayles discusses in her landmark 1999 University of Chicago Press book, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics), but our brains have begun to evolve to think with computers specifically and digital media generally. Rather than a rearguard eulogy for the humanities that was, Hayles advocates for an opening of the humanities to digital dromology; she sees the Digital Humanities as a particularly fertile ground from which to reimagine the humanities generally.
Hayles is an exceptional scholar, and while her theory of technogenesis is not particularly novel, she articulates it with a clarity and elegance that are welcome and useful in a field that is often cluttered with good ideas, unintelligibly argued. Her close engagement with work across a range of disciplines – from Hegelian philosophy of mind (Catherine Malabou) to theories of semiosis and new media (Lev Manovich) to experimental literary production – grounds an argument about the necessity of transmedial engagement in an effective praxis. Moreover, she ably shifts generic gears over the course of a relatively short manuscript, moving from quasi-ethnographic engagement with University administrators, to media archaeology a la Friedrich Kittler, to contemporary literary theory, with grace. Her critique of the humanities that is, therefore, doubles as a praxis: she is actually producing the discipline-flouting work that she calls on her colleagues to pursue.
The debate about the death and/or future of the humanities is weather worn, but Hayles’s theory of technogenesis as a platform for engaging in it is a welcome change. For Hayles, the technogenetic argument centers on temporality, and the multiple temporalities embedded in computer processing and human experience. She envisions this relation as cybernetic, in which computer and human are integrated as a system through the feedback loops of their coemergent temporalities. So, computers speed up human responses, which lag behind innovations, which prompt beta test cycles at quicker rates, which demand that humans behave affectively, nonconsciously. The recursive relationship between human duration and machine temporality effectively mutates both. Humanities professors might complain that their students cannot read “closely” like they used to, but for Hayles this is a failure of those disciplines to imagine methods in step with technological changes. Instead of digital media making us “dumber” by reducing our attention spans, as Nicholas Carr argues, Hayles claims that the movement towards what she calls “hyper reading” is an ontological and biological fact of embodied cognition in the age of digital media. If “how we think” were posed as a question, the answer would be: bodily, quickly, cursorily, affectively, non-consciously.
Hayles argues that this doesn’t imply an eliminative teleology of human capacity, but rather an opportunity to think through novel, expansive interventions into this cyborg loop. We may be thinking (and feeling, and experiencing) differently than we used to, but this remains a fact of human existence. Digital media has shifted the ontics of our technogenetic reality, but it has not fundamentally altered its ontology. Morphological biology, in fact, entails ontological stability. To be human, and to think like one, is to be with machines, and to think with them. The kids, in other words, are all right.
This sort of quasi-Derridean or Stieglerian Hegelianism is obviously not uncommon in media theory. As Hayles deploys it, this disposition provides a powerful framework for thinking through the relationship of humans and machines without ontological reductivism on either end. Moreover, she engages this theory in a resolutely material fashion, evading the enervating tendency of many theorists in the humanities to reduce actually existing material processes to metaphor and semiosis. Her engagement with Malabou’s work on brain plasticity is particularly useful here. Malabou has argued that the choice facing the intellectual in the age of contemporary capitalism is between plasticity and self-fashioning. Plasticity is a quintessential demand of contemporary capitalism, whereas self-fashioning opens up radical possibilities for intervention. The distinction between these two potentialities, however, is unclear – and therefore demands an ideological commitment to the latter. Hayles is right to point out that this dialectic insufficiently accounts for the myriad ways in which we are engaged with media, and are in fact produced, bodily, by it.
But while Hayles’ critique is compelling, the responses she posits may be less so. Against what she sees as Malabou’s snide rejection of the potential of media, she argues
It is precisely because contemporary technogenesis posits a strong connection between ongoing dynamic adaptation of technics and humans that multiple points of intervention open up. These include making new media…adapting present media to subversive ends…using digital media to reenvision academic practices, environments and strategies…and crafting reflexive representations of media self fashionings…that call attention to their own status as media, in the process raising our awareness of both the possibilities and dangers of such self-fashioning. (83)
With the exception of the ambiguous labor done by the word “subversive,” this reads like a catalog of demands made by administrators seeking to offload ever-greater numbers of students into MOOCs. This is unfortunately indicative of what is, throughout the book, a basic failure to engage with the political economics of “digital media and contemporary technogenesis.” Not every book must explicitly be political, and there is little more ponderous than the obligatory, token consideration of “the political” that so many media scholars feel compelled to make. And yet, this is a text that claims to explain “how” “we” “think” under post-industrial, cognitive capitalism, and so the lack of this engagement cannot help but show.
Universities across the country are collapsing due to lack of funding, students are practically reduced to debt bondage to cope with the costs of a desperately near-compulsory higher education that fails to deliver economic promises, “disruptive” deployment of digital media has conjured teratic corporate behemoths that all presume to “make the world a better place” on the backs of extraordinarily exploited workforces. There is no way for an account of the relationship between the human and the digital in this capitalist context not to be political. Given the general failure of the book to take these issues seriously, it is unsurprising that two of Hayles’ central suggestions for addressing the crisis in the humanities are 1) to use voluntary, hobbyist labor to do the intensive research that will serve as the data pool for digital humanities scholars and 2) to increasingly develop University partnerships with major digital conglomerates like Google.
This reads like a cost-cutting administrator’s fever dream because, in the chapter in which Hayles promulgates novel (one might say “disruptive”) ideas for how best to move the humanities forward, she only speaks to administrators. There is no consideration of labor in this call for the reformation of the humanities. Given the enormous amount of writing that has been done on affective capitalism (Clough 2008), digital labor (Scholz 2012), emotional labor (Van Cleaf 2015), and so many other iterations of exploitation under digital capitalism, it boggles the mind a bit to see an embrace of the Mechanical Turk as a model for the future university.
While it may be true that humanities education is in crisis – that it lacks funding, that its methods don’t connect with students, that it increasingly must justify its existence on economic grounds – it is unclear that any of these aspects of the crisis are attributable to a lack of engagement with the potentials of digital media, or the recognition that humans are evolving with our computers. All of these crises are just as plausibly attributable to what, among many others, Chandra Mohanty identified ten years ago as the emergence of the corporate university, and the concomitant transformation of the mission of the university from one of fostering democratic discourse to one of maximizing capital (Mohanty 2003). In other words, we might as easily attribute the crisis to the tightening command that contemporary capitalist institutions have over the logic of the university.
Humanities departments are underfunded precisely because they cannot – almost by definition – justify their existence on monetary grounds. When students are not only acculturated, but are compelled by financial realities and debt, to understand the university as a credentialing institution capable of guaranteeing certain baseline waged occupations – then it is no surprise that they are uninterested in “close reading” of texts. Or, rather, it might be true that students’ “hyperreading” is a consequence of their cognitive evolution with machines. But it is also just as plausibly a consequence of the fact that students often are working full time jobs while taking on full time (or more) course loads. They do not have the time or inclination to read long, difficult texts closely. They do not have the time or inclination because of the consolidating paradigm around what labor, and particularly their labor, is worth. Why pay for a researcher when you can get a hobbyist to do it for free? Why pay for a humanities line when Google and Wikipedia can deliver everything an institution might need to know?
In a political economy in which Amazon’s reduction of human employees to algorithmically-managed meat wagons is increasingly diagrammatic and “innovative” in industries from service to criminal justice to education, the proposals Hayles is making to ensure the future of the university seem more fifth-columnist than emancipatory.
This stance also evacuates much-needed context from what are otherwise thoroughly interesting, well-crafted arguments. This is particularly true of How We Think’s engagement with Lev Manovich’s claims regarding narrative and database. Speaking reductively: in The Language of New Media (MIT Press, 2001), Manovich argued that there are two major communicative forms: narrative and database. Narrative, in his telling, is more or less linear, and dependent on human agency to be sensible. Novels and films, despite many modernist efforts to subvert this, tend toward narrative. The database, as opposed to the narrative, arranges information according to patterns, and does not depend on a diachronic point-to-point communicative flow to be intelligible. Rather, the database exists in multiple temporalities, with the accumulation of data for rhizomatic recall of seemingly unrelated information producing improbable patterns of knowledge production. Historically, he argues, narrative has dominated. But with the increasing digitization of cultural output, the database will more and more replace narrative.
Manovich’s dichotomy of media has been both influential and roundly criticized (not least by Manovich himself in Software Takes Command, Bloomsbury, 2013). Hayles convincingly takes it to task for being reductive and instituting a teleology of cultural forms that isn’t borne out by cultural practice. Narrative, obviously, hasn’t gone anywhere. Hayles extends this critique by considering the distinctive ways space and time are mobilized by database and narrative formations. Databases, she argues, depend on interoperability between different software platforms that need to access the stored information. In the case of geographical information services and global positioning services, this interoperability depends on some sort of universal standard against which all information can be measured. Thus, Cartesian space and time are inevitably inserted into database logics, depriving them of the capacity for liveliness. That is to say that the need to standardize the units that measure space and time in machine-readable databases imposes a conceptual grid on the world that is creatively limiting. Narrative, on the other hand, does not depend on interoperability, and therefore does not have an absolute referent against which it must make itself intelligible. Given this, it is capable of complex and variegated temporalities not available to databases. Databases, she concludes, can only operate within spatial parameters, while narrative can represent time in different, more creative ways.
As an expansion and corrective to Manovich, this argument is compelling. Displacing his teleology and infusing it with a critique of the spatio-temporal work of database technologies and their organization of cultural knowledge is crucial. Hayles bases her claim on a detailed and fascinating comparison between the coding requirements of relational databanks and object-oriented databanks. But, somewhat surprisingly, she takes these different programming language models and metonymizes them as social realities. Temporality in the construction of objects transmutes into temporality as a philosophical category. It’s unclear how this leap holds without an attendant sociopolitical critique. But it is impossible to talk about the cultural logic of computation without talking about the social context in which this computation emerges. In other words, it is absolutely true that the “spatializing” techniques of coders (like clustering) render data points as spatial within the context of the data bank. But it is not an immediately logical leap to then claim that therefore databases as a cultural form are spatial and not temporal.
Further, in the context of contemporary data science, Hayles’s claims about interoperability are at least somewhat puzzling. Interoperability and standardized referents might be a theoretical necessity for databases to be useful, but the ever-inflating markets around “big data,” data analytics, insights, overcoming data siloing, edge computing, etc., demonstrate quite categorically that interoperability-in-general is not only non-existent, but is productively non-existent. That is to say, there are enormous industries that have developed precisely around efforts to synthesize information generated and stored across non-interoperable datasets. Moreover, data analytics companies provide insights almost entirely based on their capacity to track improbable data patterns and resonances across unlikely temporalities.
Far from a Cartesian world of absolute space and time, contemporary data science is a quite posthuman enterprise in committing machine learning to stretch, bend, and strobe space and time in order to generate the possibility of bankable information. This is both theoretically true, in the sense of setting algorithms to work sorting, sifting, and analyzing truly incomprehensible amounts of data, and materially true, in the sense of the massive amount of capital and labor that is invested in building, powering, cooling, staffing, and securing data centers. Moreover, the amount of data “in the cloud” has become so massive that analytics companies have quite literally reterritorialized information – particularly firms specializing in high frequency trading, which practice “co-location,” locating data centers geographically closer to the sites from which they will be accessed in order to maximize processing speed.
Data science functions much like financial derivatives do (Martin 2015). Value in the present is hedged against the probable future spatiotemporal organization of software and material infrastructures capable of rendering a possibly profitable bundling of information in the immediate future. That may not be narrative, but it is certainly temporal. It is a temporality spurred by the queer fluxes of capital.
All of which circles back to the title of the book. Hayles sets out to explain How We Think. A scholar with such an impeccable track record for pathbreaking analyses of the relationship of the human to technology is setting a high bar for herself with such a goal. In an era in which (in no small part due to her work) it is increasingly unclear who we are, what thinking is or how it happens, it may be an impossible bar to meet. Hayles does an admirable job of trying to inject new paradigms into a narrow academic debate about the future of the humanities. Ultimately, however, there is more resting on the question than the book can account for, not least the livelihoods and futures of her current and future colleagues.
_____
R Joshua Scannell is a PhD candidate in sociology at the CUNY Graduate Center. His current research looks at the political economic relations between predictive policing programs and urban informatics systems in New York City. He is the author of Cities: Unauthorized Resistance and Uncertain Sovereignty in the Urban World (Paradigm/Routledge, 2012).
Catherine Malabou. 2008. What Should We Do with Our Brain? New York: Fordham University Press.
Lev Manovich. 2001. The Language of New Media. Cambridge: MIT Press.
Lev Manovich. 2013. Software Takes Command. London: Bloomsbury.
Randy Martin. 2015. Knowledge LTD: Toward a Social Logic of the Derivative. Philadelphia: Temple University Press.
Chandra Mohanty. 2003. Feminism Without Borders: Decolonizing Theory, Practicing Solidarity. Durham: Duke University Press.
Trebor Scholz, ed. 2012. Digital Labor: The Internet as Playground and Factory. New York: Routledge.
Bernard Stiegler. 1998. Technics and Time, 1: The Fault of Epimetheus. Palo Alto: Stanford University Press.
Kara Van Cleaf. 2015. “Of Woman Born to Mommy Blogged: The Journey from the Personal as Political to the Personal as Commodity.” Women’s Studies Quarterly 43(3/4): 247-265.
When I first started to think about what I wanted to say here today, I thought I’d talk about innovation and how confused if not backwards the ed-tech industry’s obsession with that term is. I thought I’d tie in Jon Udell’s notion of “trailing edge innovations,” this idea that some of the most creative and interesting things don’t happen on the bleeding edge; they’re at a different perpendicular, if you will. Scratch – and before Scratch, LOGO – work there, tinkering from that angle.
So I started to think about movements from margin to center, about cultural, social, political, pedagogical change and why, from my vantage point at least, ed-tech is stuck – stuck chasing the wrong sorts of change.
We’ve been stuck there a while.
This is me and my brother, circa Christmas 1984. (I know it’s Christmas because that’s when we got the computer, and in this photo it hasn’t yet been moved to the basement.) We found this photo when we were cleaning out our dad’s house this summer. Yes, that’s us and the LOGO turtle. My thoughts about this photo are pretty complicated: going through family photo albums, you can see – sometimes quite starkly – when things change or when things get stuck. This photo was from “the good times”; later images, not so much. And this photo reminds me too of a missing piece: somehow my interest in computers then never really went anywhere. I didn’t have programming opportunities at school, and other than what I could tinker with on my own, I didn’t get much farther than BASIC.
Stuck.
So I want to talk to you today about how we – ed-tech – get unstuck.
Someone asked me the other day why I’d been invited to speak at a conference on Scratch. “What are you going to say?!” they asked, (I think) a little apprehensively. Their fear, I have to imagine, was that I was going to come here and unload a keynote equivalent of 1984’s “Two Minutes Hate” on an unsuspecting European audience, that I would shake my fist angrily and loudly condemn the Scratch Cat or something. Or something.
I get this a lot: demands that I answer the question “why do you hate education technology so much, Audrey?” – to which I usually refrain from responding with the question “why do you hate reading comprehension so much, Internet stranger?”
I’d contend that this nervous, sometimes hostile reaction to my work highlights a trap that education technology finds itself in – a ridiculous belief that there can be only two possible responses to computers in education (or to computers in general): worship or hatred, adulation or acquiescence. “You’re either with us or against us”; you’re either for computers or against computers. You have to choose: technological progress or Luddism.
It’s a false choice, of course, and it mostly misses the point of what I try to do in my work as an education technology writer. Often what I’m trying to analyze is not so much about the actual technology at all: it’s about the ideology in which the technology is embedded, encased and from which it emerges; and it’s about what shape technologies seem to think teaching and learning, and the institutions that influence if not control those, should take.
To fixate solely on the technology is a symptom of what Seymour Papert has called “technocentric thinking,” something that he posited as quite different from what technology criticism should do. Technocentrism is something that technologists fall prey to, Papert contended; but it’s something that, just as likely, humanists are guilty of (admittedly, that’s another unhelpful divide, no doubt: technologists versus humanists).
“Combating technocentrism involves more than thinking about technology,” Papert wrote. And surely this is what education technology desperately needs right now. Why, for example, is there all the current excitement about ed-tech? Surely we can do better than an answer that accepts “because computers really matter now.” Why are venture capitalists investing in ed-tech at record levels? Why are schools now buying new hardware and software? Try again if your answer is “because the tech is so good.” A technocentric response points our attention to the technology itself – new tools, data, devices, apps, broadband, the cloud – as though these are context-free. Computer criticism, as outlined by Papert, demands we look more closely instead at policies, profits, politics, practices, power. Because it’s not “technological progress” that demands schools use computers. Indeed, rarely are computers used there for progressive means or ends at all.
Challenging technocentrism “leads to fundamental re-examination of assumptions about the area of application of technology with which one is concerned,” Papert wrote. “If we are interested in eliminating technocentrism from thinking about computers in education, we may find ourselves having to re-examine assumptions about education that were made long before the advent of computers.”
These passages come from a 1987 essay, “Computer Criticism vs. Technocentric Thinking,” in which Papert posited that education technology – or rather, the LOGO community specifically – needed to better develop its voice so that it could weigh in on the public dialogue about the burgeoning adoption of computers in schools. But what should that voice sound like? It had to offer more than a simple “pro-computers in the classroom” stance. And some three decades later, I think this is even more crucial. Uncritical techno-fascination and ed-tech fetishization – honestly, what purpose do those serve?
“There is no shortage of models” in trying to come up with a robust framework for computer criticism, Papert wrote back then. “The education establishment offers the notion of evaluation. Educational psychologists offer the notion of controlled experiment. The computer magazines have developed the idiom of product review. Philosophical tradition suggests inquiry into the essential nature of computation.” We can still see (mostly) these models applied to ed-tech today: “does it raise standardized test scores?” is one common way to analyze a product or service. “What new features does it boast?” is another. These approaches are insufficient, Papert argued, when it comes to thinking about ed-tech’s influence on learning, because they do nothing to help us think broadly about – to rethink – our education system.
Papert suggested we turn to literary and social criticism as a model for computer criticism. Indeed, the computer is a medium of human expression, its development and its use a reflection of human culture; the computer is also a tool with a particular history, and although not circumscribed by its past, the computer is not entirely free of it either. I think we recognize history, legacy, systems in literary and social criticism; funny, folks get pretty irate when I point those out about ed-tech. “The name [computer criticism] does not imply that such writing would condemn computers any more than literary criticism condemns literature or social criticism condemns society,” Papert wrote. “The purpose of computer criticism is not to condemn but to understand, to explicate, to place in perspective. Of course, understanding does not exclude harsh (perhaps even captious) judgment. The result of understanding may well be to debunk. But critical judgment may also open our eyes to previously unnoticed virtue. And in the end, the critical and the creative processes need each other.”
I am, admittedly, quite partial to this framing of “computer criticism,” since it dovetails neatly with my own academic background. I’m not an engineer or an entrepreneur or (any longer) a classroom educator. I see myself as a cultural critic, formally trained in the study of literature, language, folklore. I’m interested in our stories and in our practices and in our cultures.
One of the flaws Papert identifies in “technocentrism” is that it gives centrality to the technology itself, reducing people and culture to a secondary level. Instead “computer criticism” should look at context, at systems, at politics, at power.
I would add to Papert’s ideas about “computer criticism,” those of other theorists. Consider Kant: criticism is self-knowledge, reflection, a counter to dogma, to those ideas that powerful systems demand we believe in. Ed-tech, once at the margins, is surely now dogma. Consider Hegel; consider Marx: criticism as antagonism, as dialectic, as intervention – stake a claim; stake a position; identify ideology. Consider Freire and criticism as pedagogy and pedagogy as criticism: change the system of schooling, and change the world.
It’s an odd response to my work, but a common one too, that criticism does not enable or effect change. (I suppose it does not fall into the business school model of “disruptive innovation.”) Or rather, that criticism stands as an indulgent, intellectual, purely academic pursuit – as though criticism involves theory but not action. Or if there is action, criticism implies “tearing down”; it has this negative connotation. Ed-tech entrepreneurs, to the contrary, actually “build things.”
Here’s another distinction I’ve heard: that criticism (in the form of writing an essay) is “just words” but writing software is “actually doing something.” Again, such a contrast reveals much about the role of intellectual activity that some see in “coding.”
That is a knotty problem, I think, for a group like this one to wrestle with (and why we need ed-tech criticism!). If we believe in “coding to learn” then what does it mean if we see “code” as distinct from or as absent of criticism? And here I don’t simply mean that a criticism-free code is stripped of knowledge, context, and politics; I mean that that framework in some ways conceptualizes code as the opposite of thinking deeply or thinking critically – that is, coding as (only) programmatic, mechanical, inflexible, rules-based. What are the implications of that in schools?
Technocentrism won’t help with thinking through that question. Technocentrism would be happier talking about “learning to code,” with the emphasis on “code” – “code” largely a signifier for technological know-how, an inherent and unexamined good.
As I was rereading Papert’s 1987 essay in preparation for this talk, I was struck – as I often am by his work – by how stuck ed-tech is. I mean, here he is, some 30 years ago, calling for the LOGO community to develop a better critique, frankly an activist critique about thinking and learning. “Do Not Ask What LOGO Can Do To People, But What People Can Do With LOGO.” Papert’s argument is not “why everyone should learn to code.”
Papert offers an activist critique. Criticism is activism. Criticism is a necessary tactic for this community, the Scratch community specifically and for the ed-tech community in general. It was necessary in 1987. It’s still necessary today – we might consider why we’re still at the point of having to make a case for ed-tech criticism too. It’s particularly necessary as we see funding flood into ed-tech, as we see policies about testing dictate the rationale for adopting devices, as we see the technology industry shape a conversation about “code” – a conversation that focuses on money and prestige but not on thinking, learning. Computer criticism can – and must – be about analysis and action. Critical thinking must work alongside critical pedagogical and technological practices. “Coding to learn” if you want to start there; or more simply, “learn by making.” But then too: making to reflect; making to think critically; making to engage with the world; it is from there, and only there, that we can get to making and coding to change the world.
Without ed-tech criticism, we’ll still be stuck – stuck without these critical practices, stuck without critical making or coding or design in school, stuck without critical (digital) pedagogy. And likely we’ll be stuck with a technocentrism that masks rather than uncovers let alone challenges power.
_____
Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this essay first appeared, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.
a review of Judy Wajcman, Pressed for Time: The Acceleration of Life in Digital Capitalism (Chicago, 2014)
by Zachary Loeb
~
Patience seems anachronistic in an age of high speed downloads, same day deliveries, and on-demand assistants who can be summoned by tapping a button. Though some waiting may still occur, the amount of time spent in anticipation seems to be constantly diminishing, and every day a new bevy of upgrades and devices promise that tomorrow things will be even faster. Such speed is comforting for those who feel that they do not have a moment to waste. Patience becomes a luxury for which we do not have time, even as the technologies that claimed they would free us wind up weighing us down.
Yet it is far too simplistic to heap the blame for this situation on technology, as such. True, contemporary technologies may be prominent characters in the drama in which we are embroiled, but as Judy Wajcman argues in her book Pressed for Time, we should not approach technology as though it exists separately from the social, economic, and political factors that shape contemporary society. Indeed, to understand technology today it is necessary to recognize that “temporal demands are not inherent to technology. They are built into our devices by all-too-human schemes and desires” (3). In Wajcman’s view, technology is not the true culprit, nor is it an out-of-control menace. It is instead a convenient distraction from the real forces that make it seem as though there is never enough time.
Wajcman sets a course that refuses to uncritically celebrate technology, whilst simultaneously disavowing the damning of modern machines. She prefers to draw upon “a social shaping approach to technology” (4) which emphasizes that the shape technology takes in a society is influenced by many factors. If current technologies leave us feeling exhausted, overwhelmed, and unsatisfied it is to our society we must look for causes and solutions – not to the machine.
The vast array of Internet-connected devices gives rise to a sense that everything is happening faster, that things are accelerating, and that, compared to previous epochs, things are changing faster. This is the kind of seemingly uncontroversial belief that Wajcman seeks to counter. While there is a present predilection for speed, the ideas of speed and acceleration remain murky, which may not be purely accidental when one considers “the extent to which the agenda for discussing the future of technology is set by the promoters of new technological products” (14). Rapid technological and societal shifts may herald the emergence of an “acceleration society” wherein speed increases even as individuals experience a decrease of available time. Though some would describe today’s world (at least in affluent nations) as being a synecdoche of the “acceleration society,” it would be a mistake to believe this to be a wholly new invention.
Nevertheless the instantaneous potential of information technologies may seem to signal a break with the past – as the sort of “timeless time” which “emerged in financial markets…is spreading to every realm” (19). Some may revel in this speed even as others put out somber calls for a slow-down, but either approach risks being reductionist. Wajcman pushes back against the technological determinism lurking in the thoughts of those who revel and those who rebel, noting “that all technologies are inherently social in that they are designed, produced, used and governed by people” (27).
Both today and yesterday “we live our lives surrounded by things, but we tend to think about only some of them as being technologies” (29). The impacts of given technologies depend upon the ways in which they are actually used, and Wajcman emphasizes that people often have a great deal of freedom in altering “the meanings and deployment of technologies” (33).
Over time certain technologies recede into the background, but the history of technology is a litany of devices that made profound impacts in determining experiences of time and speed. After all, the clock is itself a piece of technology, and thus we assess our very lack of time by looking to a device designed to measure its passage. The measurement of time was a technique used to standardize – and often exploit – labor, and the ability to carefully keep track of time gave rise to an ideology in which time came to be interchangeable with money. As a result speed came to be associated with profit even as slowness became associated with sloth. The speed of change became tied up in notions of improvement and progress, and thus “the speed of change becomes a self-evident good” (44). The speed promised by inventions is therefore seen as part of the march of progress, though a certain irony emerges as widespread speed leads to new forms of slowness – the mass diffusion of cars leading to traffic jams. And what was fast yesterday is often deemed slow today. As Wajcman shows, the experience of time compression, tied to “our valorization of a busy lifestyle, as well as our profound ambivalence toward it” (58), has roots that go far back.
Time takes on an odd quality – to have it is a luxury, even as constant busyness becomes a sign of status. A certain dissonance emerges wherein individuals feel that they have less time even as studies show that people are not necessarily working more hours. For Wajcman, much of the explanation lies as much in “real increases in the combined work commitments of family members” as in changes to the working time of individuals, with such “time poverty” being experienced particularly acutely “among working mothers, who juggle work, family, and leisure” (66). To understand time pressure it is essential to consider the degree to which people are free to use their time as they see fit.
Societal pressures on the time of men and women differ, and though the hours spent doing paid labor may not have shifted dramatically, the hours parents (particularly mothers) spend performing unpaid labor remains high. Furthermore, “despite dramatic improvements in domestic technology, the amount of time spent on household tasks has not actually shown any corresponding dramatic decline” (68). Though household responsibilities can be shared equitably between partners, much of the onus still falls on women. As a busy event-filled life becomes a marker of status for adults so too may they attempt to bestow such busyness on the whole family, but busy parents needing to chaperone and supervise busy children only creates a further crunch on time. As Wajcman notes “perhaps we should be giving as much attention to the intensification of parenting as to the intensification of work” (82).
Yet the story of domestic, unpaid and unrecognized, labor is a particularly strong example of a space wherein the promises of time-saving technological fixes have fallen short. Instead, “devices allegedly designed to save labor time fail to do so, and in some cases actually increase the time needed for the task” (111). The variety of technologies marketed for the household are often advertised as time savers, yet altering household work is not the same as eliminating it – even as certain tasks continually demand a significant investment of real time.
Many of the technologies that have become mainstays of modern households – such as the microwave – were not originally marketed as such, and thus the household represents an important example of the way in which technologies “are both socially constructed and society shaping” (122). Of further significance is the way in which changing labor relations have also led to shifts in the sphere of domestic work, wherein those who can afford it are able to buy themselves time through purchasing food from restaurants or by employing others for tasks such as child care and cleaning. Though the image of “the home of the future,” courtesy of the Internet of Things, may promise an automated abode, Wajcman highlights that those making and selling such technologies replicate society’s dominant blind spot for the true tasks of domestic labor. Indeed, the Internet of Things tends to “celebrate technology and its transformative power at the expense of home as a lived practice” (130). Thus, domestic technologies present an important example of the way in which those designing and marketing technologies instill their own biases into the devices they build.
Beyond the household, information communications technologies (ICTs) allow people to carry their office in their pocket as e-mails and messages ping them long after the official work day has ended. However, the idea “of the technologically tethered worker with no control over their own time…fails to convey the complex entanglement of contemporary work practices, working time, and the materiality of technical artifacts” (88). Thus, the problem is not that an individual can receive e-mail when they are off the clock; the problem is the employer’s expectation that this worker should be responding to work-related e-mails while off the clock – the issue is not technological, it is societal. Furthermore, Wajcman argues, communications technologies permit workers to better judge whether or not something is particularly time sensitive. Though technology has often been used by employers to control employees, approaching communications technologies from an STS position “casts doubt on the determinist view that ICTs, per se, are driving the intensification of work” (107). Indeed some workers may turn to such devices to help manage this intensification.
Technologies offer many more potentialities than those that are presented in advertisements. Though the ubiquity of communications devices may “mean that more and more of our social relationships are machine-mediated” (138), the focus should be as much on the word “social” as on the word “machine.” Much has been written about the way that individuals use modern technologies and the ways in which they can give rise to families wherein parents and children alike are permanently staring at a screen, but Wajcman argues that these technologies should “be regarded as another node in the flows of affect that create and bind intimacy” (150). It is not that these devices are truly stealing people’s time, but that they are changing the ways in which people spend the time they have – allowing harried individuals to create new forms of being together, which “needs to be understood as adding a dimension to temporal experience” (158), one that blurs the boundaries between work and leisure.
The notion that the pace of life has been accelerated by technological change is a belief that often goes unchallenged; however, Wajcman emphasizes that “major shifts in the nature of work, the composition of families, ideas about parenting, and patterns of consumption have all contributed to our sense that the world is moving faster than hitherto” (164). The experience of acceleration can be intoxicating, and the belief in a culture of improvement wrought by technological change may be a rare glimmer of positivity amidst gloomy news reports. However, “rapid technological change can actually be conservative, maintaining or solidifying existing social arrangements” (180). At moments when so much emphasis is placed upon the speed of technologically sired change, the first step may not be to slow down but to insist that people consider the ways in which these machines have been socially constructed, how they have shaped society – and if we fear that we are speeding towards a catastrophe, then it becomes necessary to consider how they can be socially constructed to avoid such a collision.
* * *
It is common, amongst current books assessing the societal impacts of technology, for authors to present themselves as critical while simultaneously wanting to hold to an unshakable faith in technology. This often leaves such texts in an odd position: they want to advance a radical critique but their argument remains loyal to a conservative ideology. With Pressed for Time, Judy Wajcman has demonstrated how to successfully achieve the balance between technological optimism and pessimism. It is a great feat, and Pressed for Time executes this task skillfully. When Wajcman writes, towards the end of the book, that she wants “to embrace the emancipatory potential of technoscience to create new meanings and new worlds while at the same time being its chief critic” (164), she is not writing of a goal but is affirming what she has achieved with Pressed for Time (a similar success can be attributed to Wajcman’s earlier books TechnoFeminism (Polity, 2004) and the essential Feminism Confronts Technology (Penn State, 1991)).
By holding to the framework of the social shaping of technology, Pressed for Time provides an investigation of time and speed that is grounded in a nuanced understanding of technology. It would have been easy for Wajcman to focus strictly on contemporary ICTs, but what her argument makes clear is that to do so would have been to ignore the factors that make contemporary technology understandable. A great success of Pressed for Time is the way in which Wajcman shows that the current sensation of being pressed for time is not a modern invention. Instead, the emphasis on speed as being a hallmark of progress and improvement is a belief that has been at work for decades. Wajcman avoids the stumbling block of technological determinism and carefully points out that falling for such beliefs leads to critiques being directed incorrectly. Written in a thoroughly engaging style, Pressed for Time is an academic book that can serve as an excellent introduction to the terminology and style of STS scholarship.
Throughout Pressed for Time, Wajcman repeatedly notes the ways in which the meanings of technologies transcend what a device may have been narrowly intended to do. For Wajcman people’s agency is paramount, as people have the ability to construct meaning for technology even as such devices wind up shaping society. Yet an area in which one could push back against Wajcman’s views would be to ask if communications technologies have shaped society to such an extent that it is becoming increasingly difficult to construct new meanings for them. Perhaps the “slow movement,” which Wajcman describes as unrealistic for “we cannot in fact choose between fast and slow, technology and nature” (176), is best perceived as a manifestation of the sense that much of technology’s “emancipatory potential” has gone awry – that some technologies offer little in the way of liberating potential. After all, the constantly connected individual may always feel rushed – but they may also feel as though they are under constant surveillance, that their every online move is carefully tracked, and that, through the rise of wearable technology and the Internet of Things, all of their actions will soon be easily tracked. Wajcman makes an excellent and important point by noting that humans have always lived surrounded by technologies – but the technologies that surrounded an individual in 1952 were not sending every bit of minutiae to large corporations (and governments). Hanging in the background of the discussion of speed are also the questions of planned obsolescence and the mountains of toxic technological trash that wind up flowing from affluent nations to developing ones. The technological speed experienced in one country is the “slow violence” experienced in another. To make these critiques, though, is in no way to seriously diminish Wajcman’s argument, especially as many of these concerns simply speak to the economic and political forces that have shaped today’s technology.
Pressed for Time is a Rosetta stone for decoding life in high speed, high tech societies. Wajcman deftly demonstrates that the problems facing technologically-addled individuals today are not as new as they appear, and that the solutions on offer are similarly not as wildly inventive as they may seem. Through analyzing studies and history, Wajcman shows the impacts of technologies, while making clear why it is still imperative to approach technology with a consideration of class and gender in mind. With Pressed for Time, Wajcman champions the position that the social shaping of technology framework still provides a robust way of understanding technology. As Wajcman makes clear the way technologies “are interpreted and used depends on the tapestry of social relations woven by age, gender, race, class, and other axes of inequality” (183).
It is an extremely timely argument.
_____
Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck and is a frequent contributor to The b2 Review Digital Studies section.
~
It is often argued that the left is increasingly unable to speak a convincing narrative in the digital age. Caught between the neoliberal language of contemporary capitalism and its political articulations linked to economic freedom and choice, and a welfare statism that appears counter-intuitively unappealing to modern political voters and supporters, there is often claimed to be a lacuna in the political imaginary of the left. Here, I want to explore a possible new articulation for a left politics that moves beyond the seeming technophilic and technological determinisms of left accelerationisms and the related contradictions of “fully automated luxury communism”. Broadly speaking, these positions tend to argue for a post-work, post-scarcity economy within a post-capitalist society based on automation, technology and cognitive labour. Accepting that these are simplifications of the arguments of the proponents of these two positions, the aim is to move beyond the assertion that the embracing of technology itself solves the problem of a political articulation that has to be accepted and embraced by a broader constituency within the population. Technophilic politics is not, of itself, going to be enough to convince an electorate, nor a population, to move towards leftist conceptualisations of possible restructuring or post-capitalist economics. Moreover, it seems to me that the abolition of work is not a desirable political programme for the majority of the population, nor does a seemingly utopian notion of post-scarcity economics make much sense under conditions of neoliberal economics. Thus these programmes are simultaneously too radical and not radical enough. I also want to move beyond the staid and unproductive arguments often articulated in the UK between a left-Blairism and a more statist orientation associated with a return to traditional left concerns personified in Ed Miliband.
Instead, I want to consider what a politics of the singularity might be, that is, to follow Fredric Jameson’s conceptualisation of the singularity as “a pure present without a past or a future” such that,
today we no longer speak of monopolies but of transnational corporations, and our robber barons have mutated into the great financiers and bankers, themselves de-individualized by the massive institutions they manage. This is why, as our system becomes ever more abstract, it is appropriate to substitute a more abstract diagnosis, namely the displacement of time by space as a systemic dominant, and the effacement of traditional temporality by those multiple forms of spatiality we call globalization. This is the framework in which we can now review the fortunes of singularity as a cultural and psychological experience (Jameson 2015: 128).
That is, the removal of temporality as a specific site of politics as such, or the successful ideological deployment of a new framework for understanding oneself within temporality, whether through the activities of the media industries, or through the mediation of digital technologies and computational media. This has the effect of transforming temporal experience into new spatial experiences, whether through translating media, or through the intensification of a now that constantly presses upon us and pushes away not only historical time but also the possibility for political articulations of new forms of futurity. Thus the politics of singularity points to spatiality as the key site of political deployment within neoliberalism, and by this process undercuts the left’s arguments, which draw simultaneously on a shared historical memory of hard-won rights and benefits and on the notion of political action to fight for a better future. Indeed, one might ask if the green critique of the Anthropocene, with its often misanthropic articulations, in some senses draws on some notion of a singularity produced by humanity which has undercut the time of geological or planetary-scale change. The only option remaining, then, is to seek to radically circumscribe, if not outline, a radical social imaginary that does not include humans in its conception, and hence to return the planet to the stability of a geological time structure no longer undermined by human activity. Similarly, neoliberal arguments over political imaginaries highlight the intensity and simultaneity of the present mode of capitalist competition and the individualised (often debt-funded) means of engagement with economic life.
What then might be a politics of the singularity which moved beyond politics that drew on forms of temporality for its legitimation? In other words, how could a politics of spatiality be articulated and deployed which re-enabled the kind of historical project towards a better future for all that was traditionally associated with leftist thought?
To do this I want to think through the notion of the “curator” that Jameson disparagingly takes to be an outcome of the singularity in terms of artistic practice and experience. He argues that today we are faced with the “emblematic figure of the curator, who now becomes the demiurge of those floating and dissolving constellations of strange objects we still call art.” Further,
there is a nastier side of the curator yet to be mentioned, which can be easily grasped if we look at installations, and indeed entire exhibits in the newer postmodern museums, as having their distant and more primitive ancestors in the happenings of the 1960s—artistic phenomena equally spatial, equally ephemeral. The difference lies not only in the absence of humans from the installation and, save for the curator, from the newer museums as such. It lies in the very presence of the institution itself: everything is subsumed under it, indeed the curator may be said to be something like its embodiment, its allegorical personification. In postmodernity, we no longer exist in a world of human scale: institutions certainly have in some sense become autonomous, but in another they transcend the dimensions of any individual, whether master or servant; something that can also be grasped by reminding ourselves of the dimension of globalization in which institutions today exist, the museum very much included (Jameson 2015: 110-111).
However, Jameson himself makes an important link between spatiality as the site of a contestation and the making-possible of new spaces, something that curatorial practice, with its emphasis on the construction, deployment and design of new forms of space, points towards. Indeed, Jameson envisages, in relation to theoretical constructions, “perhaps a kind of curatorial practice, selecting named bits from our various theoretical or philosophical sources and putting them all together in a kind of conceptual installation, in which we marvel at the new intellectual space thereby momentarily produced” (Jameson 2015: 110).
In contrast, the question for me concerns the radical possibilities suggested by this event-like construction of new spaces, and how they can be used to reverse or destabilise the time-axis manipulation of the singularity. The question then becomes: could we tentatively think in terms of a curatorial political practice, which we might call curatorialism? Indeed, could we fill out the ways in which this practice could aim to articulate, assemble and, more importantly, provide a site for a renewal and (re)articulation of left politics? How could this politics be mobilised in the nitty-gritty of actual political practice, policy, and activist politics, and engender the affective relation that inspires passion around a political programme and suggests itself to the kinds of singularities that inhabit contemporary society? To borrow the language of the singularity itself, how could one articulate a new disruptive left politics?
At this early stage of thinking, it seems to me that in the first case we might think about how curatorialism points towards the need to move away from a concern with internal consistency in the development of a political programme. Curatorialism gathers its strength from the way in which it provides a political pluralism, an assembling of multiple moments into a political constellation that takes into account and articulates its constituent moments. This is the first step in the mapping of the space of a disruptive left politics. It is the development of a spatial politics in as much as, crucially, the programme calls for a weaving together of multiplicity into this constellational form. Secondly, we might think about the way in which this spatial diagram can then be translated into a temporal project, that is, the transformation of a mapping programme into a political programme linked to social change. This requires the capture and illumination of the multiple movements of each moment, and their re-articulation through a process of reframing the conditions of possibility in each constellational movement in terms of a political economy that draws on the historical possibilities the left has previously made possible, but also on the new concepts and ideas needed to link the politics of necessity to the huge capacity of a left project for the mitigation and/or replacement of a neoliberal capitalist economic system.
Lastly, it seems to me that a truly curatorial politics must link to the singularity itself as a source of strength for left politics, such that the articulation of individual political needs is made possible through the curatorial mode, and through the development of disruptive left frameworks that link individual need, social justice, institutional support, and a left politics that reconnects the passions of interests to the passion for justice and equality with the singularity’s concern with intensification.[1] This can, perhaps, be thought of as the replacement of a left project of ideological purity with a return to the Gramscian notions of strategy and tactics through the deployment of what he called a passive revolution, mobilised partially in the new forms of civil society created through collectivities of singularities within social media, computational devices and the new infrastructures of digital capitalism, but also within older forms of social institutions, political contestations and education.[2]
_____
[1] This remains a tentative articulation, inspired by the power of knowledge-based economies both to create the conditions of singularity through the action of time-axis manipulation (media technologies) and, arguably, to provide the countervailing tools, spaces and practices for contesting a singularity connected only with a neoliberal political moment. That is, how can these new concepts and ideas, together with the frameworks suggested in their mobilisation, provide new means of contestation, sociality and broader connections of commonality and political praxis?
[2] I leave to a later paper the detailed discussion of the possible subjectivities both in and for themselves within a framework of a curatorial politics. But here I am gesturing towards political parties as the curators of programmes of political goals and ends, able then to use the state as a curatorial enabler of such a political programme. This includes the active development of the individuation of political singularities within such a curatorial framework.
This talk has a few different starting points, which include a forum I held last March on Angela Mitropoulos’ work Contract and Contagion that explored the expansions and reconfigurations of capital, time, and work through the language of Oikonomics or the “properly productive household”, as well as the work that I was doing with Patricia Clough, Josh Scannell, and Benjamin Haber on a paper called “The Datalogical Turn”, which explores how the coupling of large scale databases and adaptive algorithms “are calling forth a new onto-logic of sociality or the social itself” as well as, I confess, no small share of binge-watching the TV show The Good Wife. So, please bear with me as I take you through my thinking here. What I am trying to do in my work of late is a form of feminist thinking that can take quite seriously not only the onto-sociality of data and the ways in which bodily practices are made to extend far and wide beyond the body, but a form of thinking that can also understand the paradox of our times: How and why has digital abundance been ushered in on the heels of massive income inequality and political dispossession? In some ways, the last part of that sentence (why inequality and political dispossession) is actually easier to account for than understanding the role that such “abundance” has played in the reconfiguration or transfers of wealth and power.
So, let me back up here for a minute… Already in 1992, Deleuze wrote that the disciplinary society had given way to a control society. Writing, “we are in a generalized crisis in relation to all the environments of enclosure—prison, hospital, factory, school, family” and that “everyone knows that these institutions are finished, whatever the length of their expiration periods. It’s only a matter of administering their last rites and of keeping people employed until the installation of the new forces knocking at the door. These are the societies of control, which are in the process of replacing the disciplinary societies.” For Deleuze, whereas the disciplinary man was a “discontinuous producer of energy, the man of control is undulatory, in orbit, in a continuous network.” For such a human, Deleuze wrote, “surfing” has “replaced older sports.”
We know, despite Marx’s theorization of “dead labor”, that digital, networked infrastructures have been active, even “vital”, agents of this shift from discipline to control, or the shift from a capitalism of production and property to a capitalism of dispersion, a capitalism fit for circulation, relay, response, and feedback. As Deleuze writes, this is a capitalism fit for a “higher order” of production. I want to intentionally play on the words “higher order”, with their invocations of religiosity, faith, and hierarchy, because much of our theoretical work of late has been specifically developed to help us understand the ways in which such a “higher order” has been very successful in affectively reconfiguring and reformatting bodies and environments for its own purposes. We talk often of the modulation, pre-emption, extraction, and subsumption of elements once thought to be “immaterial” or spiritual, if you will, the some-“things” that lacked a full instantiation in the material world. I do understand that I am twisting Deleuze’s words here a bit (what he meant in the Postscript was a form of production that we now think of as flexible production, production on demand, or JIT production), but my thinking here is that the very notion of a higher order, a form of production considered progress in itself, has been very good at making us pray toward the light and at replacing the audial sensations of the church bell/factory clock with the blinding temporality of the speed of light itself. This blinding speed of light is related to what Marx called “circulation time,” or the annihilation of space through time, and it is this black hole of capital, this higher order of production and the ways in which we have theorized its metaphysics, which, I want to argue, have become the Via Negativa to a Capital that transcends thought.
What I mean here is that this form of theorizing has really left us with a capital beyond reproach, a capital reinstated in and through the effects of what it is not—it is not a wage, it is not found in commodities, it is not ultimately a substance humans have access or rights to…
In such a rapture of the higher order of the light, there has been a tendency to look away from concepts such as “foundations” or “limits” or quaint theories of units such as the “household”, but in Angela Mitropoulos’ work Contract and Contagion we find those concepts at the heart of her reading of the collapse of the time of work into that of life. For Mitropoulos, it is through the performativity and probabilistic terms of “the contract” (and not simply the contract of liberal sociality, but the contract as the terms of agreement to the “right” genealogical transfer of wealth) that we should visualize the flights of capital. This broadened notion of the contract is a necessary term for fully grasping what is being brought into being on the heels of “the datalogical turn.”
For Mitropoulos, it is the contract, which she links to the oath, the promise, the covenant, the bargain, and even faith in general, that “transforms contingency into necessity.” Contracts’ “ensuing contractualism” has been “amplified as an ontological precept.” Here, contract is fundamentally a precept that transforms life into a game (and I don’t mean simply gamified, though obviously we could talk about what gamification means for our sense of what is implied in contractual relations. Liberal contracts have tended to evoke their authority from the notion of autonomous and rational subjects—this is not exactly the same subject being invoked when you’re prompted to like every picture of a cat on the internet or have your attention directed to tiny little numbers in the corner of the screen to see who faved your post, although those Facebook numbers are micro-contracts. Ones you haven’t signed up for, exactly.) For Mitropoulos, it is not just that contracts transform life into contingency; it is that they transform life into a game that must be played out of necessity. Taking up Pascal’s wager, Mitropoulos writes,
the materiality of contractualism is that of a performativity installed by its presumption of the inexorable necessity of contingency; a presumption established by what I refer to here as the Pascalian premise that one must ‘play the game’ necessarily, that this is the only game available. This invalidates all idealist explanations of contract, including those which echo contractualism’s voluntarism in their understanding of (revolutionary) subjectivity. Performativity is the temporality of contract, and the temporal continuity of capitalism is uncertain.
In other words, one has no choice but to gamble. God either exists or God does not exist. Both may be possible/virtual, but only one will be real/actual and it is via the wager that one must, out of necessity, come to understand God with and through contingency. It is through such wagering that the contract—as a form of measurable risk—comes into being. Measurable risk—measure and risk as entangled in speculation— became, we might say, the Via Affirmativa of early and industrializing capital.
This transmutation of contingency into measure sits not only at the heart of the contract, but is, as Mitropoulos writes, “crucial to the legitimatized forms of subjectivity and relation that have accompanied the rise and expansion of capitalism across the world.” Yet, in addition to the historical project of situating an authorial, egalitarian, liberal, willful, and autonomous subject as a universal subject, contract is also interested in something that looks much more geometric, matrixial, spatializing, and impersonal. Contract does not solely care about “subject formation”, but also about the development of positions that compose a matrix—so that the matrix is made to be an engine of production and circulation. It is interested in the creation of an infrastructure of contracts, or points of contact, that reconfigures a “divine” order in the face of contingency.
The production of such a divine order is what Mitropoulos will link back to Oikonomia, or the economics of the household, whereby bodies are parsed both spatially and socially into those who may enter into contract and those who may not. While contract becomes an increasingly narrow domain of human relations, Oikonomia is the intentional distribution and classification of bodies—human, animal, mineral—to ensure the “proper” (i.e. moral, economic, and political) functioning of the household, which functions like a molar node within the larger matrix. Given that contingency has been installed as the game that must be played, contract then comes to enforce a chain of being predicated on forms of naturalized servitude and obligation to the game. These are forms of naturalized servitude that are simultaneously built into the architecture of the household and made invisible. As Anne Boyer has written, the Greek household probably looked like this:
In the front of the household were the women’s rooms—the gynaikonitis. Behind these were the common areas and the living quarters for the men—the andronitis. It was there one could find the libraries. The men’s area, along with the household, was also wherever was outside of the household—that is, the free man’s area was the oikos and the polis and was the world. The oikos was always at least a double space, and doubly perceived, just as what is outside of it was always a singular territory on which slaves and women trespassed. The singular nature of the outside was enforced by violence or the threat of it. The free men’s home was the women’s factory; also—for women and slaves—their factory was a home on its knees.
This is not simply a division of labor, but as Boyer writes, “God made of women an indoor body, and made of men an outdoor one. And this scheme—what becomes, in future iterations, public and private, of production and reproduction, of waged work and unpaid servitude—is the order agreed upon to attend to the risk posed by those who make the oikos.”
This is the order that we believe has given way as Fordism morphed into Post-Fordism and as the walls of these architectures have been smoothed by the flows of endlessly circulated, derivative, financialized capital. Yet, what Mitropoulos’ work points us toward is the persistence of the contract. Walls may crumble, but the foundations of contract re-instantiate, if not proliferate, in the wake of capital’s discovery of new terrains. The gynaikonitis, with its function to parse and delineate the labor of the household into a hierarchy of care work—from the wifely householding of management to the slave-like labor of “being ready to hand”—does not simply evaporate, but rather finds new instantiations among the flights of capital and new instantiations within its very infrastructure. Following Mitropoulos, we can argue that while certain forms of discipline seemingly come to an end, there is no shift to control without a proliferating matrix of contract whose function is to re-impose the very meaning—or rather, the very ontological necessity—of measure. It is through the persistent re-imposition of measure that a logic of the Oikos is never lost, ensuring—despite new configurations of capital—the genealogical transfer of wealth and the fundamentally dispossessing relations of servitude.
Let me shift gears here ever so slightly and introduce Alicia Florrick. Alicia is “The Good Wife”, whom many of you know from the TV show of the same name. She is the white fantasy superhero, upper-middle-class working mother, and ruthless lawyer who has successfully exploded onto the job market after years of raising her children, and who is not only capable of leaning in after all those years, but of taking command of her own law firm and running for political office. Alicia is a “good wife” not solely because she has stood beside her philandering politician husband, but because as a white, upper-class mother and lawyer, she is nonetheless responsible for the utmost of feminized and invisible labor—that of (re)producing the very conditions of sociality. Her “womanly” or “wife-ish” goodness is predicated on her ability to transform what are essentially, in the show, a series of shitty experiences and shitty conditions into conditions of possibility and potential. Alicia works endlessly, tirelessly (Does she ever sleep?) to find new avenues of possibility and configurations of the law in order to create a very specific form of “liberal” order and organization, believing as she does in the “power of rules” (in distinction to her religious daughter, a necessary trope used to highlight the fundamentally “moral” underpinning of secular order).
While the show is incredibly popular, no doubt because viewers desire to identify with Alicia’s capacity for labor and domination, to me the show is less about a real or even possible human figure than it is about a “good wife” and the social function that such a wife plays. In Oikonomic logic, a good wife is essential to the maintenance of contract because she is what metabolizes the worlds of inner and outer, simultaneously managing the inner domestic world of care while parsing, or keeping distinct, its contagion from the outer world of contract. That Alicia is white, heteronormative, upper middle class, as well as upwardly mobile and legally powerful, is essential to aligning her with the power of contract, yet her work is fundamentally that of parsing contagions to the system. Prison bodies and prison as a site of the “general population” haunt the show, as though we are meant to forget that Alicia’s labor and its value are predicated on the existence of a space beyond contract—a space of being removed from visibility. The figure of the good wife therefore not only operates as a shared boundary, but reproduces the distinctions between contractable relations and invisible, obligated labor, or what I will call metabolization. Our increasingly digitized, datafied, networked, and surveilled world is fully populated by such good wives. We call them interfaces. But they should also be seen as a proliferation of contracts, which are rewriting the nature of who and what may participate.
I would like to argue that good wives—or interfaces—and their necessary shadow world of obligated labor are useful frameworks for understanding the paradox I mentioned when I first began: how and why has digital abundance been ushered in on the heels of massive income inequality and political dispossession? In the logic of the Oikos, the good wife of the interface stands in both contradistinction and harmony with the metabolizing labor of the system she manages, which is composed of those specifically removed from “the labor” relation—domestic workers, care workers, prison laborers—those who must be “present” yet without recognition. The interface stands in both contradistinction and harmony with the algorithm that is made to be present and made to adapt. I want to argue that the “marriage” of the proliferation of interfaces with the ubiquitous, adaptive computation of digital algorithms is an Oikonomic infrastructure. It is a proliferation of contracts meant to ensure that the “contagion” of the algorithm, which I will explore in a moment, remains “black boxed” or removed from visibility, while nonetheless ensuring that such contagious invisible work shores up the power of contract and its ability to redirect capital along genealogical lines. While Piketty doesn’t use the language of the Oikos, we might read the arrival of his work as a confirmation that we are in a moment of re-establishing such a “household logic”—an expansion of capital that comes with quite a new foundation for the transfer of wealth.
While the good wife or interface is a boundary which, borrowing from Celia Lury, marks a frame for the simultaneous capture and redeployment of data, it is the digital algorithm that undergirds, or makes possible, the interface’s ontological authority to “measure.” However, algorithms, if we follow Luciana Parisi, are not simply executing a string of code, not simply providing the interface with a “measure” of an existing world. Rather, algorithms are, as Parisi writes in her work on contagious architecture, performing entities that are “not simply representations of data, but are occasions of experience insofar as they prehend information in their own way.” Here Parisi is ascribing to the algorithm a Whiteheadian ontology of process, which sees the algorithm as its own spatio-temporal entity capable of grasping, including, or excluding data. Prehension implies not so much a choice, but a relation of allure by which all entities (not only algorithms) call one another into being, or come into being as events, or what Whitehead calls “occasions of experience.” For Parisi, via Whitehead, the algorithm is no longer simply a tool to accomplish a task, but an “actuality, defined by an automated prehension of data in the computational processing of probability.”
Much like the good wife of the Greek household, who must manage and organize—but is nonetheless dependent on—the contagious (and therefore made to be invisible) domestic labor of servants and slaves, the good wife of the interface manages and organizes the prehensive capacities of the algorithm, which are then misrecognized as simply “doing their job” or executing their code in a divine order of being. However, if we follow Parisi, prehension does not simply imply the direct “reproduction of that which is prehended”; rather, prehension should itself be understood as a “contagion.” As she writes, “infinite amounts of data irreversibly enter and determine the function of algorithmic procedures. It follows that contagion describes the immanence of randomness in programming.” This contagion, for Parisi, means that “algorithmic prehensions are quantifications of infinite qualities that produce new qualities.” Rather than simply “doing their job”, as it were, algorithms are fundamentally generative. They are, for Parisi, producing not only new digital spaces, but also programmed architectural forms and urban infrastructures that “expose us to new mode[s] of living, but new modes of thinking.” Algorithms are metabolizing a world of infinite and incomputable data that is then mistaken by the interfaces as a “measure” of that world—a measure that can not only stand in for contract, but can give rise to a proliferation of micro-contracts that populate the circulations of sociality.
Control, then, if we can return to that idea, has come about not simply as an undulation or a demise of discipline, but through an architecture of metabolization and measure that has never disavowed the function of contract. It is, in fact, an architecture quite successful at rewriting the very terms of contractual arrangements. Algorithmic architectures may no longer seek to maintain the walls of the household, but they are nonetheless engaged in the rapid production of an Oikos all the same.
_____
Karen Gregory (@claudiakincaid) is the Title V Lecturer in Sociology in the Department of Interdisciplinary Arts and Sciences/Center for Worker Education at the City College of New York, where she is also the faculty head of City Lab. Her work explores the intersection of digital labor, affect, and contemporary spirituality, with an emphasis on the role of the laboring body. Karen is a founding member of CUNY Graduate Center’s Digital Labor Working Group and her writings have appeared in Women’s Studies Quarterly, Women and Performance, Visual Studies, Contexts, The New Inquiry, and Dis Magazine.
Late last year, I gave a similarly titled talk—“Men Explain Technology to Me”—at the University of Mary Washington. (I should note here that the slides for that talk were based on a couple of blog posts by Mallory Ortberg that I found particularly funny, “Women Listening to Men in Art History” and “Western Art History: 500 Years of Women Ignoring Men.” I wanted to do something similar with my slides today: find historical photos of men explaining computers to women. Mostly I found pictures of men or women working separately, working in isolation. Mostly pictures of men and computers.)
So that University of Mary Washington talk: It was the last talk I delivered in 2014, and I did so with a sigh of relief, but also more than a twinge of frightened nausea—nausea that wasn’t nerves from speaking in public. I’d had more than a year full of public speaking under my belt—exhausting enough, as I always try to write new talks for each event—but a year that had become complicated, quite frighteningly, in part by an ongoing campaign of harassment against women on the Internet, particularly those who worked in video game development.
Known as “GamerGate,” this campaign had reached a crescendo of sorts in the lead-up to my talk at UMW, some of its hate aimed at me because I’d written about the subject, demanding that those in ed-tech pay attention and speak out. So no surprise, all this colored how I shaped that talk about gender and education technology, because, of course, my gender shapes how I experience working in and working with education technology. As I discussed then at the University of Mary Washington, I have been on the receiving end of threats and harassment for stories I’ve written about ed-tech—almost all the women I know who have a significant online profile have in some form or another experienced something similar. According to a Pew Research survey last year, one in five Internet users reports being harassed online. But GamerGate felt—feels—particularly unhinged. The death threats to Anita Sarkeesian, Zoe Quinn, Brianna Wu, and others were—are—particularly real.
I don’t really want to rehash all of that here today, particularly my experiences being on the receiving end of the harassment; I really don’t. You can read a copy of that talk from last November on my website. I will say this: GamerGate supporters continue to argue that their efforts are really about “ethics in journalism”, not about misogyny, but it’s quite apparent that they have sought to terrorize feminists and chase women game developers out of the industry. Insisting that video games and video game culture retain a certain puerile machismo, GamerGate supporters often chastise those who seek to change the content of video games, to change the culture to reflect the actual demographics of video game players. After all, a recent industry survey found women 18 and older represent a significantly greater portion of the game-playing population (36%) than boys age 18 or younger (17%). Just over half of all gamers are men (52%); that means just under half are women. Yet those who want video games to reflect these demographics are dismissed by GamerGate as “social justice warriors.” Dismissed. Harassed. Shouted down. Chased out.
And yes, more mildly perhaps, the verb that grew out of Rebecca Solnit’s wonderful essay “Men Explain Things to Me” and the inspiration for the title to this talk, mansplained.
Solnit first wrote that essay back in 2008 to describe her experiences as an author—and as such, an expert on certain subjects—whereby men would presume she was in need of their enlightenment and information—in her words “in some sort of obscene impregnation metaphor, an empty vessel to be filled with their wisdom and knowledge.” She related several incidents in which men explained to her topics on which she’d published books. She knew things, but the presumption was that she was uninformed. Since her essay was first published the term “mansplaining” has become quite ubiquitous, used to describe the particular online version of this—of men explaining things to women.
I experience this a lot. And while the threats and harassment in my case are rare but debilitating, the mansplaining is more insidious. It is overpowering in a different way. “Mansplaining” is a micro-aggression, a practice of undermining women’s intelligence, their contributions, their voice, their experiences, their knowledge, their expertise; and frankly once these pile up, these mansplaining micro-aggressions, they undermine women’s feelings of self-worth. Women begin to doubt what they know, doubt what they’ve experienced. And then, in turn, women decide not to say anything, not to speak.
I speak from experience. On Twitter, I have almost 28,000 followers, most of whom follow me, I’d wager, because from time to time I say smart things about education technology. Yet regularly, men—strangers, typically, but not always—jump into my “@-mentions” to explain education technology to me. To explain open source licenses or open data or open education or MOOCs to me. Men explain learning management systems to me. Men explain the history of education technology to me. Men explain privacy and education data to me. Men explain venture capital funding of education startups to me. Men explain the business of education technology to me. Men explain blogging and journalism and writing to me. Men explain online harassment to me.
The problem isn’t just that men explain technology to me. It isn’t just that a handful of men explain technology to the rest of us. It’s that this explanation tends to foreclose questions we might have about the shape of things. We can’t ask because if we show the slightest intellectual vulnerability, our questions—we ourselves—lose a sort of validity.
Yet we are living in a moment, I would contend, when we must ask better questions of technology. We neglect to do so at our own peril.
Last year when I gave my talk on gender and education technology, I was frustrated by the mansplaining, to be sure, but I was also frustrated that those of us who work in the field had remained silent about GamerGate, and more broadly about all sorts of issues relating to equity and social justice. Of course, I know firsthand that it can be difficult, if not dangerous, to speak out, to talk and write critically about GamerGate, for example. But refusing to look at the most egregious acts often means ignoring the more subtle ways in which marginalized voices are made to feel uncomfortable and unwelcome online. Because GamerGate is really just one manifestation of deeper issues – structural issues – with society, culture, technology. It’s wrong to focus on just a few individual bad actors or on a terrible Twitter hashtag and ignore the systemic problems. We must consider who else is being chased out and silenced, not simply from the video game industry but from the technology industry and a technological world writ large.
I know I have to come right out and say it, because very few people in education technology will: there is a problem with computers. Culturally. Ideologically. There’s a problem with the internet. Largely designed by men from the developed world, it is built for men of the developed world. Men of science. Men of industry. Military men. Venture capitalists. Despite all the hype and hope about the revolution and access and opportunity that these new technologies will provide us, they do not negate hierarchy, history, privilege, power. They reflect them. They channel them. They concentrate them, in new ways and in old.
I want us to consider these bodies, their ideologies and how all of this shapes not only how we experience technology but how it gets designed and developed as well.
There’s that very famous New Yorker cartoon: “On the internet, nobody knows you’re a dog.” The cartoon was first published in 1993, and it demonstrates this sense that we have long had that the Internet offers privacy and anonymity, that we can experiment with identities online in ways that are severed from our bodies, from our material selves and that, potentially at least, the internet can allow online participation for those denied it offline.
Perhaps, yes.
But sometimes when folks on the internet discover “you’re a dog,” they do everything in their power to put you back in your place, to remind you of your body. To punish you for being there. To hurt you. To threaten you. To destroy you. Online and offline.
Neither the internet nor computer technology writ large is a place where we can escape the materiality of our physical worlds – bodies, institutions, systems – as much as that New Yorker cartoon joked that we might. In fact, I want to argue quite the opposite: that computer and Internet technologies actually re-inscribe our material bodies, the power and the ideology of gender and race and sexual identity and national identity. They purport to be ideology-free and identity-less, but they are not. If identity is unmarked, it’s because there’s a presumption of maleness, whiteness, and perhaps even a certain California-ness. As my friend Tressie McMillan Cottom writes, in ed-tech we’re all supposed to be “roaming autodidacts”: happy with school, happy with learning, happy and capable and motivated and well-networked, with functioning computers and WiFi that works.
By and large, all of this reflects who is driving the conversation about, if not the development of, these technologies. Who is seen as building technologies. Who some think should build them; who some think have always built them.
And that right there is already a process of erasure, a different sort of mansplaining one might say.
Last year, when Walter Isaacson was doing the publicity circuit for his latest book, The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution (2014), he’d often relate how his teenage daughter had written an essay about Ada Lovelace, a figure whom Isaacson admitted he’d never heard of before. Sure, he’d written biographies of Steve Jobs and Albert Einstein and Benjamin Franklin and other important male figures in science and technology, but the name and the contributions of this woman were entirely unknown to him. Ada Lovelace, daughter of Lord Byron and the woman whose notes on Charles Babbage’s proto-computer, the Analytical Engine, are now recognized as making her the world’s first computer programmer. Ada Lovelace, the author of the world’s first computer algorithm. Ada Lovelace, the person at the very beginning of the field of computer science.
Augusta Ada King, Countess of Lovelace, now popularly known as Ada Lovelace, in a painting by Alfred Edward Chalon (image source: Wikipedia)
“Ada Lovelace defined the digital age,” Isaacson said in an interview with The New York Times. “Yet she, along with all these other women, was ignored or forgotten.” (Actually, the world has been celebrating Ada Lovelace Day since 2009.)
Isaacson’s book describes Lovelace like this: “Ada was never the great mathematician that her canonizers claim…” and “Ada believed she possessed special, even supernatural abilities, what she called ‘an intuitive perception of hidden things.’ Her exalted view of her talents led her to pursue aspirations that were unusual for an aristocratic woman and mother in the early Victorian age.” The implication: she was a bit of an interloper.
A few other women populate Isaacson’s The Innovators: Grace Hopper, who invented the first computer compiler and who developed the programming language COBOL. Isaacson describes her as “spunky,” not an adjective I imagine would be applied to a male engineer. He also talks about the six women who helped program the ENIAC, the first electronic general-purpose computer. Their names, because we need to say these things out loud more often: Jean Jennings, Marilyn Wescoff, Ruth Lichterman, Betty Snyder, Frances Bilas, Kay McNulty. (I say their names having visited Bletchley Park, where civilian women’s involvement has been erased; forbidden, thanks to classified government secrets, from talking about their part in the cryptography and computing efforts there, these women went unacknowledged.)
In the end, it’s hard to read Isaacson’s book without coming away thinking that, a few notable exceptions aside, the history of computing is the history of men, white men. The book mentions educator Seymour Papert in passing, for example, but assigns the development of Logo, a programming language for children, to him alone. No mention of the others involved: Daniel Bobrow, Wally Feurzeig, and Cynthia Solomon.
Even a book that purports to reintroduce the contributions of forgotten “innovators,” that says it wants to complicate the story of a few male inventors of technology by looking at collaborators and groups, still, in the end, tells a story that ignores, if not undermines, women. Men explain the history of computing, if you will. As such, it also tells a story that depicts and reflects a culture that doesn’t simply forget women but systematically alienates them. Women become a rediscovery project, always having to be reintroduced, found, rescued. There’s been very little reflection upon that fact, either in Isaacson’s book or in the tech industry writ large.
This matters not just for the history of technology but for technology today. And it matters for ed-tech as well. (Unless otherwise noted, the following data comes from diversity self-reports issued by the companies in 2014.)
Currently, fewer than 20% of computer science degrees in the US are awarded to women. (I don’t know if it’s different in the UK.) That number has actually fallen over the past few decades, from a high of 37% in 1983. Computer science is the only field in science, engineering, and mathematics in which the number of women receiving bachelor’s degrees has fallen in recent years. And when it comes to the employment, not just the education, of women in the tech sector, the statistics are not much better. (source: NPR)
70% of Google employees are male. 61% are white and 30% Asian. Of Google’s “technical” employees, 83% are male; 60% of those are white and 34% are Asian.
70% of Apple employees are male. 55% are white and 15% are Asian. 80% of Apple’s “technical” employees are male.
69% of Facebook employees are male. 57% are white and 34% are Asian. 85% of Facebook’s “technical” employees are male.
70% of Twitter employees are male. 59% are white and 29% are Asian. 90% of Twitter’s “technical” employees are male.
Only 2.7% of startups that received venture capital funding between 2011 and 2013 had women CEOs, according to one survey.
And of course, Silicon Valley was recently embroiled in a sexual discrimination trial involving the storied VC firm Kleiner Perkins Caufield & Byers, filed by former executive Ellen Pao, who claimed that men at the firm were paid more and promoted more easily than women. It’s hardly a surprise that women, welcome neither as investors nor entrepreneurs nor engineers, are, as The Los Angeles Times recently reported, leaving the tech industry “in droves.”
This doesn’t just matter because computer science leads to “good jobs” or because tech startups lead to “good money.” It matters because the tech sector has an increasingly powerful reach into how we live and work and communicate and learn. It matters ideologically. If the tech sector drives out women, if it excludes people of color, that matters for jobs, sure. But it also matters for the projects undertaken, the problems tackled, the “solutions” designed and developed.
So it’s probably worth asking what the demographics look like for education technology companies. What percentage of those building ed-tech software are men, for example? What percentage are white? What percentage of ed-tech startup engineers are men? Across the field, what percentage of education technologists – instructional designers, campus IT, sysadmins, CTOs, CIOs – are men? What percentage of “education technology leaders” are men? What percentage of education technology consultants? What percentage of those on the education technology speaking circuit? What percentage of those developing, not just implementing, these tools?
And how do these bodies shape what gets built? How do they shape how the “problem” of education gets “fixed”? How do privileges, ideologies, expectations, values get hard-coded into ed-tech? I’d argue that they do in ways that are both subtle and overt.
That word “privilege,” for example, has an interesting dual meaning. We use it to refer to the advantages that are afforded to some people and not to others: male privilege, white privilege. But when it comes to tech, we make that advantage explicit. We actually embed that status into the software’s processes. “Privileges” in tech refer to whoever has the ability to use or control certain features of a piece of software. Administrator privileges. Teacher privileges. (Students rarely have privileges in ed-tech. Food for thought.)
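To make that explicitness concrete, here is a minimal sketch of how privilege gets hard-coded into software. The role and permission names are hypothetical, my own illustration rather than any actual LMS’s code:

```python
# A role-to-privileges table: each role is granted an explicit set of
# capabilities, and every check against it is binary. (Hypothetical
# names; a sketch of the pattern, not a real product's permission model.)
ROLE_PRIVILEGES = {
    "administrator": {"create_course", "delete_user", "grade", "post"},
    "teacher": {"grade", "post"},
    "student": set(),  # the student role is granted nothing at all
}

def has_privilege(role, action):
    """Return True only if the role has been explicitly granted the action."""
    return action in ROLE_PRIVILEGES.get(role, set())
```

Notice that the student role is simply an empty set: the absence of privilege isn’t an oversight; it is written directly into the data structure.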
Or take how discussion forums operate. Discussion forums, now quite common in ed-tech tools—in learning management systems (VLEs as you call them), in MOOCs, for example—often trace their history back to the earliest Internet bulletin boards. But even before then, education technologies like PLATO, a programmed instruction system built by the University of Illinois in the 1970s, offered chat and messaging functionality. (How education technology’s contributions to tech are erased from tech history is, alas, a different talk.)
One of the new features that many discussion forums boast: the ability to vote topics up or down. Ostensibly this means that “the best” ideas surface to the top – the best ideas, the best questions, the best answers. What it means in practice is often something else entirely. In part this is because the voting power on these sites is concentrated in the hands of the few: the most active, the most engaged. And no surprise, “the few” here are overwhelmingly male. Reddit, which calls itself “the front page of the Internet” and is the model for this sort of voting process, is roughly 84% male. I’m not sure that MOOCs, which have adopted Reddit’s model of voting on comments, can boast a much better ratio of male to female participation.
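The arithmetic of that concentration is easy to see. In this toy simulation (my own illustration, not any forum’s actual ranking code), two highly active voters outweigh eight casual ones:

```python
from collections import Counter

def rank_topics(ballots):
    """Rank topics by total up-votes, most-voted first."""
    tally = Counter()
    for ballot in ballots:
        tally.update(ballot)
    return [topic for topic, _ in tally.most_common()]

# Two power users each cast five up-votes for topic "A" across threads;
# eight casual users each cast a single up-vote for topic "B".
ballots = [["A"] * 5, ["A"] * 5] + [["B"]] * 8
print(rank_topics(ballots))  # "A" outranks "B" despite fewer supporters
```

Eight of ten people preferred “B,” but “the best” topic, as the ranking algorithm sees it, is whatever the most active few voted for.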
What happens when the most important topics—based on up-voting—are decided by a small group? As D. A. Banks has written about this issue,
Sites like Reddit will remain structurally incapable of producing non-hegemonic content because the “crowd” is still subject to structural oppression. You might choose to stay within the safe confines of your familiar subreddit, but the site as a whole will never feel like yours. The site promotes mundanity and repetition over experimentation and diversity by presenting the user with a too-accurate picture of what appeals to the entrenched user base. As long as the “wisdom of the crowds” is treated as colorblind and gender neutral, the white guy is always going to be the loudest.
How much does education technology treat its users similarly? Whose questions surface to the top of discussion forums in the LMS (the VLE), in the MOOC? Who is the loudest? Who is explaining things in MOOC forums?
Ironically (bitterly ironically, I’d say), many pieces of software today promise “personalization,” but in reality they present us with a very restricted, restrictive set of choices about who we “can be” and how we can interact, both with our own data and content and with other people. Gender, for example, is often a drop-down menu where one can choose either “male” or “female.” Software might ask for a first and last name, something that is complicated if you have multiple family names (as some Spanish-speaking people do) or if your family name comes first (as names in China are ordered). Your name is presented however the software engineers and designers deemed fit: sometimes first name, sometimes title and last name, typically with a profile picture. Changing your username (after marriage or divorce, for example) is often incredibly challenging, if not impossible.
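A sketch of what that template looks like in code (hypothetical field names, not any real product’s signup form) shows how the exclusions are built in before any user ever arrives:

```python
# A "templated self" in miniature: a signup validator whose rules decide,
# in advance, which identities fit. (My own illustration of the pattern.)
GENDER_OPTIONS = ("male", "female")  # the drop-down menu's only choices

def validate_signup(first_name, last_name, gender):
    """Return a list of template violations; an empty list means the
    user fits the mold the engineers designed."""
    errors = []
    if not first_name or not last_name:
        errors.append("both a first name and a last name are required")
    if gender not in GENDER_OPTIONS:
        errors.append("gender must be one of the menu options")
    return errors
```

Anyone whose name or identity doesn’t match the schema isn’t merely inconvenienced; as far as the software is concerned, they are an error.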
You get to interact with others, similarly, based on the processes that the engineers have determined and designed. On Twitter, for example, you cannot direct message people who do not follow you. All interactions must be 140 characters or less.
This restriction of the presentation and performance of one’s identity online is what “cyborg anthropologist” Amber Case calls the “templated self.” She defines this as “a self or identity that is produced through various participation architectures, the act of producing a virtual or digital representation of self by filling out a user interface with personal information.”
Case provides some examples of templated selves:
Facebook and Twitter are examples of the templated self. The shape of a space affects how one can move, what one does and how one interacts with someone else. It also defines how influential and what constraints there are to that identity. A more flexible, but still templated space is WordPress. A hand-built site is much less templated, as one is free to fully create their digital self in any way possible. Those in Second Life play with and modify templated selves into increasingly unique online identities. MySpace pages are templates, but the lack of constraints can lead to spaces that are considered irritating to others.
As we—all of us, but particularly teachers and students—move to spend more and more time and effort performing our identities online, being forced to use preordained templates constrains us, rather than—as we have often been told about the Internet—lets us be anyone or say anything online. On the Internet no one knows you’re a dog unless the signup process demanded you give proof of your breed. This seems particularly important to keep in mind when we think about students’ identity development. How are their identities being templated?
While Case’s examples point to mostly “social” technologies, education technologies are also “participation architectures.” Similarly they produce and restrict a digital representation of the learner’s self.
Who is building the template? Who is engineering the template? Who is there to demand the template be cracked open? What will the template look like if we’ve chased women and people of color out of programming?
It’s far too simplistic to say that “everyone learn to code” is the best response to the questions I’ve raised here. “Change the ratio.” “Fix the leaky pipeline.” Nonetheless, I’m speaking to a group of educators here. I’m probably supposed to say something about what we can do, right? To make ed-tech more just, and not simply to condemn the narratives that lead us down a path that makes it less so. What can we do to resist all this hard-coding? What can we do to subvert it? What can we do to make technologies that our students – all our students, all of us – can wield? What can we do to make sure that when we say “your assignment involves the Internet” we haven’t triggered half the class with fears of abuse, harassment, exposure, rape, death? What can we do to make sure that when we ask our students to discuss things online, the very infrastructure of the technology we use doesn’t privilege certain voices in certain ways?
The answer can’t simply be to tell women to not use their real name online, although as someone who started her career blogging under a pseudonym, I do sometimes miss those days. But if part of the argument for participating in the open Web is that students and educators are building a digital portfolio, are building a professional network, are contributing to scholarship, then we have to really think about whether or not promoting pseudonyms is a sufficient or an equitable solution.
The answer can’t simply be “don’t blog on the open Web.” Or “keep everything inside the ‘safety’ of the walled garden, the learning management system.” If nothing else, this presumes that what happens inside siloed, online spaces is necessarily “safe.” I know I’ve seen plenty of horrible behavior on closed forums, for example, from professors and students alike. I’ve seen heavy-handed moderation, where marginalized voices find their input deleted. I’ve seen zero moderation, where marginalized voices are mobbed. We recently learned, for example, that Walter Lewin, emeritus professor at MIT and one of the original rockstar professors of YouTube (millions have watched the demonstrations from his physics lectures), has been accused of sexually harassing women in his edX MOOC.
The answer can’t simply be “just don’t read the comments.” I would say that it might be worth rethinking “comments” on student blogs altogether – or rather, the expectation that students host them, moderate them, respond to them. See, if we give students the opportunity to “own their own domain,” to have their own websites, their own space on the Web, we really shouldn’t require them to let anyone who can create a user account into that space. It’s perfectly acceptable to say to someone who wants to comment on a blog post, “Respond on your own site. Link to me. But I am under no obligation to host your thoughts in my domain.”
And see, that starts to hint at what I think the answer is to this question about the unpleasantness, by design, of technology. It starts to get at what any sort of “solution” or “alternative” has to look like: it has to be both social and technical. It also needs to recognize that there’s a history that might help us understand what’s done now and why. If, as I’ve argued, the current shape of education technologies has been molded by certain ideologies and certain bodies, we should recognize that we aren’t stuck with them. We don’t have to “do” tech as it’s been done in the last few years or decades. We can design differently. We can design around. We can use differently. We can use around.
One interesting example of this dual approach, combining the social and the technical (outside the realm of ed-tech, I recognize), is the set of tools that Twitter users have built in order to address harassment on the platform. Having grown weary of Twitter’s refusal to address the ways in which it is utilized to harass people (remember, its engineering team is 90% male), a group of feminist developers wrote The Block Bot, an application that lets you block, en masse, a large list of Twitter accounts known for being serial harassers. That list of blocked accounts is updated and maintained collaboratively. Similarly, Block Together lets users subscribe to one another’s block lists, and Good Game Autoblocker blocks the “ringleaders” of GamerGate.
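The mechanism these tools share can be sketched simply. This is my own simplification of the shared block-list idea, not any of these projects’ actual code: a user’s effective block list is the union of her own blocks and every collaboratively maintained list she subscribes to.

```python
# Collaborative blocking in miniature: merge a user's own blocks with
# every shared list she subscribes to. (A sketch of the idea behind
# tools like The Block Bot and Block Together, not their real code.)
def effective_blocks(own_blocks, subscribed_lists):
    """Return the union of a user's own blocks and all subscribed lists."""
    merged = set(own_blocks)
    for shared in subscribed_lists:
        merged |= set(shared)
    return merged
```

The design choice is the point: maintaining a block list becomes collective labor rather than a burden each harassed user must shoulder alone.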
That gets, just a bit, at what I think we can do to make education technology habitable, sustainable, and healthy. We have to rethink the technology, and not simply out of nostalgia for a “Web we lost,” but as a move toward a Web we’ve yet to ever see. It isn’t simply, as Isaacson would posit it, a matter of rediscovering innovators who have been erased; it’s about rethinking how these erasures happen throughout technology’s history and continue today, not just in storytelling but in code.
Educators should want ed-tech that is inclusive and equitable. Perhaps education needs reminding of this: we don’t have to adopt tools that serve business goals or administrative purposes, particularly when they work to the detriment of scholarship and/or student agency. Technologies that surveil and control and restrict, for example, get trotted out from time to time under the guise of “safety,” but they have never been about students’ needs at all. We don’t have to accept that technology needs to extract value from us. We don’t have to accept that technology puts us at risk. We don’t have to accept an architecture, an infrastructure, that makes it easy for harassment to occur without any consequences. We can build different and better technologies. And we can build them with and for communities, communities of scholars and communities of learners. We don’t have to be paternalistic as we do so. We don’t have to “protect students from the Internet,” and rehash all the arguments about stranger danger and predators and pedophiles. But we should recognize that if we want education to be online, if we want education to be immersed in technologies, information, and networks, we can’t really throw students out there alone. We need to be braver and more compassionate, and we need to build that into ed-tech. Like The Block Bot or Block Together, this should be a collaborative effort, one that blends our cultural values with the technology we build.
Because here’s the thing. The answer to all of this—to harassment online, to the male domination of the technology industry, the Silicon Valley domination of ed-tech—is not silence. And the answer is not to let our concerns be explained away. That is after all, as Rebecca Solnit reminds us, one of the goals of mansplaining: to get us to cower, to hesitate, to doubt ourselves and our stories and our needs, to step back, to shut up. Now more than ever, I think we need to be louder and clearer about what we want education technology to do—for us and with us, not simply to us.
_____
Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely read Hack Education blog, on which an earlier version of this review first appeared, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.
Of all the dangers looming over humanity no threat is greater than that posed by the Luddites.
If the previous sentence seems absurdly hyperbolic, know that it only seems that way because it is, in fact, quite ludicrous. It has been over two hundred years since the historic Luddites rose up against “machinery hurtful to commonality,” but as their leader, the myth-enrobed General Ludd, was never apprehended, there are always those who fear that General Ludd is still out there, waiting with sledgehammer at the ready. True, there have been some activist attempts to revive the spirit of the Luddites (such as the neo-Luddites of the late 1980s and 1990s), but in the midst of a society enthralled by (and in thrall to) smartphones, start-ups, and large tech companies, to see Luddites lurking in every shadow is a sign of ideology, paranoia, or both.
Yet just such an amusing mixture of unabashed pro-technology ideology and anxiety at the possibility of any criticism of technology is on full display in the inaugural “Luddite Awards” presented by The Information Technology and Innovation Foundation (ITIF). Whereas the historic Luddites needed sturdy hammers, and other such implements, to engage in machine breaking, the ITIF seems to believe that the technology of today is much more fragile: it can be smashed into nothingness simply by criticism or even skepticism. As its name suggests, the ITIF is a think tank committed to celebrating, and advocating for, technological innovation in its many forms. Thus it should not be surprising that a group committed to technological innovation would be wary of what it perceives as a growing chorus of “neo-Ludditism” that it imagines is planning to pull the plug on innovation. The ITIF has therefore seen fit to present dishonorable “Luddite Awards” to groups it has deemed insufficiently enamored with innovation; these groups include (amongst others) the Vermont legislature, the French government, the organization Free Press, the National Rifle Association, and the Electronic Frontier Foundation. The ITIF “Luddite Awards” may mark the first time that any group has accused the Electronic Frontier Foundation of being a secret harbor for neo-Ludditism.
Unknown artist, “The Leader of the Luddites,” engraving, 1812 (image source: Wikipedia)
The full report on “The 2014 ITIF Luddite Awards,” written by the ITIF’s president Robert D. Atkinson, presents the current state of technological innovation as dangerously precarious. Though technological innovation is currently supplying people with all manner of devices, the ITIF warns against a growing movement, born of neo-Ludditism, that will aim to put a stop to further innovation. Today’s neo-Ludditism, in the estimation of the ITIF, is distinct from that of the historic Luddites, and yet the goal of “ideological Ludditism” is still “to ‘smash’ today’s technology.” Granted, adherents of neo-Ludditism are not raiding factories with hammers; instead they are to be found teaching at universities, writing columns in major newspapers, disparaging technology in the media, and otherwise attempting to block the forward movement of progress. According to the ITIF (note the word “all”):
“what is behind all ideological Ludditism is the general longing for a simpler life from the past—a life with fewer electronics, chemicals, molecules, machines, etc.” (ITIF, 3)
Though the chorus of Ludditism has, in the ITIF’s reckoning, grown to an unacceptable volume of late, the foundation is quick to emphasize that Ludditism is nothing new. What is new, as the ITIF puts it, is that these nefarious Luddite views have apparently moved from the margins and infected the larger public discourse around technology. A diverse array of figures and groups – from environmentalist Bill McKibben, conservative thinker James Pethokoukis, economist Paul Krugman, and writers for Smithsonian Magazine to organizations like Free Press, the EFF, and the NRA – are all tarred with the epithet “Luddite.” The neo-Luddites, according to the ITIF, issue warnings against unmitigated acceptance of innovation when they bring up environmental concerns, mention the possibility of jobs being displaced by technology, write somewhat approvingly of the historic Luddites, or advocate for net neutrality.
While the ITIF holds to the popular, if historically inaccurate, definition of Luddite as “one who resists technological change,” its awards make clear that the ITIF would like to append to this definition the words “or even mildly opposes any technological innovation.” The ten groups awarded “Luddite Awards” are a mixture of non-profit public advocacy organizations and various governments – though the ITIF report seems to revel in attacking Bill McKibben, he was not deemed worthy of an award (maybe next year). The awardees include the NRA, for opposing smart guns; the Vermont legislature, for requiring the labeling of GMOs; Free Press, whose support of net neutrality is deemed an affront to “smarter broadband networks”; news reports that claim “robots are killing jobs”; the EFF, cited because it “opposes Health IT”; and governments in several states, reprimanded for “cracking down” on companies like Airbnb, Uber, and Lyft. The ten recipients may be quite surprised to find that they have been deemed adherents of neo-Ludditism, but in the view of the ITIF the actions these groups have taken indicate that General Ludd is slyly guiding their moves. Though the Luddite Awards may have a somewhat silly feeling, the ITIF cautions that the threat is serious, as the report ominously concludes:
“But while we can’t stop the Luddites from engaging in their anti-progress, anti-innovation activities, we can recognize them for what they are: actions and ideas that are profoundly anti-progress, that if followed would mean a our children [sic] will live lives as adults nowhere near as good as the lives they could live if we instead embraced, rather than fought innovation.” (ITIF, 19)
Credit is due to the ITIF for its ideological consistency. In putting together the list of recipients for the inaugural “Luddite Awards,” the foundation demonstrates that it is fully committed to technological innovation and unflagging in its support of that cause. Nevertheless, while the awards (and in particular the report accompanying them) may be internally consistent, the report is also a work of dubious historical scholarship and comical neoliberal paranoia, and it evinces a profound anti-democratic tendency. Though the awards aim to target what the ITIF perceives as “neo-Ludditism,” even a cursory glance at the awardees makes it abundantly clear that what the organization actually opposes is any attempt to regulate technology undertaken by a government or advocated for by a public interest group. Even in a country as regulation-averse as the contemporary United States, it is still safer to defame Luddites than to simply state that you reject regulation. The ITIF carefully cloaks its ideology in the aura of terms with positive connotations, such as “innovation,” “progress,” and “freedom,” but these terms are only so much fresh paint over the same “free market” ideology that values innovation, progress, and freedom only when they are in the service of neoliberal economic policies. Nowhere does the ITIF engage seriously with the questions “who profits from this innovation?,” “who benefits from this progress?,” or “is this ‘freedom’ equally distributed, or does it reinforce existing inequities?” – the terms are used as ideological sledgehammers far blunter than any tool the Luddites ever wielded. This raw ideology is on perfect display in the very opening line of the award announcement, which reads:
“Technological innovation is the wellspring of human progress, bringing higher standards of living, improved health, a cleaner environment, increased access to information and many other benefits.” (ITIF, 1)
One can only applaud the ITIF for laying out its ideology so clearly at the outset, and one can only raise a skeptical eyebrow at this obvious instance of the logical fallacy of begging the question. The claim that “technological innovation is the wellspring of human progress” is an assumption that demands proof; it is not a conclusion in and of itself. While arguments can certainly be made to support this assumption, there is little in the report to suggest that the ITIF is willing to engage in the kind of critical reflection that would be necessary to support it (though, to be fair, the ITIF has published many other reports, some of which may lay out this claim more carefully). The further conclusions that such innovation brings “higher standards of living, improved health, a cleaner environment” and so forth are likewise assumptions that require proof, and in demonstrating that proof one is forced (if arguing honestly) to recognize the validity of competing claims, particularly as many of the “benefits” the ITIF celebrates do not accrue evenly. True, an argument can be made that technological innovation has an important role to play in ushering in a “cleaner environment,” but tell that to somebody who lives next to an e-waste dump where mountains of the now obsolete detritus of “technological innovation” leach toxins into the soil. The ITIF report is filled with such pleasant-sounding “common sense” technological assumptions that have been, at the very least, rendered highly problematic by serious works of inquiry and scholarship in the history of technology. As classic works in the Science and Technology Studies literature, such as Ruth Schwartz Cowan’s More Work for Mother, make clear, “technological innovation” does not always live up to its claims.
Granted, it is easy to imagine the ITIF offering a retort that simply dismisses all such scholarship as tainted by neo-Ludditism. Yet recognizing that not all “innovation” is a pure blessing does not amount to a rejection of “innovation” as such; it merely recognizes that “innovation” is only one amongst many competing values a society must try to balance.
Instead of engaging with critics of “technological innovation” in good faith, the ITIF jumps from one logical fallacy to another, trading circular reasoning for ad hominem attacks. The author of the ITIF report seems to delight in pillorying Bill McKibben, but also aims barbs at scholars like David Noble and Neil Postman for exposing impressionable college-aged minds to their “neo-Luddite” biases. That the ITIF seems unconcerned with the business schools, start-up culture, and “culture industry” that inculcate an adoration of “technological innovation” in those same “impressionable minds” goes, of course, unremarked. Moreover, if the foundation wishes to argue that universities are currently a hotbed of “neo-Ludditism,” it is questionable why it should single out for special invective two professors who are both deceased: Postman died in 2003 and Noble in 2010.
It almost seems as if the ITIF report cites serious humanistic critics of “technological innovation” merely to create the appearance of having wrestled with their thought. After all, the report deigns to mention two of the most prominent thinkers in the theoretical legacy of the critique of technology, Lewis Mumford and Jacques Ellul, but only in order to dismiss them out of hand. The irony, naturally, is that thinkers like Mumford and Ellul (to say nothing of Postman and Noble) would not have been surprised in the least by the ITIF report, as their critiques of technology included a recognition of the ways the dominant forces in technological society (be it Ellul’s “Technique” or Mumford’s “megamachine”) depended upon the ideological fealty of those who saw their own best interests as aligned with the new technological regimes of power. Indeed, the ideological celebrants of technology have become a sort of new priesthood for the religion of technology, though as Mumford quipped in Art and Technics:
“If you fall in love with a machine there is something wrong with your love-life. If you worship a machine there is something wrong with your religion.” (Art and Technics, 81)
Swap the word “machine” in the above quotation for “technological innovation” and it applies perfectly to the ITIF awards document. And yet, playful gibes aside, there are many more (many, many more) barbs one can imagine Mumford directing at the ITIF. As Mumford wrote in The Pentagon of Power:
“Consistently the agents of the megamachine act as if their only responsibility were to the power system itself. The interests and demands of the populations subjected to the megamachine are not only unheeded but deliberately flouted.” (The Pentagon of Power, 271)
The ITIF “Luddite Awards” are a pure demonstration of this deliberate flouting of “the interests and demands of the populations” who find themselves always on the receiving end of “technological innovation.” The report shows an almost startling disregard for the concerns of “everyday people,” and though the ITIF is a proudly nonpartisan organization, the report demonstrates a disturbingly anti-democratic tendency. That the group leans heavily toward neither Democrats nor Republicans only demonstrates the degree to which both parties eat from the same neoliberal trough, routinely filled with fresh ideological slop by think tanks like the ITIF. Groups that advocate in the public sphere on behalf of their supporters (such as Free Press, the EFF, and the NRA {yes, even them}) are treated as interlopers worthy of mockery for having the audacity to raise concerns; elected governmental bodies are likewise berated for daring to pass timid regulations. The “ideal society” one detects in the ITIF report is one in which “technological innovation” knows no limits and encounters no opposition, even where those limits are relatively weak regulations or simply citizens daring to voice a contrary opinion: consequences be damned! Aboard the high-speed societal train of “technological innovation,” the ITIF confuses a few groups asking for a slight reduction in speed with groups threatening to derail the train.
Thus the key problem of the ITIF “Luddite Awards” emerges, and it is not simply that the ITIF continues to use “Luddite” as an epithet; it is that the ITIF seems willfully ignorant of any ethical imperative other than a broadly defined love of “technological innovation.” In handing out “Luddite Awards” the ITIF reveals that it regards “technological innovation” as the crowning example of “the good”: not simply one good amongst many that must carefully compromise with other values (such as privacy, environmental concerns, labor issues, and so forth), but the definitive and ultimate case of “the good.” This is not to claim that “technological innovation” cannot be counted amongst the values that represent “the good,” but it is not the only such value, and treating it as such leads to confusing (to borrow a formulation from Lewis Mumford) “the goods life with the good life.” By privileging “technological innovation” absolutely, the ITIF treats other values and ethical claims as if they are to be discarded; the philosopher Hans Jonas’s The Imperative of Responsibility (which advocated a cautious approach to technological innovation, emphasizing the potential risks inherent in new technologies) is tossed out the window, to be replaced by “the imperative of innovation” along with a stack of business books and perhaps an Ayn Rand novel, or two, for good measure.
Indeed, responsibility for the negative impacts of innovation is shrugged off in the ITIF awards, even as many of the awardees (such as the various governments) wrestle with the responsibility that tech companies so happily flout. The disrupters hate being disrupted. Furthermore, as should come as no surprise, the ITIF report maintains an aura that smells strongly of colonialism and of disregard for the difficulties faced by those who are “disrupted” by “technological innovation.” The ITIF may want to reprimand organizations for trying to gently slow (which is not the same as stopping) certain forms of “technological innovation,” but the report has nothing to say about those who mine the coltan that powers so many innovative devices, no concern for the factory workers who assemble those devices, and, of course, nothing to say about e-waste. Evidently to think such things worthy of concern, to even raise the issue of consequences, is a sign of Ludditism. The ITIF holds out the promise of “better days ahead” and shows no concern for those whose lives must be trampled in the process. Granted, it is easy to ignore such issues when you work for a think tank in Washington, DC, and not as a coltan miner, a device assembler, a resident near an e-waste dump, or an individual whose job has just been automated.
The ITIF “Luddite Awards” are yet another installment of the tech world/business press game of “Who’s Afraid of General Ludd,” in which the group shouting “Luddite” at all opponents reveals that it has a less nuanced understanding of technology than the historic Luddites themselves had. After all, the Luddites were not opposed to technology as such, nor were they opposed to “technological innovation”; rather, as E.P. Thompson describes in The Making of the English Working Class:
“What was at issue was the ‘freedom’ of the capitalist to destroy the customs of the trade, whether by new machinery, by the factory-system, or by unrestricted competition, beating-down wages, undercutting his rivals, and undermining standards of craftsmanship…They saw laissez faire, not as freedom but as ‘foul Imposition’. They could see no ‘natural law’ by which one man, or a few men, could engage in practices which brought manifest injury to their fellows.” (Thompson, 548)
What is at issue in the “Luddite Awards” is the “freedom” of “technological innovators” (the same old “capitalists”) to force their priorities upon everybody else, and while the ITIF may want to applaud such “freedom,” it is clear that the foundation does not intend to extend it to the rest of the population. The fear that can be detected in the ITIF “Luddite Awards” is not ultimately directed at the award recipients but at an aspect of the historic Luddites that the report seems keen on forgetting: namely, that the Luddites organized a mass movement enjoying remarkable popular support, which is why it ultimately took the military (not any “seeing the light” of “technological innovation”) to bring the Luddite uprisings to a halt. While it is questionable whether many recipients of the “Luddite Awards” will view the award as an honor, the term “Luddite” can only be seen as a fantastic compliment when it is used as a synonym for a person (or group) that dares to be concerned with ethical and democratic values beyond a simple fanatical allegiance to “technological innovation.” Indeed, what the ITIF “Luddite Awards” demonstrate is the continuing accuracy of the philosopher Günther Anders’s statement, in the second volume of The Obsolescence of Man, that:
“In this situation, it is no use to brandish scornful words like ‘Luddites’. If there is anything that deserves scorn it is, to the contrary, today’s scornful use of the term, ‘Luddite’ since this scorn…is currently more obsolete than the allegedly obsolete Luddism.” (Anders, Introduction – Section 7)
After all, as Anders might have reminded the people at ITIF: gas chambers, depleted uranium shells, and nuclear weapons are also “technological innovations.”
Works Cited
Anders, Günther. The Obsolescence of Man, Volume II: On the Destruction of Life in the Epoch of the Third Industrial Revolution. Translated by Josep Monter Pérez. Valencia: Pre-Textos, 2011.
Mumford, Lewis. The Myth of the Machine, Volume II: The Pentagon of Power. New York: Harvest/Harcourt Brace Jovanovich, 1970.
Mumford, Lewis. Art and Technics. New York: Columbia University Press, 2000.
Thompson, E.P. The Making of the English Working Class. New York: Vintage Books, 1966.
Not cited but worth a look – Eric Hobsbawm’s classic article “The Machine Breakers.”
_____
Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, alternative forms of technology, and libraries as models of resistance. Using the moniker “The Luddbrarian,” Loeb writes at the blog LibrarianShipwreck, where this post first appeared. He is a frequent contributor to The b2 Review Digital Studies section.