boundary 2


  • Coding Bootcamps and the New For-Profit Higher Ed

    By Audrey Watters
    ~
    After decades of explosive growth, the future of for-profit higher education might not be so bright. Or, depending on where you look, it just might be…

    In recent years, there have been a number of investigations – in the media, by the government – into the for-profit college sector and questions about these schools’ ability to effectively and affordably educate their students. Sure, advertising for for-profits is still plastered all over the Web, the airwaves, and public transportation, but as a result of journalistic and legal pressures, the lure of these schools may well be a lot less powerful. If nothing else, enrollment and profits at many for-profit institutions are down.

    Despite the massive amounts of money spent by the industry to prop itself up – not just on ads but on lobbying and legal efforts – the Obama Administration has made cracking down on for-profits a centerpiece of its higher education policy, accusing these schools of luring students with misleading and overblown promises, often leaving them with low-status degrees sneered at by employers and with loans they can’t afford to pay back.

    But the Obama Administration has also just launched an initiative that will make federal financial aid available to newcomers in the for-profit education sector: ed-tech experiments like “coding bootcamps” and MOOCs. Why are these particular for-profit experiments deemed acceptable? What do they do differently from the much-maligned for-profit universities?

    School as “Skills Training”

    In many ways, coding bootcamps share the justification for their existence with for-profit universities. That is, they were founded to help meet the (purported) demands of the job market: training people in certain technical skills, particularly those that meet the short-term needs of employers. Whether they meet students’ long-term goals remains to be seen.

    I write “purported” here even though it’s quite common to hear claims that the economy is facing a “STEM crisis” – that too few people have studied science, technology, engineering, or math and employers cannot find enough skilled workers to fill jobs in those fields. But claims about a shortage of technical workers are debatable, and lots of data would indicate otherwise: wages in STEM fields have remained flat, for example, and many who graduate with STEM degrees cannot find work in their field. In other words, the crisis may be “a myth.”

    But it’s a powerful myth, and one that isn’t terribly new, dating back at least to the launch of the Sputnik satellite in 1957 and subsequent hand-wringing over the Soviets’ technological capabilities and technical education as compared to the US system.

    There are actually a number of narratives – some of them competing narratives – at play here in the recent push for coding bootcamps, MOOCs, and other ed-tech initiatives: that everyone should go to college; that college is too expensive – “a bubble” in the Silicon Valley lexicon; that alternate forms of credentialing will be developed (by the technology sector, naturally); that the tech sector is itself a meritocracy, and college degrees do not really matter; that earning a degree in the humanities will leave you unemployed and burdened by student loan debt; that everyone should learn to code. Much like that supposed STEM crisis and skill shortage, these narratives might be powerful, but they too are hardly provable.

    Nor is the promotion of a more business-focused education all that new.

    Career Colleges: A History

    Foster’s Commercial School of Boston, founded in 1832 by Benjamin Franklin Foster, is often recognized as the first school established in the United States for the specific purpose of teaching “commerce.” Many other commercial schools opened on its heels, most located in the Atlantic region in major trading centers like Philadelphia, Boston, New York, and Charleston. As the country expanded westward, so did these schools. Bryant & Stratton College was founded in Cleveland in 1854, for example, and it established a chain of schools, promising to open a branch in every American city with a population of more than 10,000. By 1864, it had opened more than 50, and the chain is still in operation today with 18 campuses in New York, Ohio, Virginia, and Wisconsin.

    The curriculum of these commercial colleges was largely built around the demands of local employers and an economy being transformed by the Industrial Revolution. Schools offered courses in bookkeeping, accounting, penmanship, surveying, and stenography. This was in marked contrast to those universities built on a European model, which tended to teach topics like theology, philosophy, and classical language and literature. If these universities were “elitist,” the commercial colleges were “popular” – they enrolled over 70,000 students in 1897, compared to just 5,800 in colleges and universities – a contrast that highlights a refrain still familiar today: that traditional higher ed institutions do not meet everyone’s needs.

    The existence of the commercial colleges became intertwined in many success stories of the nineteenth century: Andrew Carnegie attended night school in Pittsburgh to learn bookkeeping, and John D. Rockefeller studied banking and accounting at Folsom’s Commercial College in Cleveland. The type of education offered at these schools was promoted as a path to become a “self-made man.”

    That’s the story that still gets told: these sorts of classes open up opportunities for anyone to gain the skills (and perhaps the certification) that will enable upward mobility.

    It’s a story echoed in the ones told about (and by) John Sperling as well. Born into a working-class family, Sperling worked as a merchant marine, then attended community college during the day and worked as a gas station attendant at night. He later transferred to Reed College, went on to UC Berkeley, and completed his doctorate at Cambridge University. But Sperling felt as though these prestigious colleges catered to privileged students; he wanted a better way for working adults to complete their degrees. In 1976, he founded the University of Phoenix, one of the largest for-profit colleges in the US, which at its peak in 2010 enrolled almost 600,000 students.

    Other well-known names in the business of for-profit higher education: Walden University (founded in 1970), Capella University (founded in 1993), Laureate Education (founded in 1999), DeVry University (founded in 1931), Education Management Corporation (founded in 1962), Strayer University (founded in 1892), Kaplan University (founded in 1937 as The American Institute of Commerce), and Corinthian Colleges (founded in 1995 and defunct in 2015).

    It’s important to recognize the connection of these for-profit universities to older career colleges, and it would be a mistake to see these organizations as distinct from the more recent development of MOOCs and coding bootcamps. Kaplan, for example, acquired the code school Dev Bootcamp in 2014. Laureate Education is an investor in the MOOC provider Coursera. The Apollo Education Group, the University of Phoenix’s parent company, is an investor in the coding bootcamp The Iron Yard.

    Promises, Promises

    Much like the worries about today’s for-profit universities, even the earliest commercial colleges were frequently accused of being “purely business speculations” – “diploma mills” – mishandled by administrators who put the bottom line over the needs of students. There were concerns about the quality of instruction and about the value of the education students were receiving.

    That’s part of the apprehension about for-profit universities’ most recent manifestations too: that these schools are charging a lot of money for a certification that, at the end of the day, means little. But at least the nineteenth-century commercial colleges were affordable, UC Berkeley history professor Caitlin Rosenthal argues in a 2012 op-ed in Bloomberg:

    The most common form of tuition at these early schools was the “life scholarship.” Students paid a lump sum in exchange for unlimited instruction at any of the college’s branches – $40 for men and $30 for women in 1864. This was a considerable fee, but much less than tuition at most universities. And it was within reach of most workers – common laborers earned about $1 per day and clerks’ wages averaged $50 per month.
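Rosenthal’s affordability claim is easy to sanity-check with quick arithmetic. The sketch below simply restates the figures quoted above (the $40 fee, $1-per-day laborer wages, $50-per-month clerk wages):

```python
# Quick arithmetic on the figures quoted above: how many days or months of
# work the $40 "life scholarship" represented at mid-1860s wages.
life_scholarship = 40        # men's fee in 1864, in dollars
laborer_daily_wage = 1       # "common laborers earned about $1 per day"
clerk_monthly_wage = 50      # "clerks' wages averaged $50 per month"

days_of_labor = life_scholarship / laborer_daily_wage       # working days to cover the fee
months_of_clerking = life_scholarship / clerk_monthly_wage  # fraction of a clerk's month
print(days_of_labor, months_of_clerking)  # → 40.0 0.8
```

Roughly six weeks of a laborer’s wages, or less than a month of a clerk’s pay – “within reach of most workers,” as the op-ed puts it.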

    Many of these “life scholarships” promised that students who enrolled would land a job – and if they didn’t, they could always continue their studies. That’s quite different than the tuition at today’s colleges – for-profit or not-for-profit – which comes with no such guarantee.

    Interestingly, several coding bootcamps do make this promise. A 48-week online program at Bloc will run you $24,000, for example. But if you don’t find a job that pays $60,000 after four months, your tuition will be refunded, the startup has pledged.

    According to a recent survey of coding bootcamp alumni, 66% of graduates do say they’ve found employment (63% of them full-time) in a job that requires the skills they learned in the program. 89% of respondents say they found a job within 120 days of completing the bootcamp. Yet 21% say they’re unemployed – a number that seems quite high, particularly in light of that supposed shortage of programming talent.

    For-Profit Higher Ed: Who’s Being Served?

    The gulf between for-profit higher ed’s promise of improved job prospects and the realities of graduates’ employment, along with its high tuition rates, is one of the reasons the Obama Administration has advocated for “gainful employment” rules. These would measure and monitor the debt-to-earnings ratio of graduates from career colleges and, in turn, penalize those schools whose graduates had annual loan payments of more than 8% of their wages or 20% of their discretionary earnings. (The gainful employment rules only apply to schools that are eligible for Title IV federal financial aid.)
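The debt-to-earnings test just described can be sketched in a few lines. This is only a schematic reading of the rule as summarized here: the 8% and 20% thresholds come from the text, while the poverty-line figure, the definition of discretionary income (earnings above 150% of the poverty line), and the either/or pass logic are assumptions about how such a rule is typically operationalized.

```python
# A schematic version of the "gainful employment" debt-to-earnings test
# described above. Thresholds (8% of wages, 20% of discretionary income)
# are from the text; the poverty-line figure and the either/or pass logic
# are assumptions for illustration.
POVERTY_GUIDELINE = 11_770  # 2015 federal poverty guideline, single person (assumption)

def passes_gainful_employment(annual_loan_payment: float, annual_earnings: float) -> bool:
    """Return True if a program's typical graduate stays under either threshold."""
    discretionary = max(annual_earnings - 1.5 * POVERTY_GUIDELINE, 0)
    wage_ratio = annual_loan_payment / annual_earnings
    disc_ratio = annual_loan_payment / discretionary if discretionary else float("inf")
    return wage_ratio <= 0.08 or disc_ratio <= 0.20

print(passes_gainful_employment(2_000, 30_000))  # → True  (~6.7% of wages)
print(passes_gainful_employment(5_000, 30_000))  # → False (~16.7% of wages, ~40% of discretionary)
```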

    The data is still murky about how much debt attendees at coding bootcamps accrue and how “worth it” these programs really might be. According to the aforementioned survey, the average tuition at these programs is $11,852. This figure might be a bit misleading, as the price tag and the length of bootcamps vary greatly. Moreover, many programs, such as App Academy, offer instruction for free (well, plus a $5000 deposit) but then require that graduates pay up to 20% of their first year’s salary back to the school. So while the tuition might appear to be low in some cases, the indebtedness might actually be quite high.
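To see why the headline tuition can understate the real cost, compare the income-share arrangement just described with the survey’s average tuition. A minimal sketch: the 20% share and $11,852 average come from the text, the $75,000 salary is hypothetical, and the $5,000 deposit is set aside.

```python
# Comparing a "free" income-share program (pay 20% of first-year salary)
# with the $11,852 average up-front tuition reported by Course Report.
AVERAGE_TUITION = 11_852

def income_share_cost(first_year_salary: float, share: float = 0.20) -> float:
    """Amount owed to the school under an income-share arrangement."""
    return share * first_year_salary

owed = income_share_cost(75_000)  # hypothetical first-year salary
print(f"${owed:,.0f}", owed > AVERAGE_TUITION)  # → $15,000 True
```

At a typical developer salary, the “free” program ends up costing more than the average up-front tuition.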

    According to Course Report’s survey, 49% of graduates say that they paid tuition out of their own pockets, 21% say they received help from family, and just 1.7% say that their employer paid (or helped with) the tuition bill. Almost 25% took out a loan.

    That percentage – those going into debt for a coding bootcamp program – has increased quite dramatically over the last few years. (Less than 4% of graduates in the 2013 survey said that they had taken out a loan). In part, that’s due to the rapid expansion of the private loan industry geared towards serving this particular student population. (Incidentally, the two ed-tech companies which have raised the most money in 2015 are both loan providers: SoFi and Earnest. The former has raised $1.2 billion in venture capital this year; the latter $245 million.)

    The Obama Administration’s newly proposed “EQUIP” experiment will open up federal financial aid to some coding bootcamps and other ed-tech providers (like MOOC platforms), but it’s important to underscore some of the key differences here between federal loans and private-sector loans: federal student loans don’t have to be repaid until you graduate or leave school; federal student loans offer forbearance and deferment if you’re struggling to make payments; federal student loans have a fixed interest rate, often lower than private loans; federal student loans can be forgiven if you work in public service; federal student loans (with the exception of PLUS loans) do not require a credit check. The latter in particular might help to explain the demographics of those who are currently attending coding bootcamps: if they’re having to pay out-of-pocket or take loans, students are much less likely to be low-income. Indeed, according to Course Report’s survey, the cost of the bootcamps and whether or not they offered a scholarship was one of the least important factors when students chose a program.

    Here’s a look at some coding bootcamp graduates’ demographic data (as self-reported):

    Age
        Mean age: 30.95
    Gender
        Female: 36.3%
        Male: 63.1%
    Ethnicity
        American Indian: 1.0%
        Asian American: 14.0%
        Black: 5.0%
        Other: 17.2%
        White: 62.8%
    Hispanic Origin
        Yes: 20.3%
        No: 79.7%
    Citizenship
        Yes, born in the US: 78.2%
        Yes, naturalized: 9.7%
        No: 12.2%
    Education
        High school dropout: 0.2%
        High school graduate: 2.6%
        Some college: 14.2%
        Associate’s degree: 4.1%
        Bachelor’s degree: 62.1%
        Master’s degree: 14.2%
        Professional degree: 1.5%
        Doctorate degree: 1.1%

    (According to several surveys of MOOC enrollees, these students also tend to be overwhelmingly male and from more affluent neighborhoods, and they tend to already possess Bachelor’s degrees. The median age of MITx registrants is 27.)

    It’s worth considering how the demographics of students in MOOCs and coding bootcamps may (or may not) be similar to those enrolled at other for-profit post-secondary institutions, particularly since all of these programs tend to invoke the rhetoric about “democratizing education” and “expanding access.” Access for whom?

    Some two million students were enrolled in for-profit colleges in 2010, up from 400,000 a decade earlier. These students are disproportionately older, African American, and female when compared to the higher ed student population as a whole. While 1 in 20 of all students is enrolled in a for-profit college, 1 in 10 African American students, 1 in 14 Latino students, and 1 in 14 first-generation college students are enrolled at a for-profit. Students at for-profits are more likely to be single parents. They’re less likely to enter with a high school diploma. Dependent students at for-profits have about half as much family income as students at not-for-profit schools. (This demographic data is drawn from the NCES and from Harvard University researchers David Deming, Claudia Goldin, and Lawrence Katz in their 2013 study of for-profit colleges.)

    Deming, Goldin, and Katz argue that

    The snippets of available evidence suggest that the economic returns to students who attend for-profit colleges are lower than those for public and nonprofit colleges. Moreover, default rates on student loans for proprietary schools far exceed those of other higher-education institutions.

    According to one 2010 report, just 22% of first-time, full-time students pursuing Bachelor’s degrees at for-profit colleges in 2008 graduated, compared to 55% and 65% of students at public and private non-profit universities respectively. Of the more than 5000 career programs that the Department of Education tracks, 72% of those offered by for-profit institutions produce graduates who earn less than high school dropouts.

    For their part, today’s MOOCs and coding bootcamps also boast that their students will find great success on the job market. Coursera, for example, recently surveyed its students who’d completed one of its online courses and 72% who responded said they had experienced “career benefits.” But without the mandated reporting that comes with federal financial aid, a lot of what we know about their student population and student outcomes remains pretty speculative.

    What kind of students benefit from coding bootcamps and MOOC programs, the new for-profit education? We don’t really know… although based on the history of higher education and employment, we can guess.

    EQUIP and the New For-Profit Higher Ed

    On October 14, the Obama Administration announced a new initiative, the Educational Quality through Innovative Partnerships (EQUIP) program, which will provide a pathway for unaccredited education programs like coding bootcamps and MOOCs to become eligible for federal financial aid. According to the Department of Education, EQUIP is meant to open up “new models of education and training” to low income students. In a press release, it argues that “Some of these new models may provide more flexible and more affordable credentials and educational options than those offered by traditional higher institutions, and are showing promise in preparing students with the training and education needed for better, in-demand jobs.”

    The EQUIP initiative will partner accredited institutions with third-party providers, loosening the “50% rule” that prohibits accredited schools from outsourcing more than 50% of an accredited program. Since bootcamps and MOOC providers “are not within the purview of traditional accrediting agencies,” the Department of Education says, “we have no generally accepted means of gauging their quality.” So those organizations that apply for the experiment will have to provide an outside “quality assurance entity,” which will help assess “student outcomes” like learning and employment.

    With financial aid now available for bootcamps and MOOCs, one does have to wonder whether the Obama Administration is not simply opening the doors to more of precisely the practices the for-profit education industry has long been accused of: expanding rapidly, lowering the quality of instruction, marketing heavily to certain populations (such as veterans), and profiting off of taxpayer dollars.

    Who benefits from the availability of aid? And who benefits from its absence? (“Who” here refers to students and to schools.)

    Shawna Scott argues in “The Code School-Industrial Complex” that without oversight, coding bootcamps re-inscribe the dominant beliefs and practices of the tech industry. Despite all the talk of “democratization,” this is a new form of gatekeeping.

    Before students are even accepted, school admission officers often select for easily marketable students, which often translates to students with the most privileged characteristics. Whether through intentionally targeting those traits because it’s easier to ensure graduates will be hired, or because of unconscious bias, is difficult to discern. Because schools’ graduation and employment rates are their main marketing tool, they have a financial stake in only admitting students who are at low risk of long-term unemployment. In addition, many schools take cues from their professional developer founders and run admissions like they hire for their startups. Students may be subjected to long and intensive questionnaires, phone or in-person interviews, or be required to submit a ‘creative’ application, such as a video. These requirements are often onerous for anyone working at a paid job or as a caretaker for others. Rarely do schools proactively provide information on alternative application processes for people of disparate ability. The stereotypical programmer is once again the assumed default.

    And so, despite the recent moves to sanction certain ed-tech experiments, some in the tech sector have been quite vocal in their opposition to more regulations governing coding schools. It’s not just EQUIP either; there was much outcry last year after several states, including California, “cracked down” on bootcamps. Many others have framed the entire accreditation system as a “cabal” that stifles innovation. “Innovation” in this case implies alternate certificate programs – not simply Associate’s or Bachelor’s degrees – in timely, technical topics demanded by local/industry employers.

    The Forgotten Tech Ed: Community Colleges

    Of course, there is an institution that’s long offered alternate certificate programs in timely, technical topics demanded by local/industry employers, and that’s the community college system.

    Vox’s Libby Nelson observed that “The NYT wrote more about Harvard last year than all community colleges combined,” and certainly the conversations in the media (and elsewhere) often ignore that community colleges exist at all, even though these schools educate almost half of all undergraduates in the US.

    Like much of public higher education, community colleges have seen their funding shrink in recent decades and have been tasked to do more with less. For community colleges, it’s a lot more with a lot less. Open enrollment, for example, means that these schools educate students who require more remediation. Yet despite many community college students being “high need,” community colleges spend far less per pupil than four-year institutions do. Deep budget cuts have also meant that, even with their open enrollment policies, community colleges are having to restrict admissions. In 2012, some 470,000 students in California were on waiting lists, unable to get into the courses they needed.

    This is what we know from history: as funding for public higher ed decreased – for two- and four-year schools alike – for-profit higher ed expanded, promising precisely what today’s MOOCs and coding bootcamps now insist they’re the first and only schools to offer: innovative programs, training students in the kinds of skills that will lead to good jobs. History tells us otherwise…
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this essay first appeared, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.

  • Men (Still) Explain Technology to Me: Gender and Education Technology

    By Audrey Watters
    ~

    Late last year, I gave a similarly titled talk—“Men Explain Technology to Me”—at the University of Mary Washington. (I should note here that the slides for that talk were based on a couple of blog posts by Mallory Ortberg that I found particularly funny, “Women Listening to Men in Art History” and “Western Art History: 500 Years of Women Ignoring Men.” I wanted to do something similar with my slides today: find historical photos of men explaining computers to women. Mostly I found pictures of men or women working separately, working in isolation. Mostly pictures of men and computers.)

    Men Explain Technology

    So that University of Mary Washington talk: It was the last talk I delivered in 2014, and I did so with a sigh of relief, but also more than a twinge of frightened nausea—nausea that wasn’t nerves from speaking in public. I’d had more than a year full of public speaking under my belt—exhausting enough as I always try to write new talks for each event, but a year that had become complicated quite frighteningly in part by an ongoing campaign of harassment against women on the Internet, particularly those who worked in video game development.

    Known as “GamerGate,” this campaign had reached a crescendo of sorts in the lead-up to my talk at UMW, some of its hate aimed at me because I’d written about the subject, demanding that those in ed-tech pay attention and speak out. So no surprise, all this colored how I shaped that talk about gender and education technology, because, of course, my gender shapes how I experience working in and working with education technology. As I discussed then at the University of Mary Washington, I have been on the receiving end of threats and harassment for stories I’ve written about ed-tech—almost all the women I know who have a significant online profile have in some form or another experienced something similar. According to a Pew Research survey last year, one in five Internet users reports being harassed online. But GamerGate felt—feels—particularly unhinged. The death threats to Anita Sarkeesian, Zoe Quinn, Brianna Wu, and others were—are—particularly real.

    I don’t really want to rehash all of that here today, particularly my experiences being on the receiving end of the harassment; I really don’t. You can read a copy of that talk from last November on my website. I will say this: GamerGate supporters continue to argue that their efforts are really about “ethics in journalism” not about misogyny, but it’s quite apparent that they have sought to terrorize feminists and chase women game developers out of the industry. Insisting that video games and video game culture retain a certain puerile machismo, GamerGate supporters often chastise those who seek to change the content of video games, change the culture to reflect the actual demographics of video game players. After all, a recent industry survey found women 18 and older represent a significantly greater portion of the game-playing population (36%) than boys age 18 or younger (17%). Just over half of all gamers are men (52%); that means just under half are women. Yet those who want video games to reflect these demographics are dismissed by GamerGate as “social justice warriors.” Dismissed. Harassed. Shouted down. Chased out.

    And yes, more mildly perhaps, the verb that grew out of Rebecca Solnit’s wonderful essay “Men Explain Things to Me” and the inspiration for the title to this talk, mansplained.

    Solnit first wrote that essay back in 2008 to describe her experiences as an author—and as such, an expert on certain subjects—whereby men would presume she was in need of their enlightenment and information—in her words “in some sort of obscene impregnation metaphor, an empty vessel to be filled with their wisdom and knowledge.” She related several incidents in which men explained to her topics on which she’d published books. She knew things, but the presumption was that she was uninformed. Since her essay was first published the term “mansplaining” has become quite ubiquitous, used to describe the particular online version of this—of men explaining things to women.

    I experience this a lot. And while the threats and harassment in my case are rare but debilitating, the mansplaining is more insidious. It is overpowering in a different way. “Mansplaining” is a micro-aggression, a practice of undermining women’s intelligence, their contributions, their voice, their experiences, their knowledge, their expertise; and frankly once these pile up, these mansplaining micro-aggressions, they undermine women’s feelings of self-worth. Women begin to doubt what they know, doubt what they’ve experienced. And then, in turn, women decide not to say anything, not to speak.

    I speak from experience. On Twitter, I have almost 28,000 followers, most of whom follow me, I’d wager, because from time to time I say smart things about education technology. Yet regularly, men—strangers, typically, but not always—jump into my “@-mentions” to explain education technology to me. To explain open source licenses or open data or open education or MOOCs to me. Men explain learning management systems to me. Men explain the history of education technology to me. Men explain privacy and education data to me. Men explain venture capital funding of education startups to me. Men explain the business of education technology to me. Men explain blogging and journalism and writing to me. Men explain online harassment to me.

    The problem isn’t just that men explain technology to me. It isn’t just that a handful of men explain technology to the rest of us. It’s that this explanation tends to foreclose questions we might have about the shape of things. We can’t ask because if we show the slightest intellectual vulnerability, our questions—we ourselves—lose a sort of validity.

    Yet we are living in a moment, I would contend, when we must ask better questions of technology. We neglect to do so at our own peril.

    Last year when I gave my talk on gender and education technology, I was particularly frustrated by the mansplaining, to be sure, but I was also frustrated that those of us who work in the field had remained silent about GamerGate and, more broadly, about all sorts of issues relating to equity and social justice. Of course, I do know firsthand that it can be difficult, if not dangerous, to speak out—to talk critically and write critically about GamerGate, for example. But refusing to look at some of the most egregious acts all too easily means ignoring some of the more subtle ways in which marginalized voices are made to feel uncomfortable, unwelcome online. Because GamerGate is really just one manifestation of deeper issues—structural issues—with society, culture, technology. It’s wrong to focus on just a few individual bad actors or on a terrible Twitter hashtag and ignore the systemic problems. We must consider who else is being chased out and silenced, not simply from the video game industry but from the technology industry and a technological world writ large.

    I know I have to come right out and say it, because very few people in education technology will: there is a problem with computers. Culturally. Ideologically. There’s a problem with the internet. Largely designed by men from the developed world, it is built for men of the developed world. Men of science. Men of industry. Military men. Venture capitalists. Despite all the hype and hope about revolution and access and opportunity that these new technologies will provide us, they do not negate hierarchy, history, privilege, power. They reflect those. They channel it. They concentrate it, in new ways and in old.

    I want us to consider these bodies, their ideologies and how all of this shapes not only how we experience technology but how it gets designed and developed as well.

    There’s that very famous New Yorker cartoon: “On the internet, nobody knows you’re a dog.” The cartoon was first published in 1993, and it demonstrates this sense that we have long had that the Internet offers privacy and anonymity, that we can experiment with identities online in ways that are severed from our bodies, from our material selves and that, potentially at least, the internet can allow online participation for those denied it offline.

    Perhaps, yes.

    But sometimes when folks on the internet discover “you’re a dog,” they do everything in their power to put you back in your place, to remind you of your body. To punish you for being there. To hurt you. To threaten you. To destroy you. Online and offline.

    Neither the internet nor computer technology writ large are places where we can escape the materiality of our physical worlds—bodies, institutions, systems—as much as that New Yorker cartoon joked that we might. In fact, I want to argue quite the opposite: that computer and Internet technologies actually re-inscribe our material bodies, the power and the ideology of gender and race and sexual identity and national identity. They purport to be ideology-free and identity-less, but they are not. If identity is unmarked it’s because there’s a presumption of maleness, whiteness, and perhaps even a certain California-ness. As my friend Tressie McMillan Cottom writes, in ed-tech we’re all supposed to be “roaming autodidacts”: happy with school, happy with learning, happy and capable and motivated and well-networked, with functioning computers and WiFi that works.

    By and large, all of this reflects who is driving the conversation about, if not the development of, these technologies. Who is seen as building technologies. Who some think should build them; who some think have always built them.

    And that right there is already a process of erasure, a different sort of mansplaining one might say.

    Last year, when Walter Isaacson was doing the publicity circuit for his latest book, The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution (2014), he’d often relate how his teenage daughter had written an essay about Ada Lovelace, a figure whom Isaacson admitted he’d never heard of before. Sure, he’d written biographies of Steve Jobs and Albert Einstein and Benjamin Franklin and other important male figures in science and technology, but the name and the contributions of this woman were entirely unknown to him. Ada Lovelace, daughter of Lord Byron and the woman whose notes on Charles Babbage’s proto-computer, the Analytical Engine, are now recognized as making her the world’s first computer programmer. Ada Lovelace, the author of the world’s first computer algorithm. Ada Lovelace, the person at the very beginning of the field of computer science.

    Ada Lovelace
    Augusta Ada King, Countess of Lovelace, now popularly known as Ada Lovelace, in a painting by Alfred Edward Chalon (image source: Wikipedia)

    “Ada Lovelace defined the digital age,” Isaacson said in an interview with The New York Times. “Yet she, along with all these other women, was ignored or forgotten.” (Actually, the world has been celebrating Ada Lovelace Day since 2009.)

    Isaacson’s book describes Lovelace like this: “Ada was never the great mathematician that her canonizers claim…” and “Ada believed she possessed special, even supernatural abilities, what she called ‘an intuitive perception of hidden things.’ Her exalted view of her talents led her to pursue aspirations that were unusual for an aristocratic woman and mother in the early Victorian age.” The implication: she was a bit of an interloper.

    A few other women populate Isaacson’s The Innovators: Grace Hopper, who invented the first computer compiler and who developed the programming language COBOL. Isaacson describes her as “spunky,” not an adjective that I imagine would be applied to a male engineer. He also talks about the six women who helped program the ENIAC, the first electronic general-purpose computer. Their names, because we need to say these things out loud more often: Jean Jennings, Marilyn Wescoff, Ruth Lichterman, Betty Snyder, Frances Bilas, Kay McNulty. (I say that having visited Bletchley Park, where civilian women’s involvement has been erased; because the work was classified, they were forbidden from talking about their contributions to the cryptography and computing efforts there.)

    In the end, it’s hard to read Isaacson’s book without coming away thinking that, other than a few notable exceptions, the history of computing is the history of men, white men. The book mentions the educator Seymour Papert in passing, for example, but assigns the development of Logo, a programming language for children, to him alone. No mention of the others involved: Daniel Bobrow, Wally Feurzeig, and Cynthia Solomon.

    Even a book that purports to reintroduce the contributions of those forgotten “innovators,” that says it wants to complicate the story of a few male inventors of technology by looking at collaborators and groups, still in the end tells a story that ignores, if not undermines, women. Men explain the history of computing, if you will. As such, it also depicts and reflects a culture that doesn’t simply forget women but systematically alienates them. Women are a rediscovery project, always having to be reintroduced, found, rescued. There’s been very little reflection upon that fact—in Isaacson’s book or in the tech industry writ large.

    This matters not just for the history of technology but for technology today. And it matters for ed-tech as well. (Unless otherwise noted, the following data comes from diversity self-reports issued by the companies in 2014.)

    • Currently, fewer than 20% of computer science degrees in the US are awarded to women. (I don’t know if it’s different in the UK.) It’s a number that’s actually fallen over the past few decades from a high in 1983 of 37%. Computer science is the only field in science, engineering, and mathematics in which the number of women receiving bachelor’s degrees has fallen in recent years. And when it comes to the employment not just the education of women in the tech sector, the statistics are not much better. (source: NPR)
    • 70% of Google employees are male. 61% are white and 30% Asian. Of Google’s “technical” employees, 83% are male; 60% of those are white and 34% are Asian.
    • 70% of Apple employees are male. 55% are white and 15% are Asian. 80% of Apple’s “technical” employees are male.
    • 69% of Facebook employees are male. 57% are white and 34% are Asian. 85% of Facebook’s “technical” employees are male.
    • 70% of Twitter employees are male. 59% are white and 29% are Asian. 90% of Twitter’s “technical” employees are male.
    • Only 2.7% of startups that received venture capital funding between 2011 and 2013 had women CEOs, according to one survey.
    • And of course, Silicon Valley was recently embroiled in a sexual discrimination trial involving the storied VC firm Kleiner Perkins Caufield & Byers, filed by former executive Ellen Pao, who claimed that men at the firm were paid more and promoted more easily than women. Welcome neither as investors nor entrepreneurs nor engineers, it’s hardly a surprise that, as The Los Angeles Times recently reported, women are leaving the tech industry “in droves.”

    This doesn’t just matter because computer science leads to “good jobs” or because tech startups lead to “good money.” It matters because the tech sector has an increasingly powerful reach in how we live and work and communicate and learn. It matters ideologically. If the tech sector drives out women, if it excludes people of color, that matters for jobs, sure. But it matters in terms of the projects undertaken, the problems tackled, the “solutions” designed and developed.

    So it’s probably worth asking what the demographics look like for education technology companies. What percentage of those building ed-tech software are men, for example? What percentage are white? What percentage of ed-tech startup engineers are men? Across the field, what percentage of education technologists—instructional designers, campus IT, sysadmins, CTOs, CIOs—are men? What percentage of “education technology leaders” are men? What percentage of education technology consultants? What percentage of those on the education technology speaking circuit? What percentage of those developing not just implementing these tools?

    And how do these bodies shape what gets built? How do they shape how the “problem” of education gets “fixed”? How do privileges, ideologies, expectations, values get hard-coded into ed-tech? I’d argue that they do in ways that are both subtle and overt.

    That word “privilege,” for example, has an interesting dual meaning. We use it to refer to the advantages that are afforded to some people and not to others: male privilege, white privilege. But when it comes to tech, we make that advantage explicit. We actually embed that status into the software’s processes. “Privileges” in tech refer to whoever has the ability to use or control certain features of a piece of software. Administrator privileges. Teacher privileges. (Students rarely have privileges in ed-tech. Food for thought.)
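    This hard-coding of privilege is easy to see in code. Here is a minimal sketch of role-based permissions of the kind many learning platforms use; the role names and actions are hypothetical, not drawn from any particular product:

```python
# Hypothetical sketch of role-based privileges as commonly hard-coded into
# educational software. Note what the "student" role is permitted to do.
PRIVILEGES = {
    "administrator": {"create_course", "delete_course", "grade", "post", "moderate"},
    "teacher": {"grade", "post", "moderate"},
    "student": {"post"},  # students rarely get privileges beyond participating
}

def can(role, action):
    """Return True if the given role is permitted to perform the action."""
    return action in PRIVILEGES.get(role, set())

print(can("teacher", "moderate"))  # True
print(can("student", "moderate"))  # False
```

    The point is not that any one permission table is wrong, but that status relations are written directly into the data structure, long before any student logs in.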

    Or take how discussion forums operate. Discussion forums, now quite common in ed-tech tools—in learning management systems (VLEs as you call them), in MOOCs, for example—often trace their history back to the earliest Internet bulletin boards. But even before then, education technologies like PLATO, a programmed instruction system built by the University of Illinois in the 1970s, offered chat and messaging functionality. (How education technology’s contributions to tech are erased from tech history is, alas, a different talk.)

    One of the new features that many discussion forums boast: the ability to vote topics up or down. Ostensibly this means that “the best” ideas surface to the top—the best ideas, the best questions, the best answers. What it means in practice is often something else entirely. In part this is because the voting power on these sites is concentrated in the hands of the few, the most active, the most engaged. And no surprise, “the few” here is overwhelmingly male. Reddit, which calls itself “the front page of the Internet” and is the model for this sort of voting process, is roughly 84% male. I’m not sure that MOOCs, which have adopted Reddit’s model of voting on comments, can boast a much better ratio of male to female participation.
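    The dynamic of concentrated voting power can be shown with a toy simulation; all the numbers here are invented for illustration, not taken from any real platform:

```python
import random

random.seed(0)

# Toy model: 100 users, but only 10 "power users" cast most of the votes.
# The power users share a taste for topic "A"; everyone else is split evenly.
topics = {"A": 0, "B": 0}

for user in range(100):
    is_power_user = user < 10
    n_votes = 20 if is_power_user else 1  # voting power is concentrated
    preference = "A" if is_power_user else random.choice(["A", "B"])
    topics[preference] += n_votes

# Topic A dominates the "front page" even though 90% of users have no
# particular preference for it.
ranking = sorted(topics, key=topics.get, reverse=True)
print(ranking, topics)
```

    A small, homogeneous minority reliably decides what surfaces as “the best,” which is exactly the structural problem described above.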

    What happens when the most important topics—based on up-voting—are decided by a small group? As D. A. Banks has written about this issue,

    Sites like Reddit will remain structurally incapable of producing non-hegemonic content because the “crowd” is still subject to structural oppression. You might choose to stay within the safe confines of your familiar subreddit, but the site as a whole will never feel like yours. The site promotes mundanity and repetition over experimentation and diversity by presenting the user with a too-accurate picture of what appeals to the entrenched user base. As long as the “wisdom of the crowds” is treated as colorblind and gender neutral, the white guy is always going to be the loudest.

    How much does education technology treat its users similarly? Whose questions surface to the top of discussion forums in the LMS (the VLE), in the MOOC? Who is the loudest? Who is explaining things in MOOC forums?

    Ironically (bitterly ironically, I’d say), many pieces of software today promise “personalization,” but in reality they present us with a very restricted, restrictive set of choices about who we “can be” and how we can interact, both with our own data and content and with other people. Gender, for example, is often a drop-down menu where one can choose either “male” or “female.” Software might ask for a first and last name, something that is complicated if you have multiple family names (as some Spanish-speaking people do) or if your family name comes first (as names in China are ordered). Your name is presented as the software engineers and designers deemed fit: sometimes first name, sometimes title and last name, typically with a profile picture. Changing your username—after marriage or divorce, for example—is often incredibly challenging, if not impossible.
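    These constraints are rarely stated as policy; they live in the data model. A hypothetical signup routine makes the restriction concrete (field names and allowed values here are illustrative, not from any real product):

```python
# A hypothetical user schema of the kind described above: identity becomes
# whatever the form's fields and their allowed values say it can be.
ALLOWED_GENDERS = {"male", "female"}  # a two-item drop-down menu, nothing else

def create_profile(first_name, last_name, gender):
    if gender not in ALLOWED_GENDERS:
        raise ValueError("gender must be one of: " + ", ".join(sorted(ALLOWED_GENDERS)))
    # The template also decides how the name is displayed, everywhere,
    # regardless of the naming conventions of the person's own culture.
    return {"display_name": f"{first_name} {last_name}", "gender": gender}

profile = create_profile("Ada", "Lovelace", "female")
print(profile["display_name"])  # Ada Lovelace
```

    Anyone whose identity doesn’t fit the enum simply cannot complete the form: the constraint is enforced by code, not negotiated with a person.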

    You get to interact with others, similarly, based on the processes that the engineers have determined and designed. On Twitter, for example, you cannot direct message people who do not follow you. All interactions must be 140 characters or fewer.

    This restriction of the presentation and performance of one’s identity online is what “cyborg anthropologist” Amber Case calls the “templated self.” She defines this as “a self or identity that is produced through various participation architectures, the act of producing a virtual or digital representation of self by filling out a user interface with personal information.”

    Case provides some examples of templated selves:

    Facebook and Twitter are examples of the templated self. The shape of a space affects how one can move, what one does and how one interacts with someone else. It also defines how influential and what constraints there are to that identity. A more flexible, but still templated space is WordPress. A hand-built site is much less templated, as one is free to fully create their digital self in any way possible. Those in Second Life play with and modify templated selves into increasingly unique online identities. MySpace pages are templates, but the lack of constraints can lead to spaces that are considered irritating to others.

    As we—all of us, but particularly teachers and students—move to spend more and more time and effort performing our identities online, being forced to use preordained templates constrains us rather than—as we have often been told about the Internet—letting us be anyone or say anything online. On the Internet no one knows you’re a dog unless the signup process demanded you give proof of your breed. This seems particularly important to keep in mind when we think about students’ identity development. How are their identities being templated?

    While Case’s examples mostly point to “social” technologies, education technologies are also “participation architectures.” Similarly, they produce and restrict a digital representation of the learner’s self.

    Who is building the template? Who is engineering the template? Who is there to demand the template be cracked open? What will the template look like if we’ve chased women and people of color out of programming?

    It’s far too simplistic to say “everyone learn to code” is the best response to the questions I’ve raised here. “Change the ratio.” “Fix the leaky pipeline.” Nonetheless, I’m speaking to a group of educators here. I’m probably supposed to say something about what we can do, right, to make ed-tech more just, not just condemn the narratives that lead us down a path that makes ed-tech less so. What can we do to resist all this hard-coding? What can we do to subvert it? What can we do to make technologies that our students—all our students, all of us—can wield? What can we do to make sure that when we say “your assignment involves the Internet,” we haven’t triggered half the class with fears of abuse, harassment, exposure, rape, death? What can we do to make sure that when we ask our students to discuss things online, the very infrastructure of the technology we use doesn’t privilege certain voices in certain ways?

    The answer can’t simply be to tell women to not use their real name online, although as someone who started her career blogging under a pseudonym, I do sometimes miss those days. But if part of the argument for participating in the open Web is that students and educators are building a digital portfolio, are building a professional network, are contributing to scholarship, then we have to really think about whether or not promoting pseudonyms is a sufficient or an equitable solution.

    The answer can’t simply be “don’t blog on the open Web.” Or “keep everything inside the ‘safety’ of the walled garden, the learning management system.” If nothing else, this presumes that what happens inside siloed, online spaces is necessarily “safe.” I know I’ve seen plenty of horrible behavior on closed forums, for example, from professors and students alike. I’ve seen heavy-handed moderation, where marginalized voices find their input deleted. I’ve seen zero moderation, where marginalized voices are mobbed. We recently learned, for example, that Walter Lewin, emeritus professor at MIT and one of the original rockstar professors of YouTube (millions have watched the demonstrations from his physics lectures), has been accused of sexually harassing women in his edX MOOC.

    The answer can’t simply be “just don’t read the comments.” I would say that it might be worth rethinking “comments” on student blogs altogether—or rather, the expectation that students host them, moderate them, respond to them. See, if we give students the opportunity to “own their own domain,” to have their own websites, their own space on the Web, we really shouldn’t require them to let anyone who can create a user account into that space. It’s perfectly acceptable to say to someone who wants to comment on a blog post, “Respond on your own site. Link to me. But I am under no obligation to host your thoughts in my domain.”

    And see, that starts to hint at what I think the answer is to this question about the unpleasantness—by design—of technology. It starts to get at what any sort of “solution” or “alternative” has to look like: it has to be both social and technical. It also needs to recognize that there’s a history that might help us understand what’s done now and why. If, as I’ve argued, education technologies have been shaped by certain ideologies and certain bodies, we should recognize that we aren’t stuck with those. We don’t have to “do” tech as it’s been done in the last few years or decades. We can design differently. We can design around. We can use differently. We can use around.

    One interesting example of this dual approach, combining the social and the technical—outside the realm of ed-tech, I recognize—is the set of tools that Twitter users have built in order to address harassment on the platform. Having grown weary of Twitter’s refusal to address the ways in which it is utilized to harass people (remember, its engineering team is 90% male), a group of feminist developers wrote The Block Bot, an application that lets you block, en masse, a large list of Twitter accounts known for being serial harassers. That list of blocked accounts is updated and maintained collaboratively. Similarly, Block Together lets users subscribe to others’ block lists, and Good Game Autoblocker blocks the “ringleaders” of GamerGate.
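    The core logic of these tools is simple enough to sketch: a block list is maintained collaboratively, and subscribers apply the union of every list they follow. This is a schematic sketch of that idea, not the actual implementation of The Block Bot or Block Together:

```python
# Schematic sketch of a collaboratively maintained, subscribable block list,
# in the spirit of The Block Bot / Block Together (not their real code).
class BlockList:
    def __init__(self):
        self.accounts = set()

    def add(self, account):
        # Maintainers collaboratively add accounts known for serial harassment.
        self.accounts.add(account)

class User:
    def __init__(self):
        self.subscriptions = []

    def subscribe(self, blocklist):
        self.subscriptions.append(blocklist)

    def is_blocked(self, account):
        # An account is blocked if it appears on any subscribed list.
        return any(account in bl.accounts for bl in self.subscriptions)

shared = BlockList()
shared.add("serial_harasser_123")  # hypothetical account name

reader = User()
reader.subscribe(shared)
print(reader.is_blocked("serial_harasser_123"))  # True
```

    The design choice worth noticing is that moderation labor is pooled: one maintainer’s judgment protects every subscriber, which is what makes the approach social as well as technical.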

    That gets, just a bit, at what I think we can do in order to make education technology habitable, sustainable, and healthy. We have to rethink the technology. And not simply as some nostalgia for a “Web we lost,” for example, but as a move forward to a Web we’ve yet to ever see. It isn’t simply, as Isaacson would posit it, rediscovering innovators that have been erased, it’s about rethinking how these erasures happen all throughout technology’s history and continue today—not just in storytelling, but in code.

    Educators should want ed-tech that is inclusive and equitable. Perhaps education needs reminding of this: we don’t have to adopt tools that serve business goals or administrative purposes, particularly when they work to the detriment of scholarship and/or student agency—technologies that surveil and control and restrict, for example, under the guise of a “safety” that gets trotted out from time to time but that has never been about students’ needs at all. We don’t have to accept that technology needs to extract value from us. We don’t have to accept that technology puts us at risk. We don’t have to accept that the architecture, the infrastructure of these tools makes it easy for harassment to occur without any consequences. We can build different and better technologies. And we can build them with and for communities, communities of scholars and communities of learners. We don’t have to be paternalistic as we do so. We don’t have to “protect students from the Internet,” rehashing all the arguments about stranger danger and predators and pedophiles. But we should recognize that if we want education to be online, if we want education to be immersed in technologies, information, and networks, we can’t really throw students out there alone. We need to be braver and more compassionate, and we need to build that into ed-tech. Like The Block Bot or Block Together, this should be a collaborative effort, one that blends our cultural values with the technology we build.

    Because here’s the thing. The answer to all of this—to harassment online, to the male domination of the technology industry, the Silicon Valley domination of ed-tech—is not silence. And the answer is not to let our concerns be explained away. That is after all, as Rebecca Solnit reminds us, one of the goals of mansplaining: to get us to cower, to hesitate, to doubt ourselves and our stories and our needs, to step back, to shut up. Now more than ever, I think we need to be louder and clearer about what we want education technology to do—for us and with us, not simply to us.
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this review first appeared, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.

    Back to the essay

  • What Drives Automation?

    What Drives Automation?

    a review of Nicholas Carr, The Glass Cage: Automation and Us (W.W. Norton, 2014)
    by Mike Bulajewski
    ~

    Debates about digital technology are often presented in terms of stark polar opposites: on one side, cyber-utopians who champion the new and the cutting edge, and on the other, cyber-skeptics who hold on to obsolete technology. The framing is one-dimensional in the general sense that it is superficial, but also in a more precise and mathematical sense that it implicitly treats the development of technology as linear. Relative to the present, there are only two possible positions and two possible directions to move; one can be either for or against, ahead or behind.[1]

    Although often invoked as a prelude to transcending the division and offering a balanced assessment, in describing the dispute in these pro or con terms one has already betrayed one’s orientation, tilting the field against the critical voice by assigning it an untenable position. Criticizing a new technology is misconstrued as a simple defense of the old technology or of no technology, which turns legitimate criticism into mere conservative fustiness, a refusal to adapt and a failure to accept change.

    Few critics of technology match these descriptions, and those who do, like the anarcho-primitivists who claim to be horrified by contemporary technology, nonetheless accede to the basic framework set by technological apologists. The two sides disagree only on the preferred direction of travel, making this brand of criticism more pro-technology than it first appears. One should not forget that the high-tech futurism of Silicon Valley is supplemented by the balancing counterweight of countercultural primitivism, with Burning Man expeditions, technology-free Waldorf schools for children of tech workers, spouses who embrace premodern New Age beliefs, romantic agrarianism, and restorative digital detox retreats featuring yoga and meditation. The diametric opposition between pro- and anti-technology is internal to the technology industry, perhaps a symptom of the repression of genuine debate about the merits of its products.

    ***

    Nicholas Carr’s most recent book, The Glass Cage: Automation and Us, is a critique of the use of automation and a warning of its human consequences, but to conclude, as some reviewers have, that he is against automation or against technology as such is to fall prey to this one-dimensional fallacy.[2]

    The book considers the use of automation in areas like medicine, architecture, finance, manufacturing and law, but it begins with an example that’s closer to home for most of us: driving a car. Transportation and wayfinding are minor themes throughout the book, and with Google and large automobile manufacturers promising to put self-driving cars on the street within a decade, the impact of automation in this area may soon be felt in our daily lives like never before. Early in the book, we are introduced to problems that human factors engineers have discovered in airline autopilot systems, problems that may be forewarnings of a future of unchecked automation in transportation.

    Carr discusses automation bias—the tendency for operators to assume the system is correct and external signals that contradict it are wrong—and the closely related problem of automation complacency, which occurs when operators assume the system is infallible and so abandon their supervisory role. These problems have been linked to major air disasters and are behind less-catastrophic events like oblivious drivers blindly following their navigation systems into nearby lakes or down flights of stairs.

    The chapter dedicated to deskilling is certain to raise the ire of skeptical readers because it begins with an account of the negative impact of GPS technology on Inuit hunters who live in the remote northern reaches of Canada. As GPS devices proliferated, hunters lost what a tribal elder describes as “the wisdom and knowledge of the Inuit”: premodern wayfinding methods that rely on natural phenomena like wind, stars, tides and snowdrifts to navigate. Inuit wayfinding skills are truly impressive. The anthropologist Claudio Aporta reports traveling with a hunter across twenty square kilometers of flat, featureless land as he located seven fox traps that he had never seen before, set by his uncle twenty-five years prior. These talents have been eroded as Inuit hunters have adopted GPS devices that seem to do the job equally well, but have the unexpected side effect of increasing injuries and deaths as hunters succumb to equipment malfunctions and the twin perils of automation complacency and bias.

    Laboring under the misconceptions of the one-dimensional fallacy, it would be natural to take this as a smoking gun of Carr’s alleged anti-technology perspective and privileging of the premodern, but the closing sentences of the chapter point us away from that conclusion:

    We ignore the ways that software programs and automated systems might be reconfigured so as not to weaken our grasp on the world but to strengthen it. For, as human factors researchers and other experts on automation have found, there are ways to break the glass cage without losing the many benefits computers grant us. (151)

    These words segue into the following chapter, where Carr identifies the dominant design philosophy that leads automation technologies to produce, inadvertently, the problems he identified earlier: technology-centered automation. This approach to design is distrustful of humans, perhaps even misanthropic. It views us as weak, inefficient, unreliable and error-prone, and seeks to minimize our involvement in the work to be done. It institutes a division of labor between human and machine that gives the bulk of the work over to the machine, seeking human input only in anomalous situations. This philosophy is behind modern autopilot systems that hand control to human pilots for only a few minutes in a flight.

    The fundamental argument of the book is that this design philosophy can lead to undesirable consequences. Carr seeks an alternative he calls human-centered automation, an approach that ensures the human operator remains engaged and alert. Autopilot systems designed with this philosophy might return manual control to the pilots at irregular intervals to ensure they remain vigilant and practice their flying skills. They could provide tactile feedback of their operations so that pilots are involved in a visceral way rather than passively monitoring screens. Decision support systems like those used in healthcare could take the secondary role of reviewing and critiquing a decision made by a doctor, rather than the other way around.
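    The contrast between the two design philosophies can be caricatured in a few lines of code. This toy sketch (the flight lengths, windows, and hand-back probability are all invented, not taken from any real autopilot) shows how a human-centered system might return control at irregular intervals rather than only at takeoff and landing:

```python
import random

random.seed(42)

def technology_centered(minutes):
    # Human flies only in the first and last three minutes; the machine
    # holds control for the entire cruise phase.
    return ["human" if t < 3 or t >= minutes - 3 else "machine"
            for t in range(minutes)]

def human_centered(minutes, handback_prob=0.2):
    # Control also returns to the pilot at irregular intervals mid-flight,
    # keeping the operator engaged and skills in practice.
    return ["human" if t < 3 or t >= minutes - 3 or random.random() < handback_prob
            else "machine" for t in range(minutes)]

flight = 60
print(technology_centered(flight).count("human"))  # 6 minutes of manual flying
print(human_centered(flight).count("human"))       # almost always more
```

    The point of the sketch is only the shape of the policy: in one regime the human is a spare part, in the other the system deliberately keeps the human in the loop.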

    The Glass Cage calls for a fundamental shift in how we understand error. Under the current regime, an error is an inefficiency or an inconvenience, to be avoided at all costs. As defined by Carr, a human-centered approach to design treats error differently, viewing it as an opportunity for learning. He illustrates this with a personal experience of repeatedly failing a difficult mission in the video game Red Dead Redemption, and points to the satisfaction of finally winning a difficult game as an example of what is lost when technology is designed to be too easy. He offers video games as a model for the kinds of technologies he would like to see: tools that engage us in difficult challenges, that encourage us to develop expertise and to experience flow states.

    But Carr has an idiosyncratic definition of human-centered design, which becomes apparent when he counterposes his position against that of the prominent design consultant Peter Merholz. Echoing premises almost universally adopted by designers, Merholz calls for simple, frictionless interfaces and devices that don’t require a great deal of skill, memorization or effort to operate. Carr objects that this eliminates learning, skill building and mental engagement—perhaps a valid criticism, but it’s strange to suggest that this reflects a misanthropic, technology-centered approach.

    A frequently invoked maxim of human-centered design is that technology should adapt to people, rather than people adapting to technology. In practice, the primary consideration is helping the user achieve his or her goal as efficiently and effectively as possible, removing unnecessary obstacles and delays that stand in the way. Carr argues for the value of challenges, difficulties and demands placed on users to learn and hone skills, all of which fall under the prohibited category of people adapting to technology.

    In his example of playing Red Dead Redemption, Carr prizes the repeated failure and frustration before finally succeeding at the game. From the lens of human-centered design, that kind of experience is seen as a very serious problem that should be eliminated quickly, which is probably why this kind of design is rarely employed at game studios. In fact, it doesn’t really make sense to think of a game player as having a goal, at least not from the traditional human-centered standpoint. The driver of a car has a goal: to get from point A to point B; a Facebook user wants to share pictures with friends; the user of a word processor wants to write a document; and so on. As designers, we want to make these tasks easy, efficient and frictionless. The most obvious way of framing game play is to say that the player’s goal is to complete the game. We would then proceed to remove all obstacles, frustrations, challenges and opportunities for error that stand in the way so that they may accomplish this goal more efficiently, and then there would be nothing left for them to do. We would have ruined the game.

    This is not necessarily the result of a misanthropic preference for technology over humanity, though it may be. It is also the likely outcome of a perfectly sincere and humanistic belief that we shouldn’t inconvenience the user with difficulties that stand in the way of their goal. As human factors researcher David Woods puts it, “The road to technology-centered systems is paved with human-centered intentions,”[3] a phrasing which suggests that these two philosophies aren’t quite as distinct as Carr would have us believe.

    Carr’s vision of human-centered design differs markedly from contemporary design practice, which stresses convenience, simplicity, efficiency for the user and ease of use. In calling for less simplicity and convenience, he is in effect critical of really existing human-centeredness, and that troubles any reading of The Glass Cage that views it as a book about restoring our humanity in a world driven mad by machines.

    It might be better described as a book about restoring one conception of humanity in a world driven mad by another. It is possible to argue that the difference between the two appears in psychoanalytic theory as the difference between drive and desire. The user engages with a technology in order to achieve a goal because they perceive themselves as lacking something. Through the use of this tool, they believe they can regain it and fill in this lack. It follows that designers ought to help the user achieve their goal—to reach their desire—as quickly and efficiently as possible because this will satisfy them and make them happy.

    But the insight of psychoanalysis is that lack is ontological and irreducible: it cannot be filled in any permanent way, because any concrete lack we experience is in fact metonymic for a constitutive lack of being. As a result, as desiring subjects we are caught in an endless loop of seeking out that object of desire, feeling disappointed when we find it because it didn’t fulfill our fantasies, and then finding a new object to chase. The alternative is to shift from desire to drive, turning this failure into a triumph. Slavoj Žižek describes drive as follows: “the very failure to reach its goal, the repetition of this failure, the endless circulation around the object, generates a satisfaction of its own.”[4]

    This satisfaction is perhaps what Carr aims at when he celebrates the frustrations and challenges of video games and of work in general. That video games can’t be made more efficient without ruining them indicates that what players really want is for their goal to be thwarted, evoking the psychoanalytic maxim that summarizes the difference between desire and drive: from the missing/lost object, to loss itself as an object. This point is by no means tangential. Early on, Carr introduces the concept of miswanting, defined as the tendency to desire what we don’t really want and won’t make us happy—in this case, leisure and ease over work and challenge. Psychoanalysis holds that all human wanting (within the register of desire) is miswanting. Through fantasy, we imagine an illusory fullness or completeness of which actual experience always falls short.[5]

    Carr’s revaluation of challenge, effort and, ultimately, dissatisfaction cannot represent a correction of the error of miswanting, a rediscovery of the true source of pleasure and happiness in work. Instead, it radicalizes the error: we should learn to derive a kind of satisfaction from our failure to enjoy. Or, in the final chapter, as Carr says of the farmer in Robert Frost’s poem “Mowing,” who is hard at work and yet far from the demands of productivity: “He’s not seeking some greater truth beyond the work. The work is the truth.”

    ***

    Nicholas Carr has a track record of provoking designers to rethink their assumptions. With The Shallows, along with other authors making related arguments, he influenced software developers to create a new class of tools that cut off the internet, eliminate notifications or block social media web sites to help us concentrate. Starting with OS X Lion in 2011, Apple began offering a full screen mode that hides distracting interface elements and background windows from inactive applications.

    What transformative effects could The Glass Cage have on the way software is designed? The book certainly offers compelling reasons to question whether ease of use should always be paramount. Advocates for simplicity are rarely challenged, but they may now find themselves facing unexpected objections. Software could become more challenging and difficult to use—not in the sense of a recalcitrant WiFi router that emits incomprehensible error codes, but more like a game. Designers might draw inspiration from video games, perhaps looking to classics like the first level of Super Mario Brothers, a masterpiece of level design that teaches the fundamental rules of the game without ever requiring the player to read the manual or step through a tutorial.

    Everywhere that automation now reigns, new possibilities announce themselves. A spell checker might stop to teach spelling rules, or make a game of letting the user take a shot at correcting the mistakes it has detected. What if there were a GPS navigation device that enhanced our sense of spatial awareness rather than eroding it, one that focused our attention on the road rather than letting us tune out? Could we build an app that helps drivers keep their skills sharp by challenging them to adopt safer and more fuel-efficient driving techniques?

    Carr points out that the preference for easy-to-use technologies that reduce users’ engagement is partly a consequence of economic interests and cost reduction policies that profit from the deskilling and reduction of the workforce, and these aren’t dislodged simply by pressing for new design philosophies. But to his credit, Carr has written two best-selling books aimed at the general-interest reader on the fairly obscure topic of human-computer interaction. User experience designers working in the technology industry often face an uphill battle in trying to build human-centered products (however that is defined). When these matters attract public attention and debate, it makes their job a little easier.

    _____

    Mike Bulajewski (@mrteacup) is a user experience designer with a Master’s degree from University of Washington’s Human Centered Design and Engineering program. He writes about technology, psychoanalysis, philosophy, design, ideology & Slavoj Žižek at MrTeacup.org. He has previously written about the Spike Jonze film Her for The b2 Review Digital Studies section.

    Back to the essay

    _____

    [1] Differences between individual technologies are ignored and replaced by the monolithic master category of Technology. Jonah Lehrer’s review of Nicholas Carr’s 2010 book The Shallows in the New York Times exemplifies such thinking. Lehrer finds contradictory evidence against Carr’s argument that the internet is weakening our mental faculties in scientific studies that attribute cognitive improvements to playing video games, a non sequitur which gains meaning only by subsuming these two very different technologies under a single general category of Technology. Evgeny Morozov is one of the sharpest critics of this tendency. Here one is reminded of his retort in his article “Ghosts in the Machine” (2013): “That dentistry has been beneficial to civilization tells us very little about the wonders of data-mining.”

    [2] There are a range of possible causes for this constrictive linear geometry: a tendency to see a progressive narrative of history; a consumerist notion of agency which only allows shoppers to either upgrade or stick with what they have; or the oft-cited binary logic of digital technology. One may speculate about the influence of the popular technology marketing book by Geoffrey A. Moore, Crossing the Chasm (2014) whose titular chasm is the gap between the elite group of innovators and early adopters—the avant-garde—and the recalcitrant masses bringing up the rear who must be persuaded to sign on to their vision.

    [3] David D. Woods and David Tinapple, “W3: Watching Human Factors Watch People at Work,” Proceedings of the 43rd Annual Meeting of the Human Factors and Ergonomics Society (1999).

    [4] Slavoj Žižek, The Parallax View (2006), 63.

    [5] The cultural and political implications of this shift are explored at length in Todd McGowan’s two books The End of Dissatisfaction: Jacques Lacan and the Emerging Society of Enjoyment (2003) and Enjoying What We Don’t Have: The Political Project of Psychoanalysis (2013).

  • The Eversion of the Digital Humanities

    The Eversion of the Digital Humanities

    by Brian Lennon

    on The Emergence of the Digital Humanities by Steven E. Jones

    1

    Steven E. Jones begins his Introduction to The Emergence of the Digital Humanities (Routledge, 2014) with an anecdote concerning a speaking engagement at the Illinois Institute of Technology in Chicago. “[M]y hosts from the Humanities department,” Jones tells us,

    had also arranged for me to drop in to see the fabrication and rapid-prototyping lab, the Idea Shop at the University Technology Park. In one empty room we looked into, with schematic drawings on the walls, a large tabletop machine jumped to life and began whirring, as an arm with a router moved into position. A minute later, a student emerged from an adjacent room and adjusted something on the keyboard and monitor attached by an extension arm to the frame for the router, then examined an intricately milled block of wood on the table. Next door, someone was demonstrating finely machined parts in various materials, but mostly plastic, wheels within bearings, for example, hot off the 3D printer….

    What exactly, again, was my interest as a humanist in taking this tour, one of my hosts politely asked?1

    It is left almost entirely to more or less clear implication, here, that Jones’s humanities department hosts had arranged the expedition at his request, and mainly or even only to oblige a visitor’s unusual curiosity, which we are encouraged to believe his hosts (if “politely”) found mystifying. Any reader of this book must ask herself, first, if she believes this can really have occurred as reported: and if the answer to that question is yes, if such a genuinely unlikely and unusual scenario — the presumably full-time, salaried employees of an Institute of Technology left baffled by a visitor’s remarkable curiosity about their employer’s very raison d’être — warrants any generalization at all. For that is how Jones proceeds: by generalization, first of all from a strained and improbably dramatic attempt at defamiliarization, in the apparent confidence that this anecdote illuminating the spirit of the digital humanities will charm — whom, exactly?

    It must be said that Jones’s history of “digital humanities” is refreshingly direct and initially, at least, free of obfuscation, linking the emergence of what it denotes to events in roughly the decade preceding the book’s publication, though his reading of those events is tendentious. It was the “chastened” retrenchment after the dot-com bubble in 2000, Jones suggests (rather, just for example, than the bubble’s continued inflation by other means) that produced the modesty of companies like our beloved Facebook and Twitter, along with their modest social networking platform-products, as well as the profound modesty of Google Inc. initiatives like Google Books (“a development of particular interest to humanists,” we are told2) and Google Maps. Jones is clearer-headed when it comes to the disciplinary history of “digital humanities” as a rebaptism of humanities computing and thus — though he doesn’t put it this way — a catachrestic asseveration of traditional (imperial-nationalist) philology like its predecessor:

    It’s my premise that what sets DH apart from other forms of media studies, say, or other approaches to the cultural theory of computing, ultimately comes through its roots in (often text-based) humanities computing, which always had a kind of mixed-reality focus on physical artifacts and archives.3

    Jones is also clear-headed on the usage history of “digital humanities” as a phrase in the English language, linking it to moments of consolidation marked by Blackwell’s Companion to Digital Humanities, the establishment of the National Endowment for the Humanities Office for the Digital Humanities, and higher-education journalism covering the annual Modern Language Association of America conventions. It is perhaps this sensitivity to “digital humanities” as a phrase whose roots lie not in original scholarship or cultural criticism itself (as was still the case with “deconstruction” or “postmodernism,” even at their most shopworn) but in the dependent, even parasitic domains of reference publishing, grant-making, and journalism that leads Jones to declare “digital humanities” a “fork” of humanities computing, rather than a Kuhnian paradigm shift marking otherwise insoluble structural conflict in an intellectual discipline.

    At least at first. Having suggested it, Jones then discards the metaphor drawn from the tree structures of software version control, turning to “another set of metaphors” describing the digital humanities as having emerged not “out of the primordial soup” but “into the spotlight” (Jones, 5). We are left to guess at the provenance of this second metaphor, but its purpose is clear: to construe the digital humanities, both phenomenally and phenomenologically, as the product of a “shift in focus, driven […] by a new set of contexts, generating attention to a range of new activities” (5).

    Change; shift; new, new, new. Not a branch or a fork, not even a trunk: we’re now in the ecoverse of history and historical time, in its collision with the present. The appearance and circulation of the English-language phrase “digital humanities” can be documented — that is one of the things that professors of English like Jones do especially well, when they care to. But “changes in the culture,” much more broadly, within only the last ten years or so? No scholar in any discipline is particularly well trained, well positioned, or even well suited to diagnosing those; and scholars in English studies won’t be at the top of anyone’s list. Indeed, Jones very quickly appeals to “author William Gibson” for help, settling on the emergence of the digital humanities as a response to what Gibson called “the eversion of cyberspace,” in its ostensibly post-panopticist colonization of the physical world.4 It makes for a rather inarticulate and self-deflating statement of argument, in which on its first appearance eversion, ambiguously, appears to denote the response as much as its condition or object:

    My thesis is simple: I think that the cultural response to changes in technology, the eversion, provides an essential context for understanding the emergence of DH as a new field of study in the new millennium.5

    Jones offers weak support for the grandiose claim that “we can roughly date the watershed moment when the preponderant collective perception changed to 2004–2008” (21). Second Life “peaked,” we are told, while World of Warcraft “was taking off”; Nintendo introduced the Wii; then Facebook “came into its own,” and was joined by Twitter and Foursquare, then Apple’s iPhone. Even then (and setting aside the question of whether such benchmarking is acceptable evidence), for the most part Jones’s argument, such as it is, is that something is happening because we are talking about something happening.

    But who are we? Jones’s is the typical deference of the scholar to the creative artist, unwilling to challenge the latter’s utter dependence on meme engineering, at least where someone like Gibson is concerned; and Jones’s subsequent turn to the work of a scholar like N. Katherine Hayles on the history of cybernetics comes too late to amend the impression that the order of things here is marked first by gadgets, memes, and conversations about gadgets and memes, and only subsequently by ideas and arguments about ideas. The generally unflattering company among whom Hayles is placed (Clay Shirky, Nathan Jurgenson) does little to move us out of the shallows, and Jones’s profoundly limited range of literary reference, even within a profoundly narrowed frame — it’s Gibson, Gibson, Gibson all the time, with the usual cameos by Bruce Sterling and Neal Stephenson — doesn’t help either.

    Jones does have one problem with the digital humanities: it ignores games. “My own interest in games met with resistance from some anonymous peer reviewers for the program for the DH 2013 conference, for example,” he tells us (33). “[T]he digital humanities, at least in some quarters, has been somewhat slow to embrace the study of games” (59). “The digital humanities could do worse than look to games” (36). And so on: there is genuine resentment here.

    But nobody wants to give a hater a slice of the pie, and a Roman peace mandates that such resentment be sublated if it is to be, as we say, taken seriously. And so in a magical resolution of that tension, the digital humanities turns out to be constituted by what it accidentally ignores or actively rejects, in this case — a solution that sweeps antagonism under the rug as we do in any other proper family. “[C]omputer-based video games embody procedures and structures that speak to the fundamental concerns of the digital humanities” (33). “Contemporary video games offer vital examples of digital humanities in practice” (59). If gaming “sounds like what I’ve been describing as the agenda of the digital humanities, it’s no accident” (144).

    Some will applaud Jones’s niceness on this count. It may strike others as desperately friendly, a lingering under a big tent as provisional as any other tent, someday to be replaced by a building, if not by nothing. Few of us will deny recognition to Second Life, World of Warcraft, Wii, Facebook, Twitter, etc. as cultural presences, at least for now. But Jones’s book is also marked by slighter and less sensibly chosen benchmarks, less sensibly chosen because Jones’s treatment of them, in a book whose ambition is to preach to the choir, simply imputes their cultural presence. Such brute force argument drives the pathos that Jones surely feels, as a scholar — in the recognition that among modern institutions, it is only scholarship and the law that preserve any memory at all — into a kind of melancholic unconscious, from whence his objects return to embarrass him. “[A]s I write this,” we read, “QR codes show no signs yet of fading away” (41). Quod erat demonstrandum.

    And it is just there, in such a melancholic unconscious, that the triumphalism of the book’s title, and the “emergence of the digital humanities” that it purports to mark, claim, or force into recognition, straightforwardly gives itself away. For the digital humanities will pass away, and rather than being absorbed into the current order of things, as digital humanities enthusiasts like to believe happened to “high theory” (it didn’t happen), the digital humanities seems more likely, at this point, to end as a blank anachronism, overwritten by the next conjuncture in line with its own critical mass of prognostications.

    2

    To be sure, who could deny the fact of significant “changes in the culture” since 2000, in the United States at least, and at regular intervals: 2001, 2008, 2013…? Warfare — military in character, but when that won’t do, economic; of any interval, but especially when prolonged and deliberately open-ended; of any intensity, but especially when flagrantly extrajudicial and opportunistically, indeed sadistically asymmetrical — will do that to you. No one who sets out to historicize the historical present can afford to ignore the facts of present history, at the very least — but the fact is that Jones finds such facts unworthy of comment, and in that sense, for all its pretense to worldliness, The Emergence of the Digital Humanities is an entirely typical product of the so-called ivory tower, wherein arcane and plain speech alike are crafted to euphemize and thus redirect and defuse the conflicts of the university with other social institutions, especially those other institutions who command the university to do this or do that. To take the ambiguity of Jones’s thesis statement (as quoted above) at its word: what if the cultural response that Jones asks us to imagine, here, is indeed and itself the “eversion” of the digital humanities, in one of the metaphorical senses he doesn’t quite consider: an autotomy or self-amputation that, as McLuhan so enjoyed suggesting in so many different ways, serves to deflect the fact of the world as a whole?

    There are few moments of outright ignorance in The Emergence of the Digital Humanities — how could there be, in the security of such a narrow channel?6 Still, pace Jones’s basic assumption here (it is not quite an argument), we might understand the emergence of the digital humanities as the emergence of a conversation that is not about something — cultural change, etc. — as much as it is an attempt to avoid conversing about something: to avoid discussing such cultural change in its most salient and obvious flesh-and-concrete manifestations. “DH is, of course, a socially constructed phenomenon,” Jones tells us (7) — yet “the social,” here, is limited to what Jones himself selects, and selectively indeed. “This is not a question of technological determinism,” he insists. “It’s a matter of recognizing that DH emerged, not in isolation, but as part of larger changes in the culture at large and that culture’s technological infrastructure” (8). Yet the largeness of those larger changes is smaller than any truly reasonable reader, reading any history of the past decade, might have reason to expect. How pleasant that such historical change was “intertwined with culture, creativity, and commerce” (8) — not brutality, bootlicking, and bank fraud. Not even the modest and rather opportunistic gloom of Gibson’s 2010 New York Times op-ed entitled “Google’s Earth” finds its way into Jones’s discourse, despite the extended treatment that Gibson’s “eversion” gets here.

    From our most ostensibly traditional scholarly colleagues, toiling away in their genuine and genuinely book-dusty modesty, we don’t expect much respect for the present moment (which is why they often surprise us). But The Emergence of the Digital Humanities is, at least in ambition, a book about cultural change over the last decade. And such historiographic elision is substantive — enough so to warrant impatient response. While one might not want to say that nothing good can have emerged from the cultural change of the period in question, it would be infantile to deny that conditions have been unpropitious in the extreme, possibly as unpropitious as they have ever been, in U.S. postwar history — and that claims for the value of what emerges into institutionality and institutionalization, under such conditions, deserve extra care and, indeed, defense in advance, if one wants not to invite a reasonably caustic skepticism.

    When Jones does engage in such defense, it is weakly argued. To construe the emergence of the digital humanities as non-meaninglessly concurrent with the emergence of yet another wave of mass educational automation (in the MOOC hype that crested in 2013), for example, is wrong not because Jones can demonstrate that their concurrence is the concurrence of two entirely segregated genealogies — one rooted in Silicon Valley ideology and product marketing, say, and one utterly and completely uncaused and untouched by it — but because to observe their concurrence is “particularly galling” to many self-identified DH practitioners (11). Well, excuse me for galling you! “DH practitioners I know,” Jones informs us, “are well aware of [the] complications and complicities” of emergence in an age of precarious labor, “and they’re often busy answering, complicating, and resisting such opportunistic and simplistic views” (10). Argumentative non sequitur aside, that sounds like a lot of work undertaken in self-defense — more than anyone really ought to have to do, if they’re near to the right side of history. Finally, “those outside DH,” Jones opines in an attempt at counter-critique, “often underestimate the theoretical sophistication of many in computing,” who “know better than many of their humanist critics that their science is provisional and contingent” (10): a statement that will only earn Jones super-demerits from those of such humanist critics — they are more numerous than the likes of Jones ever seem to suspect — who came to the humanities with scientific and/or technical aptitudes, sometimes with extensive educational and/or professional training and experience, and whose “sometimes world-weary and condescending skepticism” (10) is sometimes very well-informed and well-justified indeed, and certain to outlive Jones’s winded jabs at it.

    Jones is especially clumsy in confronting the charge that the digital humanities is marked by a forgetting or evasion of the commitment to cultural criticism foregrounded by other, older and now explicitly competing formations, like so-called new media studies. Citing the suggestion by “media scholar Nick Montfort” that “work in the digital humanities is usually considered to be the digitization and analysis of pre-digital cultural artifacts, not the investigation of contemporary computational media,” Jones remarks that “Montfort’s own work […] seems to me to belie the distinction,”7 as if Montfort — or anyone making such a statement — were simply deluded about his own work, or about his experience of a social economy of intellectual attention under identifiably specific social and historical conditions, or else merely expressing pain at being excluded from a social space to which he desired admission, rather than objecting on principle to a secessionist act of imagination.8

    3

    Jones tells us that he doesn’t “mean to gloss over the uneven distribution of [network] technologies around the world, or the serious social and political problems associated with manufacturing and discarding the devices and maintaining the server farms and cell towers on which the network depends” — but he goes ahead and does it anyway, and without apology or evident regret. “[I]t’s not my topic in this book,” we are told, “and I’ve deliberately restricted my focus to the already-networked world” (3). The message is clear: this is a book for readers who will accept such circumscription, in what they read and contemplate. Perhaps this is what marks the emergence of the digital humanities, in the re-emergence of license for restrictive intellectual ambition and a generally restrictive purview: a bracketing of the world that was increasingly discredited, and discredited with increasing ferocity, just by the way, in the academic humanities in the course of the three decades preceding the first Silicon Valley bubble. Jones suggests that “it can be too easy to assume a qualitative hierarchical difference in the impact of networked technology, too easy to extend the deeper biases of privilege into binary theories of the global ‘digital divide’” (4), and one wonders what authority to grant to such a pronouncement when articulated by someone who admits he is not interested, at least in this book, in thinking about how an — how any — other half lives. It’s the latter, not the former, that is the easy choice here. (Against a single, entirely inconsequential squib in Computer Business Review entitled “Report: Global Digital Divide Getting Worse,” an almost obnoxiously perfunctory footnote pits “a United Nations Telecoms Agency report” from 2012. This is not scholarship.)

    Thus it is that, read closely, the demand for finitude in the one capacity in which we are non-mortal — in thought and intellectual ambition — and the more or less cheerful imagination of an implied reader satisfied by such finitude, become passive microaggressions aimed at another mode of the production of knowledge, whose expansive focus on a theoretical totality of social antagonism (what Jones calls “hierarchical difference”) and justice (what he calls “binary theories”) makes the author of The Emergence of the Digital Humanities uncomfortable, at least on its pages.

    That’s fine, of course. No: no, it’s not. What I mean to say is that it’s unfair to write as if the author of The Emergence of the Digital Humanities alone bears responsibility for this particular, certainly overdetermined state of affairs. He doesn’t — how could he? But he’s getting no help, either, from most of those who will be more or less pleased by the title of his book, and by its argument, such as it is: because they want to believe they have “emerged” along with it, and with that tension resolved, its discomforts relieved. Jones’s book doesn’t seriously challenge that desire, its (few) hedges and provisos notwithstanding. If that desire is more anxious now than ever, as digital humanities enthusiasts find themselves scrutinized from all sides, it is with good reason.
    _____

    Brian Lennon is Associate Professor of English and Comparative Literature at Pennsylvania State University and the author of In Babel’s Shadow: Multilingual Literatures, Monolingual States (University of Minnesota Press, 2010).
    _____

    notes:
    1. Jones, 1.
    Back to the essay

    2. Jones, 4. “Interest” is presumed to be affirmative, here, marking one elision of the range of humanistic critical and scholarly attitudes toward Google generally and the Google Books project in particular. And of the unequivocally less affirmative “interest” of creative writers as represented by the Authors Guild, just for example, Jones has nothing to say: another elision.
    Back to the essay

    3. Jones, 13.
    Back to the essay

    4. See Gibson.
    Back to the essay

    5. Jones, 5.
    Back to the essay

    6. As eager as any other digital humanities enthusiast to accept Franco Moretti’s legitimation of DH, but apparently incurious about the intellectual formation, career and body of work that led such a big fish to such a small pond, Jones opines that Moretti’s “call for a distant reading” stands “opposed to the close reading that has been central to literary studies since the late nineteenth century” (Jones, 62). “Late nineteenth century” when exactly, and where (and how, and why)? one wonders. But to judge by what Jones sees fit to say by way of explanation — that is, nothing at all — this is mere hearsay.
    Back to the essay

    7. Jones, 5. See also Montfort.
    Back to the essay

    8. As further evidence that Montfort’s statement is a mischaracterization or expresses a misunderstanding, Jones suggests the fact that “[t]he Electronic Literature Organization itself, an important center of gravity for the study of computational media in which Montfort has been instrumental, was for a time housed at the Maryland Institute for Technology in the Humanities (MITH), a preeminent DH center where Matthew Kirschenbaum served as faculty advisor” (Jones, 5–6). The non sequiturs continue: “digital humanities” includes the study of computing and media because “self-identified practitioners doing DH” study computing and media (Jones, 6); the study of computing and media is also “digital humanities” because the study of computing and digital media might be performed at institutions like MITH or George Mason University’s Roy Rosenzweig Center for History and New Media, which are “digital humanities centers” (although the phrase “digital humanities” appears nowhere in their names); “digital humanities” also adequately describes work in “media archaeology” or “media history,” because such work has “continued to influence DH” (Jones, 6); new media studies is a component of the digital humanities because some scholars suggest it is so, and others cannot be heard to object, at least after one has placed one’s fingers in one’s ears; and so on.
    Back to the essay

    (feature image: “Bandeau – Manifeste des Digital Humanities,” uncredited; originally posted on flickr.)