
  • Michelle Moravec — The Endless Night of Wikipedia’s Notable Woman Problem

    Michelle Moravec

    Millions of the sex whose names were never known beyond the circles of their own home influences have been as worthy of commendation as those here commemorated. Stars are never seen either through the dense cloud or bright sunshine; but when daylight is withdrawn from a clear sky they tremble forth. (Hale 1853, ix)

    As this poetic quote from Sarah Josepha Hale, the nineteenth-century author and influential editor, reminds us, context is everything. The challenge, if we wish to write women back into history via Wikipedia, is to figure out how to shift the frame of reference so that our stars can shine, since the question of who precisely is “worthy of commemoration” so often seems to exclude women. This essay takes on one of the “tests” used to determine whether content is worthy of inclusion in Wikipedia, notability, to explore how this purportedly neutral concept works against efforts to create entries about female historical figures.

    According to Wikipedia’s notability guideline, a subject is considered notable if it “has received significant coverage in reliable sources that are independent of the subject” (“Wikipedia:Notability” 2017). To a historian of women, the gender biases implicit in these criteria are immediately recognizable; for most of written history, women were de facto considered unworthy of consideration (Smith 2000). Unsurprisingly, studies have pointed to varying degrees of bias in Wikipedia’s coverage of female figures compared to male figures. One study of the Encyclopedia Britannica and Wikipedia concluded:

    Overall, we find evidence of gender bias in Wikipedia coverage of biographies. While Wikipedia’s massive reach in coverage means one is more likely to find a biography of a woman there than in Britannica, evidence of gender bias surfaces from a deeper analysis of those articles each reference work misses. (Reagle and Rhue 2011)

    Five years later, another study found that this bias persisted: women constituted only 15.5 percent of the biographical entries on the English Wikipedia, and for women born prior to the twentieth century, the problem of exclusion was wildly exacerbated by “sourcing and notability issues” (“Gender Bias on Wikipedia” 2017).

    One potential source for buttressing the case of notable women has been identified by literary scholar Alison Booth, who found more than 900 volumes of prosopography published during what might be termed the heyday of the genre, 1830-1940, when the rise of the middle class and increased literacy combined with the relatively cheap production of books to make such volumes both practicable and popular (Booth 2004). Booth also points out that, lest we consign the genre to the realm of mere curiosity, the volumes were “indispensable aids in the formation of nationhood” (Booth 2004, 3).

    To reveal the historical contingency of the purportedly neutral criteria of notability, I utilized longitudinal data compiled by Booth, which shows that notability has never been the stable concept Wikipedia’s standards take it to be. Since notability alone cannot explain which women make it into Wikipedia, I then turn to a methodology first put forth by historian Mary Ritter Beard in her critique of the Encyclopedia Britannica to identify missing entries (Beard 1977). Utilizing Notable American Women as a reference corpus, I calculated the rate of inclusion of individual women from those volumes in Wikipedia (Boyer and James 1971). In this essay I extend that analysis to consider the difference between notability and notoriety: one might be well known while remaining relatively unimportant from a historical perspective. Wikipedia collapses such distinctions, taking a body of writing about a historical subject as prima facie evidence of notability.

    While inclusion in Notable American Women does not necessarily translate into presence in Wikipedia, looking at the categories of women that have higher rates of inclusion offers insights into how female historical figures do succeed in Wikipedia. My analysis suggests that the criterion of notability restricts the women who succeed in obtaining pages in Wikipedia to those who mirror the “Great Man Theory” of history (Mattern 2015) or are “notorious” (Lerner 1975).

    Alison Booth has compiled a list of the most frequently mentioned women in a subset of female prosopographical volumes and tracked their frequency over time (2004, 394-396). She made this data available on the web, allowing for the creation of Figure 1, which focuses on the inclusion of US historical figures in volumes published from 1850 to 1930.

    Figure 1. US women by publication date of books that included them (image source: author)

    This chart clarifies what historians already know: notability is historically specific and contingent. For example, Mary Washington, mother of the first president, is notable in the nineteenth century but not in the twentieth. She drops off because over time, motherhood alone ceases to be seen as a significant contribution to history.  Wives of presidents remain quite popular, perhaps because they were at times understood as playing an important political role, so Mary Washington’s daughter-in-law Martha still appears in some volumes in the latter period. A similar pattern may be observed for foreign missionary Anne Hasseltine Judson in the twentieth century.  The novelty of female foreign missionaries like Judson faded as more women entered the field.  Other figures, like Laura Bridgman, “the first deaf-blind American child to gain a significant education in the English language,” were supplanted by later figures in what might be described as the “one and done” syndrome, where only a single spot is allotted for a specific kind of notable woman (“Laura Bridgman” 2017). In this case, Bridgman likely fell out of favor as Helen Keller’s fame rose.

    Although their notability changed over time, all the women depicted in Figure 1 have Wikipedia pages; this is unsurprising, as they were among the most mentioned women in the sort of volumes Wikipedia considers “reliable sources.” But what about more contemporary examples? Does inclusion in a relatively recent work that declares women notable mean that those women would meet Wikipedia’s notability standards? To answer this question, I relied on a methodology of calculating missing biographies in Wikipedia: using a reference corpus to identify women who might reasonably be expected to appear in Wikipedia, and calculating the percentage that do not. Working with the digitized copy of Notable American Women in the Women and Social Movements database, I compiled a missing-biographies quotient for individuals in selected sections of the “classified list of biographies” that appears at the end of the third volume of Notable American Women. The classifications with no missing entries offer some insights into how women do succeed in Wikipedia (Table 1).

    Classification | % missing
    Astronomers | 0
    Biologists | 0
    Chemists & Physicists | 0
    Heroines | 0
    Illustrators | 0
    Indian Captives | 0
    Naturalists | 0
    Psychologists | 0
    Sculptors | 0
    Wives of Presidents | 0

    Table 1. Classifications from Notable American Women with no missing biographies in Wikipedia

    Characteristics that are highly predictive of success in Wikipedia for women include association with a powerful man, as in the wives of presidents, and recognition in a male-dominated field of science, social science, or art. Additionally, extraordinary women, such as heroines, and women who are quite rare, such as Indian captives, also have a greater chance of success in Wikipedia.[1]
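
    The missing-biographies quotient behind these tables is simple to compute. The sketch below is a minimal illustration of that counting, assuming a hand-compiled list of names from the classified list of biographies; the MediaWiki query API it calls is real, but the exact-title matching is a deliberate simplification (in practice, redirects, name variants, stubs, and disambiguation pages all require the kind of manual review reflected in the tables that follow).

        import requests

        # Illustrative subset of names from one classification in
        # Notable American Women (the real analysis used full sections
        # of the classified list of biographies).
        names = [
            "Emily Wayland Dinwiddie",
            "Sophonisba Breckinridge",
            "Mary Richmond",
        ]

        API = "https://en.wikipedia.org/w/api.php"

        def has_article(title):
            """True if English Wikipedia has a page under this title."""
            params = {
                "action": "query",
                "titles": title,
                "redirects": 1,  # follow redirects, e.g. name variants
                "format": "json",
            }
            pages = requests.get(API, params=params).json()["query"]["pages"]
            # The API reports nonexistent titles under a negative page id.
            return all(not page_id.startswith("-") for page_id in pages)

        missing = [n for n in names if not has_article(n)]
        print(f"missing {len(missing)} of {len(names)} "
              f"({100 * len(missing) / len(names):.0f}%)")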

    Further analysis of the classifications with greater proportions of missing women reflects Gerda Lerner’s complaint that the history of notable women is the story of exceptional or deviant women (Lerner 1975).  “Social worker,” which has the highest percentage of missing biographies at 67%, illustrates that individuals associated with female-dominated endeavors are less likely to be considered notable unless they rise to a level of exceptionalism (Table 2).

    Name | Included?
    Dinwiddie, Emily Wayland | no
    Glenn, Mary Willcox Brown | no
    Kingsbury, Susan Myra | no
    Lothrop, Alice Louise Higgins | no
    Pratt, Anna Beach | no
    Regan, Agnes Gertrude | no
    Breckinridge, Sophonisba Preston | page
    Richmond, Mary Ellen | page
    Smith, Zilpha Drew | stub

    Table 2. Social Workers from Notable American Women by inclusion in Wikipedia

    Sophonisba Preston Breckinridge’s Wikipedia entry describes her as “an American activist, Progressive Era social reformer, social scientist and innovator in higher education” who was also “the first woman to earn a Ph.D. in political science and economics then the J.D. at the University of Chicago, and she was the first woman to pass the Kentucky bar” (“Sophonisba Breckinridge” 2017). While the page points out that “She led the process of creating the academic professional discipline and degree for social work,” her page is not linked to the category of American social workers (“Category:American Social Workers” 2015). If a female historical figure isn’t as exceptional as Breckinridge, she needs to be a “first,” like Mary Ellen Richmond, who makes it into Wikipedia as the “social work pioneer” (“Mary Richmond” 2017).

    This conclusion that being a “first” facilitates success in Wikipedia is supported by analysis of the classification of nurses. Of the ten nurses who have Wikipedia entries, 80% are credited with some sort of temporally marked achievement, generally a first or pioneering role (Table 3).

    Individual | Was she a first? | Was she a participant in a male-dominated historical event? | Was she a founder?
    Delano, Jane Arminda | leading pioneer | World War I | founder of the American Red Cross Nursing Service
    Fedde, Sister Elizabeth* | | | established the Norwegian Relief Society
    Maxwell, Anna Caroline | pioneering activities | Spanish-American War |
    Nutting, Mary Adelaide | world’s first professor of nursing | World War I | founded the American Society of Superintendents of Training Schools for Nurses
    Richards, Linda | first professionally trained American nurse, pioneering modern nursing in the United States | No | pioneered the founding and superintending of nursing training schools across the nation
    Robb, Isabel Adams Hampton | early leader (held many “first” positions) | No | helped to found … the National League for Nursing, the International Council of Nurses, and the American Nurses Association
    Stimson, Julia Catherine | first woman to attain the rank of Major | World War I |
    Wald, Lillian D. | coined the term “public health nurse” & the founder of American community nursing | No | founded Henry Street Settlement
    Mahoney, Mary Eliza | first African American to study and work as a professionally trained nurse in the US | No | co-founded the National Association of Colored Graduate Nurses
    Thoms, Adah B. Samuels | | World War I | co-founded the National Association of Colored Graduate Nurses

    * Fedde appears in Wikipedia primarily as a Norwegian Lutheran Deaconess. The word “nurse” does not appear on her page.

    Table 3. Nurses from Notable American Women by inclusion in Wikipedia

    As the entries for nurses reveal, in addition to being first, several other factors work in a female subject’s favor in achieving success in Wikipedia. Nurses who founded an institution or organization, or who participated in a male-dominated event already recognized as historically significant, such as war, were more successful than those who did not.

    If distinguishing oneself by being “first” or founding something as part of a male-dominated event facilitates higher levels of inclusion in Wikipedia for women in female-dominated fields, do these factors also explain how women from classifications that are not female-dominated succeed? Looking at labor leaders, it appears these factors can offer only a partial explanation (Table 4).

    Individual | Was she a first? | Was she a participant in a male-dominated historical event? | Was she a founder? | Description from Wikipedia
    Bagley, Sarah G. | “probably the first” | No | formed the Lowell Female Labor Reform Association | headed up the female department of a newspaper until fired because “a female department. … would conflict with the opinions of the mushroom aristocracy … and beside it would not be dignified”
    Barry, Leonora Marie Kearney | “only woman,” “first woman” | Knights of Labor | | “difficulties faced by a woman attempting to organize men in a male-dominated society. Employers also refused to allow her to investigate their factories.”
    Bellanca, Dorothy Jacobs | “first full-time female organizer” | No | organized the Baltimore buttonhole makers into Local 170 of the United Garment Workers of America; one of four women who attended the founding convention of the Amalgamated Clothing Workers of America | “men resented” her
    Haley, Margaret Angela | “pioneer leader” | No | No | dubbed the “lady labor slugger”
    Jones, Mary Harris | No | Knights of Labor, IWW | | “most dangerous woman in America”
    Nestor, Agnes | No | Women’s Trade Union League | founded the International Glove Workers Union |
    O’Reilly, Leonora | No | Women’s Trade Union League | founded the Wage Earners Suffrage League | “O’Reilly as a public speaker was thought to be out of place for women at this time in New York’s history.”
    O’Sullivan, Mary Kenney | the first woman the AFL employed | Women’s Trade Union League | founder of the Women’s Trade Union League |
    Stevens, Alzina Parsons | first probation officer | Knights of Labor | |

    Table 4. Labor leaders from Notable American Women by inclusion in Wikipedia

    In addition to being a “first” or founding something, two other variables emerge from the analysis of labor leaders that predict success in Wikipedia. One is quite heartening: affiliation with the Women’s Trade Union League (WTUL), a significant female-dominated historical organization, seems to translate into greater recognition as historically notable. Less optimistically, it also appears that what Lerner labeled “notorious” behavior predicts success: the entries for six of the nine women highlight such behavior, for reasons ranging from speaking out publicly to advocating resistance.

    The conclusions here can be spun two ways. If we want to get women into Wikipedia, to surmount the obstacle of notability, we should write about women who fit well within the great man school of history. This could be reinforced within the architecture of Wikipedia by creating links within a woman’s entry to men and significant historical events, while also making sure that the entry emphasizes a woman’s “firsts” and her institutional ties. Following these practices will make an entry more likely to overcome challenges and provide a defense against proposed deletion.  On the other hand, these are narrow criteria for meeting notability that will likely not encompass a wide range of female figures from the past.

    The larger question remains: should we bother to work in Wikipedia at all? (Raval 2014). Wikipedia’s content is biased not only by gender, but also by race and region (“Racial Bias on Wikipedia” 2017). A concrete example of this intersectional bias can be seen in the fact that “only nine of Haiti’s 37 first ladies have Wikipedia articles, whereas all 45 first ladies of the United States have entries” (Frisella 2017). Critics have also pointed to the devaluation of Indigenous forms of knowledge within Wikipedia (Senier 2014; Gallart and van der Velden 2015).

    Wikipedia, billed as “the encyclopedia anyone can edit” and purporting to offer “the sum of all human knowledge,” is notorious for achieving neither goal. Wikipedia’s content suffers from systemic bias related to the unbalanced demographics of its contributor base (Wikipedia, 2004, 2009c). I have highlighted here disparities in gendered content, which parallel the well-documented gender biases against female contributors (“Wikipedia:WikiProject Countering Systemic Bias” 2017). The average editor of Wikipedia is white, from Western Europe or the United States, between 30 and 40 years old, and overwhelmingly male. Furthermore, “super users” contribute most of Wikipedia’s content. A 2014 analysis revealed that “the top 5,000 article creators on English Wikipedia have created 60% of all articles on the project. The top 1,000 article creators account for 42% of all Wikipedia articles alone.” A study of a small sample of these super users revealed that they are not writing about women: “The amount of these super page creators only exacerbates the [gender] problem, as it means that the users who are mass-creating pages are probably not doing neglected topics, and this tilts our coverage disproportionately towards male-oriented topics” (Hale 2014). For example, the “List of Pornographic Actresses” on Wikipedia is lengthier and more actively edited than the “List of Female Poets” (Kleeman 2015).

    The hostility within Wikipedia against female contributors remains a significant barrier to altering its content, since the major mechanism for rectifying the lack of entries about women is to encourage women to contribute them (New York Times 2011; Peake 2015; Paling 2015). Despite years of concerted efforts to make Wikipedia more hospitable toward women, to organize edit-a-thons, and to place Wikipedians in residencies specifically designed to add women to the online encyclopedia, the results have been disappointing (MacAulay and Visser 2016; Khan 2016). The authors of a recent study of “Wikipedia’s infrastructure and the gender gap” point to “foundational epistemologies that exclude women, in addition to other groups of knowers whose knowledge does not accord with the standards and models established through this infrastructure,” which include “hidden layers of gendering at the levels of code, policy and logics” (Wajcman and Ford 2017).

    Among these policies is the way notability is implemented to determine whether content is worthy of inclusion. The issues I raise here are not new; Adrianne Wadewitz, an early and influential feminist Wikipedian, noted in 2013 that “A lack of diversity amongst editors means that, for example, topics typically associated with femininity are underrepresented and often actively deleted” (Wadewitz 2013). Wadewitz pointed to efforts to delete articles about Kate Middleton’s wedding gown, as well as the speedy nomination for deletion of an entry for reproductive rights activist Sandra Fluke. Both pages survived, Wadewitz emphasized, reflecting the way in which Wikipedia guidelines develop through practice, despite their ostensible stability.

    This is important to remember – Wikipedia’s policies, like everything on the site, evolves and changes as the community changes. … There is nothing more essential than seeing that these policies on Wikipedia are evolving and that if we as feminists and academics want them to evolve in ways we feel reflect the progressive politics important to us, we must participate in the conversation. Wikipedia is a community and we have to join it. (Wadewitz 2013)

    While I have offered some pragmatic suggestions here about how to surmount the notability criteria in Wikipedia, I want to close by echoing Wadewitz’s sentiment that the greater challenge must be to question how notability is implemented in Wikipedia praxis.

    _____

    Michelle Moravec is an associate professor of history at Rosemont College.


    _____

    Notes

    [1] Seven of the eleven categories in my study with fewer than ten individuals have no missing individuals.


  • David Golumbia — The Digital Turn

    David Golumbia

    Is there, was there, will there be, a digital turn? In (cultural, textual, media, critical, all) scholarship, in life, in society, in politics, everywhere? What would its principles be?

    The short prompt I offered to the contributors to this special issue did not presume to know the answers to these questions.

    That means, I hope, that these essays join a growing body of scholarship and critical writing (much, though not by any means all, of it discussed in the essays that make up this collection) that suspends judgment about certain epochal assumptions built deep into the foundations of too much practice, thought, and even scholarship about just these questions.

    • In “The New Pythagoreans,” Chris Gilliard and Hugh Culik look closely at the long history of Pythagorean mystic belief in the power of mathematics and its near-exact parallels in contemporary promotion of digital technology, and especially surrounding so-called big data.
    • In “From Megatechnic Bribe to Megatechnic Blackmail: Mumford’s ‘Megamachine’ after the Digital Turn,” Zachary Loeb asks about the nature of the literal and metaphorical machines around us via a discussion of the work of the twentieth-century writer and social critic Lewis Mumford, one of the thinkers who most fully anticipated the digital revolution and understood its likely consequences.
    • In “Digital Proudhonism,” Gavin Mueller writes that “a return to Marx’s critique of Proudhon will aid us in piercing through the Digital Proudhonist mystifications of the Internet’s effects on politics and industry and reformulate both a theory of cultural production under digital capitalism as well as radical politics of work and technology for the 21st century.”
    • In “Mapping Without Tools: What the Digital Turn Can Learn from the Cartographic Turn,” Tim Duffy pushes back “against the valorization of ‘tools’ and ‘making’ in the digital turn, particularly its manifestation in digital humanities (DH), by reflecting on illustrative examples of the cartographic turn, which, from its roots in the sixteenth century through to J.B. Harley’s explosive provocation in 1989 (and beyond) has labored to understand the relationship between the practice of making maps and the experiences of looking at and using them.  By considering the stubborn and defining spiritual roots of cartographic research and the way fantasies of empiricism helped to hide the more nefarious and oppressive applications of their work, I hope to provide a mirror for the state of the digital humanities, a field always under attack, always defining and defending itself, and always fluid in its goals and motions.”
    • Joseph Erb, Joanna Hearne, and Mark Palmer with Durbin Feeling, in “Origin Stories in the Genealogy of Cherokee Language Technology,” argue that “the surge of critical work in digital technology and new media studies has rarely acknowledged the centrality of Indigeneity to our understanding of systems such as mobile technologies, major programs such as Geographic Information Systems (GIS), digital aesthetic forms such as animation, or structural and infrastructural elements of hardware, circuitry, and code.”
    • In “Artificial Saviors,” tante connects the pseudo-religious and pseudo-scientific rhetoric found at a surprising rate among digital technology developers and enthusiasts: “When AI morphed from idea or experiment to belief system, hackers, programmers, ‘data scientists,’ and software architects became the high priests of a religious movement that the public never identified and parsed as such.”
    • In “The Endless Night of Wikipedia’s Notable Woman Problem,” Michelle Moravec “takes on one of the ‘tests’ used to determine whether content is worthy of inclusion in Wikipedia, notability, to explore how the purportedly neutral concept works against efforts to create entries about female historical figures.”
    • In “The Computational Unconscious,” Jonathan Beller interrogates the “penetration of the digital, rendering early on the brutal and precise calculus of the dimensions of cargo-holds in slave ships and the sparse economic accounts of ship ledgers of the Middle Passage, double entry bookkeeping, the rationalization of production and wages in the assembly line, and more recently, cameras and modern computing.”
    • In “What Indigenous Literature Can Bring to Electronic Archives,” Siobhan Senier asks, “How can the insights of the more ethnographically oriented Indigenous digital archives inform digital literary collections, and vice versa? How do questions of repatriation, reciprocity, and culturally sensitive contextualization change, if at all, when we consider Indigenous writing?”
    • Rob Hunter provides the following abstract of “The Digital Turn and the Ethical Turn: Depoliticization in Digital Practice and Political Theory”:

      The digital turn is associated with considerable enthusiasm for the democratic or even emancipatory potential of networked computing. Free, libre, and open source (FLOSS) developers and maintainers frequently endorse the claim that the digital turn promotes democracy in the form of improved deliberation and equalized access to information, networks, and institutions. Interpreted in this way, democracy is an ethical practice rather than a form of struggle or contestation. I argue that this depoliticized conception of democracy draws on commitments—regarding personal autonomy, the ethics of intersubjectivity, and suspicion of mass politics—that are also present in recent strands of liberal political thought. Both the rhetorical strategies characteristic of FLOSS as well as the arguments for deliberative democracy advanced within contemporary political theory share similar contradictions and are vulnerable to similar critiques—above all in their pathologization of disagreement and conflict. I identify and examine the contradictions within FLOSS, particularly those between commitments to existing property relations and the championing of individual freedom. I conclude that, despite the real achievements of the FLOSS movement, its depoliticized conception of democracy is self-inhibiting and tends toward quietistic refusals to consider the merits of collective action or the necessity of social critique.

    • John Pat Leary, in “Innovation and the Neoliberal Idioms of Development,” “explores the individualistic, market-based ideology of ‘innovation’ as it circulates from the English-speaking first world to the so-called third world, where it supplements, when it does not replace, what was once more exclusively called ‘development.’” He works “to define the ideology of ‘innovation’ that undergirds these projects, and to dissect the Anglo-American ego-ideal that it circulates. As an ideology, innovation is driven by a powerful belief, not only in technology and its benevolence, but in a vision of the innovator: the autonomous visionary whose creativity allows him to anticipate and shape capitalist markets.”
    • Annemarie Perez, in “UndocuDreamers: Public Writing and the Digital Turn,” writes of a “paradox” she finds in her work with students who belong to communities targeted by recent immigration enforcement crackdowns and the default assumptions about “open” and “public” found in so much digital rhetoric: “My students should write in public. Part of what they are learning in Chicanx studies is about the importance of their voices, of their experiences and their stories are ones that should be told. Yet, given the risks in discussing migration and immigration through the use of public writing, I wonder how I as an instructor should either encourage or discourage students from writing their lives, their experiences as undocumented migrants, experiences which have touched every aspect of their lives.”
    • Gretchen Soderlund, in “Futures of Journalisms Past (or, Pasts of Journalism’s Future),” looks at discourses of “the future” in journalism from the 19th and 20th centuries, in order to help frame current discourses about journalism’s “digital future,” in part because “when it comes to technological and economic speedup, journalism may be the canary in the mine.”
    • In “The Singularity in the 1790s: Toward a Prehistory of the Present With William Godwin and Thomas Malthus,” Anthony Galluzzo examines the often-misunderstood and misrepresented writings of William Godwin, and also those of Thomas Malthus, to demonstrate how far back in English-speaking political history go the roots of today’s technological Prometheanism, and how destructive it can be, especially for the political left.

    “Digital Turn” Table of Contents

  • Richard Hill — States, Governance, and Internet Fragmentation (Review of Mueller, Will the Internet Fragment?)

    a review of Milton Mueller, Will the Internet Fragment? Sovereignty, Globalization and Cyberspace (Polity, 2017)

    by Richard Hill

    ~

    Like other books by Milton Mueller, Will the Internet Fragment? is a must-read for anybody who is seriously interested in the development of Internet governance and its likely effects on other walks of life.  This is true because of, and not despite, the fact that it is a tract that does not present an unbiased view. On the contrary, it advocates a certain approach, namely a utopian form of governance which Mueller refers to as “popular sovereignty in cyberspace”.

    Mueller, Professor of Information Security and Privacy at Georgia Tech, is an internationally prominent scholar specializing in the political economy of information and communication.  The author of seven books and scores of journal articles, his work informs not only public policy but also science and technology studies, law, economics, communications, and international studies.  His books Networks and States: The Global Politics of Internet Governance (MIT Press, 2010) and Ruling the Root: Internet Governance and the Taming of Cyberspace (MIT Press, 2002) are acclaimed scholarly accounts of the global governance regime emerging around the Internet.

    Most of Will the Internet Fragment? consists of a rigorous analysis of what has been commonly referred to as “fragmentation,” showing that very different technological and legal phenomena have been conflated in ways that do not favour productive discussions.  So-called “fragmentation” is usually defined as the contrary of the desired situation in which “every device on the Internet should be able to exchange data packets with any other device that was willing to receive them” (p. 6 of the book, citing Vint Cerf).  But, as Mueller correctly points out, not all end-points of the Internet can reach all other end-points at all times, and there may be very good reasons for that (e.g. corporate firewalls, temporary network outages, etc.).  Mueller then shows how network effects (the fact that the usefulness of a network increases as it becomes larger) will tend to prevent or counter fragmentation: a subset of the network is less useful than the whole.  He also shows how network effects can prevent the creation of alternative networks: once everybody is using a given network, why switch to an alternative that few are using?  As Mueller aptly points out (pp. 63-66), the slowness of the transition to IPv6 is due to this type of network effect.
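
    To make the arithmetic behind this concrete (a textbook Metcalfe-style illustration, not a formula Mueller himself offers): if the value of a network of n mutually reachable users grows roughly as the square of n, then splitting the network into two disconnected halves destroys half of its total value, since

    \[
    2\left(\frac{n}{2}\right)^{2} = \frac{n^{2}}{2} < n^{2}.
    \]

    The same heuristic explains the adoption problem Mueller describes: an alternative network with few users offers each prospective switcher only a small fraction of the reachable counterparties of the incumbent.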

    The key contribution of this book is that it clearly identifies the real question of interest to those who are concerned about the governance of the Internet and its impact on much of our lives.  That question (which might have been a better subtitle) is: “to what extent, if any, should Internet policies be aligned with national borders?”  (See in particular pp. 71, 73, 107, 126 and 145.)  Mueller’s answer is basically “as little as possible, because supra-national governance by the Internet community is preferable”.  This answer is presumably motivated by Mueller’s view that “institutions shift power from states to society” (p. 116), which implies that “society” has little power in modern states.  But (at least ideally) states should be the expression of a society (as Mueller acknowledges on pp. 124 and 136), so it would have been helpful if Mueller had elaborated on the ways (and there are many) in which he believes states do not reflect society, and on the ways in which so-called multi-stakeholder models would not be worse and would not result in a denial of democracy.

    Before commenting on Mueller’s proposal for supra-national governance, it is worth commenting on some areas where a more extensive discussion would have been warranted.  We note, however, that the book is part of a series that is deliberately intended to be short and accessible to a lay public.  So Mueller had a 30,000-word limit and tried to keep things written in a way that non-specialists and non-scholars could access.  This no doubt largely explains why he didn’t cover certain topics in more depth.

    Be that as it may, the discussion would have been improved by being placed in the long-term context of the steady decrease in national sovereignty that started in 1648, when sovereigns agreed in the Treaty of Westphalia to refrain from interfering in the religious affairs of foreign states, and that accelerated in the 20th century; and in the short-term context of the dominance of key aspects of the Internet and its governance by the USA as a state (which Mueller acknowledges in passing on p. 12) and by US companies.  Mueller is deeply aware of these issues and has discussed them in his other books, in particular Ruling the Root and Networks and States, so it would have been nice to see the topic treated here, with references to the end of the Cold War and what appears to be the re-emergence of some sort of equivalent international tension (albeit not for the same reasons and with different effects, at least for what concerns cyberspace).  It would also have been preferable to include at least some mention of the literature on the negative economic and social effects of current Internet governance arrangements.

    It is telling that, in Will the Internet Fragment?, Mueller starts his account with the 2014 NetMundial event, without mentioning that it took place in the context of the outcomes of the World Summit on the Information Society (WSIS, whose genesis, dynamics, and outcomes Mueller analyzed well in Networks and States), and without mentioning that the outcome document of the 2015 UN WSIS+10 Review reaffirmed the WSIS outcomes and merely noted that Brazil had organized NetMundial, which was, in context, an explicit refusal to note (much less to endorse) the NetMundial outcome document.

    The UN’s reaffirmation of the WSIS outcomes is significant because, as Mueller correctly notes, the real question that underpins all current discussions of Internet governance is “what is the role of states?,” and the Tunis Agenda states: “Policy authority for Internet-related public policy issues is the sovereign right of States. They have rights and responsibilities for international Internet-related public policy issues.”

    Mueller correctly identifies and discusses the positive externalities created by the Internet (pp. 44-48).  It would have been better if he had noted that there are also negative externalities, in particular regarding security (see section 2.8 of my June 2017 submission to ITU’s CWG-Internet), and that the role of states includes internalizing such externalities, as well as preventing anti-competitive behavior.

    It is also telling that Mueller never explicitly mentions a principle that is no longer seriously disputed, and that was explicitly enunciated in the formal outcome of the WSIS+10 Review, namely that offline law applies equally online.  Mueller does mention some issues related to jurisdiction, but he does not place those in the context of the fundamental principle that cyberspace is subject to the same laws as the rest of the world: as Mueller himself acknowledges (p. 145), allegations of cybercrime are judged by regular courts, not cyber-courts, and if you are convicted you will pay a real fine or be sent to a real prison, not to a cyber-prison.  But national jurisdiction is not just about security (p. 74 ff.); it is also about legal certainty for commercial dealings, such as the enforcement of contracts.  An increasing number of activities depend on the Internet, but they also depend on the existence of known legal regimes that can be enforced in national courts.

    And what about the tension between globalization and other values such as solidarity and cultural diversity?  As Mueller correctly notes (p. 10), the Internet is globalization on steroids.  Yet cultural values differ around the world (p. 125).  How can we get the benefits of both an unfragmented Internet and local cultural diversity (as opposed to the current trend to impose US values on the rest of the world)?

    While dealing with these issues in more depth would have complicated the discussion, it also would have made it more valuable, because the call for direct rule of the Internet by and for Internet users must either be reconciled with the principle that offline law applies equally online, or be combined with a reasoned argument for the abandonment of that principle.  As Mueller so aptly puts it (p. 11): “Internet governance is hard … also because of the mismatch between its global scope and the political and legal institutions for responding to societal problems.”

    Since most laws, and almost all enforcement mechanisms, are national, the influence of states on the Internet is inevitable.  Recall that the idea of enforceable rules (laws) dates back to at least 1700 BC and has formed an essential part of all civilizations in history.  Mueller correctly posits on p. 125 that a justification for territorial sovereignty is to restrict violence (only the state can legitimately exercise it), and wonders why, in that case, the entire world does not have a single government.  But he fails to note that, historically, much of the world was at times subject to a single government (think of the Roman Empire, the Mongol Empire, the Holy Roman Empire, the British Empire), and he does not explore the possibility of expanding the existing international order (treaties, UN agencies, etc.) to become a legitimate democratic world governance (which of course it is not at present, in part because the US does not want it to become one).  For example, a concrete step in the direction of using existing governance systems has recently been proposed by Microsoft: a Digital Geneva Convention.

    Mueller explains why national borders interfere with certain aspects of certain Internet activities (pp. 104, 106), but national borders interfere with many activities.  Yet we accept them because there doesn’t appear to be any “least worst” alternative.  Mueller does acknowledge that states have power, and rightly calls for states to limit their exercise of power to their own jurisdiction (p. 148).  But he posits that such power “carries much less weight than one would think” (p. 150), without justifying that far-reaching statement.  Indeed, Mueller admits that “it is difficult to conceive of an alternative” (p. 73), but does not delve into the details sufficiently to show convincingly how the solution that he sketches would not result in greater power for dominant private companies (and even corporatocracy or corporatism), increasing income inequality, and a denial of democracy.  For example, without the power of the state in the form of consumer protection measures, how can one ensure that private intermediaries would “moderate content based on user preferences and reports” (p. 147), as opposed to moderating content so as to maximize their profits?  Mueller assumes that there would be a sufficient level of competition, resulting in self-correcting forces and accountability (p. 129); but current trends are just the opposite: we see increasing concentration and domination in many aspects of the Internet (see section 2.11 of my June 2017 submission to ITU’s CWG-Internet), and some competition law authorities have found that some abuse of dominance has taken place.

    It seems to me that Mueller too easily concludes that “a state-centric approach to global governance cannot easily co-exist with a multistakeholder regime” (p. 117), without first exploring the nuances of multi-stakeholder regimes and the ways that they could interface with existing institutions, which include intergovernmental bodies as well as states.  As I have stated elsewhere: “The current arrangement for global governance is arguably similar to that of feudal Europe, whereby multiple arrangements of decision-making, including the Church, cities ruled by merchant-citizens, kingdoms, empires and guilds co-existed with little agreement as to which actor was actually in charge over a given territory or subject matter.  It was in this tangled system that the nation-state system gained legitimacy precisely because it offered a clear hierarchy of authority for addressing issues of the commons and provision of public goods.”

    Which brings us to another key point that Mueller does not consider in any depth: if the Internet is a global public good, then its governance must take into account the views and needs of all the world’s citizens, not just those that are privileged enough to have access at present.  But Mueller’s solution would restrict policy-making to those who are willing and able to participate in various so-called multi-stakeholder forums (apparently Mueller does not envisage a vast increase in participation and representation in these; p. 120).  Apart from the fact that that group is not a community in any real sense (a point acknowledged on p. 139), it comprises, at present, only about half of humanity, and even much of that half would not be able to participate because discussions take place primarily in English, and require significant technical knowledge and significant time commitments.

    Mueller’s path for the future appears to me to be a modern version of the International Ad Hoc Committee (IAHC), but Mueller would probably disagree, since he is of the view that the IAHC was driven by intergovernmental organizations.  In any case, the IAHC work failed to be seminal because of the unilateral intervention of the US government, well described in Ruling the Root, which resulted in the creation of ICANN, thus sparking discussions of Internet governance in WSIS and elsewhere.  While Mueller is surely correct when he states that new governance methods are needed (p. 127), it seems a bit facile to conclude that “the nation-state is the wrong unit” and that it would be better to rely largely on “global Internet governance institutions rooted in non-state actors” (p. 129), without explaining how such institutions would be democratic and representative of all of the world’s citizens.

    Mueller correctly notes (p. 150) that, historically, there have been major changes in sovereignty: the emergence and fall of empires, the creation of new nations, changes in national borders, etc.  But he fails to note that most of those changes were the result of significant violence and use of force.  If, as he hopes, the “Internet community” is to assert sovereignty and displace the existing sovereignty of states, how will it do so?  Through real violence?  Through cyber-violence?  Through civil disobedience (e.g. migrating to bitcoin, or implementing strong encryption no matter what governments think)?  By resisting efforts to move discussions into the World Trade Organization?  Or by persuading states to relinquish power willingly?  It would have been good if Mueller had addressed, at least summarily, such questions.

    Before concluding, I note a number of more-or-less minor errors that might lead readers to imprecise understandings of important events and issues.  For example, p. 37 states that “the US and the Internet technical community created a global institution, ICANN”: in reality, the leaders of the Internet technical community obeyed the unilateral diktat of the US government (at first somewhat reluctantly and later willingly) and created a California non-profit company, ICANN.  And ICANN is not insulated from jurisdictional differences; it is fully subject to US laws and US courts.  The discussion on pp. 37-41 fails to take into account the fact that a significant portion of the DNS, the ccTLDs, is already aligned with national borders, and that there are non-national telephone numbers; the real differences between the DNS and telephone numbers are that most URLs are non-national, whereas few telephone numbers are non-national; that national telephone numbers are given only to residents of the corresponding country; and that there is an international real-time mechanism for resolving URLs that everybody uses, whereas each telephone operator has to set up its own resolving mechanism for telephone numbers.  Page 47 states that OSI was “developed by Europe-centered international organizations,” whereas actually it was developed by private companies from both the USA (including AT&T, Digital Equipment Corporation, Hewlett-Packard, etc.) and Europe working within global standards organizations (IEC, ISO, and ITU), all of which happen to have secretariats in Geneva, Switzerland; the Internet, by contrast, was initially developed and funded by an arm of the US Department of Defense, and the foundation of the WWW was initially developed in a European intergovernmental organization.  Page 100 states that “The ITU has been trying to displace or replace ICANN since its inception in 1998,” whereas a correct statement would be “While some states have called for the ITU to displace or replace ICANN since its inception in 1998, such proposals have never gained significant support and appear to have faded away recently.”  Not everybody thinks that the IANA transition was a success (p. 117), nor that it is an appropriate model for the future (pp. 132-135; 136-137), and it is worth noting that ICANN successfully withstood many challenges (p. 100) while it had a formal link to the US government; it remains to be seen how ICANN will fare now that it is independent of the US government.  ICANN and the RIRs do not have a “‘transnational’ jurisdiction created through private contracts” (p. 117); they are private entities subject to national law, and the private contracts in question are also subject to national law (and enforced by national authorities, even if disputes are resolved by international arbitration).  I doubt that it is a “small step from community to nation” (p. 142), and it is not obvious why anti-capitalist movements (which tend to be internationalist) would “end up empowering territorial states and reinforcing alignment” (p. 147), when it is capitalist movements that rely on the power of territorial states to enforce national laws, for example regarding intellectual property rights.
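
    One of the contrasts noted above, between the single shared mechanism for resolving names and per-operator telephone routing, is easy to demonstrate concretely. The following sketch uses only the Python standard library, and the domain names are just placeholder examples:

        import socket

        # Every lookup traverses the same globally coordinated DNS
        # hierarchy (root servers -> TLD servers -> authoritative
        # servers), regardless of who operates the querying network;
        # no bilateral, per-operator arrangement is involved.
        for name in ("example.com", "example.org"):
            print(name, "->", socket.gethostbyname(name))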

    Despite these minor quibbles, this book, and its references (albeit not as extensive as one would have hoped), will be a valuable starting point for future discussions of internet alignment and/or “fragmentation.” Surely there will be much future discussion, and many more analyses and calls for action, regarding what may well be one of the most important issues that humanity now faces: the transition from the industrial era to the information era and the disruptions arising from that transition.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2 Review Digital Studies magazine.


  • Quinn DuPont – Ubiquitous Computing, Intermittent Critique

    a review of Ulrik Ekman, Jay David Bolter, Lily Díaz, Morten Søndergaard, and Maria Engberg, eds., Ubiquitous Computing, Complexity, and Culture (Routledge 2016)

    by Quinn DuPont

    ~

    It is a truism today that digital technologies are ubiquitous in Western society (and increasingly so for the rest of the globe). With this ubiquity, it seems, comes complexity. This is the gambit of Ubiquitous Computing, Complexity, and Culture (Routledge 2016), a new volume edited by Ulrik Ekman, Jay David Bolter, Lily Díaz, Morten Søndergaard, and Maria Engberg.

    There are of course many ways to approach such a large and important topic: from the study of political economy, technology (sometimes leaning towards technological determinism or instrumentalism), discourse and rhetoric, globalization, or art and media. This collection focuses on art and media. In fact, only a small fraction of the chapters do not deal either entirely or mostly with art, art practices, and artists. Similarly, the volume includes a significant number of interviews with artists (six of the forty-three chapters and editorial introductions). This focus on art and media is both the volume’s strength and one of its major weaknesses.

    By focusing on art, Ubiquitous Computing, Complexity, and Culture pushes the bounds of how we might commonly understand contemporary technology practice and development. For example, in their chapter, Dietmar Offenhuber and Orkan Telhan develop a framework for understanding, and potentially deploying, indexical visualizations for complex interfaces. Offenhuber and Telhan use James Turrell’s art installation Meeting as an example of the conceptual shortening of causal distance between object and representation, as a kind of Peircean index, and one such way to think about systems of representation. Another of their examples, Natalie Jeremijenko’s One Trees installation of one hundred cloned trees, strengthens and complicates the idea of the causal index, since the trees are from identical genetic stock, yet develop in natural and different ways. The uniqueness of the fully-grown trees is a literal “visualization” of their different environments, not unlike a seismograph, a characteristic indexical visualization technology. From these examples, Offenhuber and Telhan conclude that indexical visualizations may offer a fruitful “set of constraints” (300) that the information designer might draw on when developing new interfaces that deal with massive complexity. Many other examples and interrogations of art and art practices throughout the chapters offer unexpected and penetrating analysis into facets of ubiquitous and complex technologies.

    James Turrell, Meeting 2016, MoMA PS1 (photos by Pablo Enriquez)

    A persistent challenge with art and media analyses of digital technology and computing, however, is that the familiar and convenient epistemological orientation, and the ready comparisons that result, are often to film, cinema, and theater. Studies reliant on this epistemology tend to make a range of interesting yet ultimately illusory observations, which fail to explain the richness and uniqueness of modern information technologies. In my opinion, there are many important ways in which film, cinema, and theater are simply not like modern digital technologies. Such an epistemological orientation is, arguably, a consequence of the history of disciplinary allegiances—symptomatic of digital studies and new media studies originating from screen studies—and, more proximately, of Lev Manovich’s agenda-setting Language of New Media (2001), which reveled in the mimetic connections resulting from the historical quirk that the most obvious computing technologies tend to have screens.

    Because of this orientation, some of the chapters fail to critically engage with technologies, events, and practices largely affecting lived society. A very good artwork may go a long way to exposing social and political activities that might otherwise be invisible or known only to specialists, but it is the role of the critic and the academic to concretize these activities, and draw thick connections between art and “conventional” social issues. Concrete specificity, while avoiding reductionist traps, is the key to avoiding what amounts to belated criticism.

    This specificity about social issues might come in the form of engagement with normative aspects of ubiquitous and complex digital technologies. Instead of explaining why surveillance is a feature of modern life (as several chapters do, which is, by now, well-worn academic ground), it might be more useful to ask why consumers and policy-makers alike have turned so quickly to privacy-enhancing technologies as a solution (to be sold by the high-technology industry). In a similar vein, unsexy aspects of wearable technologies (accessibility) now offer potential assistance and perceptual, physical, or cognitive enhancement (as described in Ellis and Goggin’s chapter), alongside unprecedented surveillance and monetization opportunities. Digital infrastructures—both active and failing—now drive a great deal of modern society, but despite their ubiquity, they are hard to see, and therefore, tend not to get much attention. These kinds of banal and invisible—ubiquitous—cases tend not to be captured in the boundary-pushing work of artists, and are underrepresented (though not entirely absent) in the analyses here.

    A number of chapters also trade on old canards, such as worrying about information overload, “junk” data whizzing across the Internet, time “wasted” online, online narcissism, business models based solely on data collection, and “declining” privacy. Whether any of these things is empirically true—when viewed contextually and precisely—is somewhat beside the point if we are not offered new analyses or solutions. Otherwise, these kinds of criticisms run the risk of sounding like old people nostalgically complaining about an imagined world before technological or informational ubiquity and complexity. “Traditional” human values might be an important subject of study, but not as the pile-on Left-leaning liberal romanticism prevalent in far too many humanistic inquiries into the digital.

    Another issue is that some of the chapters seem to be oddly antiquated for a book published in 2016. As we all know, the publication of edited collections can often take longer than anyone would like, but for several chapters, the examples, terminology, and references feel unusually dated. These dated chapters do not necessarily have the advantage of critical distance (in the way that properly historical study does), and neither do they capture the pulse of the current situation—they just feel old.

    Before turning to a sample of the truly excellent chapters in this volume, I must pause to make a comment about the book’s physical production. On the back cover, Jussi Parikka calls Ubiquitous Computing, Complexity, and Culture a “massively important volume.” This assessment might have been simplified by just calling it “a massive volume.” Indeed, using some back-of-the-napkin calculations, the 406 dense pages amounts to about 330,000 words. Like cheesecake, sometimes a little bit of something is better than a lot. And, while such a large book might seem like good value, the pragmatics of putting an estimated 330,000 words into a single volume requires considerable care to typesetting and layout, which unfortunately is not the case here. At about 90 characters per line, and 46 lines per page—all set in a single column—the tiny text set on extremely long lines strains even this relatively young reviewer’s eyes and practical comprehension. When trudging through already-dense theory and the obfuscated rhetoric that typically accompanies it (common in this edited collection), the reading experience is often painful. On the positive side, in the middle of the 406 pages of text there are an additional 32 pages of full-color plates, a nice addition and an effective way to highlight the volume’s sympathies in art and media. An extensive index is also included.

    Despite my criticisms of the approach of many of the chapters, the book’s typesetting and layout, and the editors’ decision to attempt to collocate so much material in a single volume, there are a number of outstanding chapters, which more than redeem any other weaknesses.

    Elaborating on a theme from her 2011 book Programmed Visions (MIT), Wendy H.K. Chun describes why memory, and the ability to forget, is an important aspect of Mark Weiser’s original notion of ubiquitous computing (in his 1991 Scientific American article). (Chun also notes that the word “ubiquitous” comes from the “Ubiquitarians,” a Lutheran sect who believed Christ was present ‘everywhere at once’ and therefore invisible.) According to Chun’s reading of Weiser, to get to a state of ubiquitous computing, machines must lose their individualized identity or importance. Therefore, unindividuated computers had to remember, by tracking users, so that users could correspondingly forget (about the technology) and “thus think and live” (161). The long history of computer memory, and its rhetorical emergence out of technical “storage,” is an essential aspect of the origins of our current technological landscape. Chun notes that prior to the EDVAC machine (and its strategic alignment with cognitive models of computation), storage was a well-understood word, which etymologically suggested an orientation to the future (“stores look toward a future”). Memory, on the other hand, contained within it the act of recall and repetition (recall Meno’s slave in Plato’s dialogue). So, when EDVAC embedded memory within the machine, it changed “memory by making memory storage” (162). If we wanted to rehabilitate Weiser’s original image of being able to “think and live,” Chun argues, we would need to refuse the “deadening of the world brought about by memory as storage and realize the fundamentally collective nature of memory and writing” (162).

    Sean Cubitt does an excellent job of exposing the political economy of ubiquitous technologies by focusing on the ways that enclosure and externalization occur in information environments, interrogating the term “information economy.” Cubitt traces the history of enclosures from the alienation of fifteenth-century peasants from their land, through the enclosure of skills to produce dead labour in nineteenth-century factories, to the conversion of knowledge into information today, which is subsequently stored in databases and commercialized as intellectual property—alienating individuals from their own knowledge. Accompanying this process are a range of externalizations, predominantly impacting the poor and the indigenous. One of the insightful examples Cubitt offers of this process of externalization is the regulation of radio spectrum in New Zealand, and the subsequent challenge by Maori people who, under the Waitangi Treaty, are entitled to “all forms of commons that pre-existed the European arrival” (218). According to the Maori, radio spectrum is a form of commons, and therefore the New Zealand government is not permitted to claim exclusive authority to manage the spectrum (as practically all Western governments do). Not content to simply offer critique, Cubitt concludes his chapter with a (very) brief discussion of potential solutions, focusing on the reimagining of peer-to-peer technology by Robert Verzola of the Philippines Green Party. Peer-to-peer technology, Cubitt tentatively suggests, may help reassert the commons as commonwealth, which might even salvage traditional knowledge from information capitalism.

    Katie Ellis and Gerard Goggin discuss the mechanisms of locative technologies for differently-abled people. Ellis and Goggin conclude that devices like the later-model iPhone (not the first release) and the now-maligned Google Glass offer unique value propositions for those engaged in a spectrum of impairment and “complex disability effects” (274). For people who rely on these devices for day-to-day assistance and wayfinding, they are ubiquitous in the sense Weiser originally imagined—disappearing from view and becoming integrated into individual lifeworlds.

    John Johnston ends the volume as strongly as N. Katherine Hayles’s short foreword opened it, describing the dynamics of “information events” in a world of viral media, big data, and, as he elaborates in an extended example, complex and high-speed financial instruments. Johnston describes how events like the 2010 “Flash Crash,” when the Dow fell nearly a thousand points, lost a trillion dollars in value, and rebounded within five minutes, are essentially uncontrollable and unpredictable. This narrative, Johnston points out, has been detailed before, but he twists it, arguing that such a financial system, in its totality, may be “fundamentally resistant to stability and controllability” (389). The reason for this fundamental instability and uncontrollability is that the financial market cannot be understood as a systematic, efficient system of exchange events that just happens to be problematically coded by high-frequency, automated, and limit-driven technologies today. Rather, the financial market is a “series of different layers of coded flows that are differentiated according to their relative power” (390). By understanding financialization as coded flows, of both power and information, we gain new insight into a critical technology that is both ubiquitous and complex.

    _____

    Quinn DuPont studies the roles of cryptography, cybersecurity, and code in society, and is an active researcher in digital studies, digital humanities, and media studies. He also writes on Bitcoin, cryptocurrencies, and blockchain technologies, and is currently involved in Canadian SCC/ISO blockchain standardization efforts. He has nearly a decade of industry experience as a Senior Information Specialist at IBM, IT consultant, and usability and experience designer.

    Back to the essay

  • Audrey Watters – The Best Way to Predict the Future is to Issue a Press Release

    Audrey Watters – The Best Way to Predict the Future is to Issue a Press Release

    By Audrey Watters

    ~

    This talk was delivered at Virginia Commonwealth University today as part of a seminar co-sponsored by the Departments of English and Sociology and the Media, Art, and Text PhD Program. The slides are also available here.

    Thank you very much for inviting me here to speak today. I’m particularly pleased to be speaking to those from Sociology and those from the English and those from the Media, Art, and Text departments, and I hope my talk can walk the line between and among disciplines and methods – or piss everyone off in equal measure. Either way.

    This is the last public talk I’ll deliver in 2016, and I confess I am relieved (I am exhausted!) as well as honored to be here. But when I finish this talk, my work for the year isn’t done. No rest for the wicked – ever, but particularly in the freelance economy.

    As I have done for the past six years, I will spend the rest of November and December publishing my review of what I deem the “Top Ed-Tech Trends” of the year. It’s an intense research project that usually tops out at about 75,000 words, written over the course of four to six weeks. I pick ten trends and themes in order to look closely at the recent past, the near-term history of education technology. Because of the amount of information that is published about ed-tech – the amount of information, its irrelevance, its incoherence, its lack of context – it can be quite challenging to keep up with what is really happening in ed-tech. And just as importantly, what is not happening.

    So that’s what I try to do. And I’ll boast right here – no shame in that – no one else does as in-depth or as thorough a job as I do, certainly no one who is entirely independent from venture capital, corporate or institutional backing, or philanthropic funding. (Of course, if you look for those education technology writers who are independent from venture capital, corporate or institutional backing, or philanthropic funding, there is pretty much only me.)

    The stories that I write about the “Top Ed-Tech Trends” are the antithesis of most articles you’ll see about education technology that invoke “top” and “trends.” For me, still framing my work that way – “top trends” – is a purposeful rhetorical move to shed light, to subvert, to offer a sly commentary of sorts on the shallowness of what passes as journalism, criticism, analysis. I’m not interested in making quickly thrown-together lists and bullet points. I’m not interested in publishing clickbait. I am interested nevertheless in the stories – shallow or sweeping – that we tell and spread about technology and education technology, about the future of education technology, about our technological future.

    Let me be clear, I am not a futurist – even though I’m often described as “ed-tech’s Cassandra.” The tagline of my website is “the history of the future of education,” and I’m much more interested in chronicling the predictions that others make, or have made, about the future of education than I am in writing predictions of my own.

    One of my favorites: “Books will soon be obsolete in schools,” Thomas Edison said in 1913. Any day now. Any day now.

    Here are a couple of more recent predictions:

    “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.” – that’s Sebastian Thrun, best known perhaps for his work at Google on the self-driving car and as a co-founder of the MOOC (massive open online course) startup Udacity. The quotation is from 2012.

    And from 2013, by Harvard Business School professor, author of the book The Innovator’s Dilemma, and popularizer of the phrase “disruptive innovation,” Clayton Christensen: “In fifteen years from now, half of US universities may be in bankruptcy. In the end I’m excited to see that happen. So pray for Harvard Business School if you wouldn’t mind.”

    Pray for Harvard Business School. No. I don’t think so.

    Both of these predictions are fantasy. Nightmarish, yes. But fantasy. Fantasy about a future of education. It’s a powerful story, but not a prediction made based on data or modeling or quantitative research into the growing (or shrinking) higher education sector. Indeed, according to the latest statistics from the Department of Education – now granted, this is from the 2012–2013 academic year – there are 4726 degree-granting postsecondary institutions in the United States. A 46% increase since 1980. There are, according to another source (non-governmental and less reliable, I think), over 25,000 universities in the world. This number is increasing year-over-year as well. So to predict that the vast vast majority of these schools (save Harvard, of course) will go away in the next decade or so or that they’ll be bankrupt or replaced by Silicon Valley’s version of online training is simply wishful thinking – dangerous, wishful thinking from two prominent figures who will benefit greatly if this particular fantasy comes true (and not just because they’ll get to claim that they predicted this future).

    Here’s my “take home” point: if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

    Fantasy. Fortune-telling. Or as capitalism prefers to call it “market research.”

    “Market research” involves fantastic stories of future markets. These predictions are often accompanied with a press release touting the size that this or that market will soon grow to – how many billions of dollars schools will spend on computers by 2020, how many billions of dollars of virtual reality gear schools will buy by 2025, how many billions of dollars schools will spend on robot tutors by 2030, how many billions of dollars companies will spend on online training by 2035, how big the coding bootcamp market will be by 2040, and so on. The markets, according to the press releases, are always growing. Fantasy.

    In 2011, the analyst firm Gartner predicted that annual tablet shipments would exceed 300 million units by 2015. Half of those, the firm said, would be iPads. IDC estimates that the total number of shipments in 2015 was actually around 207 million units. Apple sold just 50 million iPads. That’s not even the best worst Gartner prediction. In October of 2006, Gartner said that Apple’s “best bet for long-term success is to quit the hardware business and license the Mac to Dell.” Less than three months later, Apple introduced the iPhone. The very next day, Apple shares hit $97.80, an all-time high for the company. By 2012 – yes, thanks to its hardware business – Apple’s stock had risen to the point that the company was worth a record-breaking $624 billion.

    But somehow, folks – including many, many in education and education technology – still pay attention to Gartner. They still pay Gartner a lot of money for consulting and forecasting services.

    People find comfort in these predictions, in these fantasies. Why?

    Gartner is perhaps best known for its “Hype Cycle,” a proprietary graphic presentation that claims to show how emerging technologies will be adopted.

    According to Gartner, technologies go through five stages: first, there is a “technology trigger.” As the new technology emerges, a lot of attention is paid to it in the press. Eventually it reaches the second stage: the “peak of inflated expectations.” So many promises have been made about this technological breakthrough. Then, the third stage: the “trough of disillusionment.” Interest wanes. Experiments fail. Promises are broken. As the technology matures, the hype picks up again, more slowly – this is the “slope of enlightenment.” Eventually the new technology becomes mainstream – the “plateau of productivity.”

    It’s not that hard to identify significant problems with the Hype Cycle, not least of which is that it’s not a cycle. It’s a curve. It’s not a particularly scientific model. It demands that technologies always move forward along it.

    Gartner says its methodology is proprietary – which is code for “hidden from scrutiny.” Gartner says, rather vaguely, that it relies on scenarios and surveys and pattern recognition to place technologies on the line. But most of the time when Gartner uses the word “methodology,” it is trying to signify “science,” and what it really means is “expensive reports you should buy to help you make better business decisions.”

    Can it really help you make better business decisions? It’s just a curve with some technologies plotted along it. The Hype Cycle doesn’t help explain why technologies move from one stage to another. It doesn’t account for technological precursors – new technologies rarely appear out of nowhere – or political or social changes that might prompt or preclude adoption. And in the end it is simply too optimistic, unreasonably so, I’d argue. No matter how dumb or useless a new technology is, according to the Hype Cycle at least, it will eventually become widely adopted. Where would you plot the Segway, for example? (In 2008, ever hopeful, Gartner insisted that “This thing certainly isn’t dead and maybe it will yet blossom.” Maybe it will, Gartner. Maybe it will.)

    And maybe this gets to the heart of why I’m not a futurist. I don’t share this belief in an increasingly technological future; I don’t believe that more technology means the world gets “more better.” I don’t believe that more technology means that education gets “more better.”

    Every year since 2004, the New Media Consortium, a non-profit organization that advocates for new media and new technologies in education, has issued its own forecasting report, the Horizon Report, naming a handful of technologies that, as the name suggests, it contends are “on the horizon.”

    Unlike Gartner, the New Media Consortium is fairly transparent about how this process works. The organization invites various “experts” to participate in the advisory board that, throughout the course of each year, works on assembling its list of emerging technologies. The process relies on the Delphi method, whittling down a long list of trends and technologies by a process of ranking and voting until six key trends, six emerging technologies remain.
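
    To make that whittling-down concrete, here is a minimal sketch of a single Delphi-style ranking round in Python. Everything in it is my own invention for illustration – the trend names, the expert rankings, and the single-round structure – and the New Media Consortium’s actual multi-round procedure is more elaborate than this.

```python
# Toy Delphi-style round (illustrative only, not the NMC's actual process):
# each expert ranks the candidate trends best-first, the rankings are
# aggregated, and the list is whittled down to the six best-ranked trends.
from collections import defaultdict

def delphi_round(candidates, expert_rankings, keep=6):
    """Keep the `keep` candidates with the lowest (best) summed rank."""
    totals = defaultdict(int)
    for ranking in expert_rankings:
        for position, trend in enumerate(ranking):
            totals[trend] += position
    return sorted(candidates, key=lambda trend: totals[trend])[:keep]

candidates = ["mobile", "open content", "virtual worlds", "games",
              "analytics", "AR", "3D printing", "wearables"]
# Invented expert rankings, best-first.
rankings = [
    candidates,
    list(reversed(candidates)),
    ["analytics", "mobile", "games", "AR", "open content",
     "wearables", "3D printing", "virtual worlds"],
]
print(delphi_round(candidates, rankings))
# -> ['analytics', 'mobile', 'games', 'AR', 'open content', 'wearables']
```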

    Disclosure/disclaimer: I am a folklorist by training. The last time I took a class on “methods” was, like, 1998. And admittedly I never learned about the Delphi method – what the New Media Consortium uses for this research project – until I became a scholar of education technology looking into the Horizon Report. As a folklorist, of course, I did catch the reference to the Oracle of Delphi.

    Like so much of computer technology, the roots of the Delphi method are in the military, developed during the Cold War to forecast technological developments that the military might use and that the military might have to respond to. The military wanted better predictive capabilities. But – and here’s the catch – it wanted to identify technology trends without being caught up in theory. It wanted to identify technology trends without developing models. How do you do that? You gather experts. You get those experts to consensus.

    So here is the consensus from the past twelve years of the Horizon Report for higher education. These are the technologies it has identified that are between one and five years from mainstream adoption:

    It’s pretty easy, as with the Gartner Hype Cycle, to look at these predictions and note that they are almost all wrong in some way or another.

    Some are wrong because, say, the timeline is a bit off. The Horizon Report said in 2010 that “open content” was less than a year away from widespread adoption. I think we’re still inching towards that goal – admittedly “open textbooks” have seen a big push at the federal and at some state levels in the last year or so.

    Some of these predictions are just plain wrong. Virtual worlds in 2007, for example.

    And some are wrong because, to borrow a phrase from the theoretical physicist Wolfgang Pauli, they’re “not even wrong.” Take “collaborative learning,” for example, which this year’s K–12 report posits as a mid-term trend. Like, how would you argue against “collaborative learning” as occurring – now or some day – in classrooms? As a prediction about the future, it is not even wrong.

    But wrong or right – that’s not really the problem. Or rather, it’s not the only problem even if it is the easiest critique to make. I’m not terribly concerned about the accuracy of the predictions about the future of education technology that the Horizon Report has made over the last decade. But I do wonder how these stories influence decision-making across campuses.

    What might these predictions – this history of the future – tell us about the wishful thinking surrounding education technology and about the direction that the people the New Media Consortium views as “experts” want the future to take? What can we learn about the future by looking at the history of our imaginings about education’s future? What role does powerful ed-tech storytelling (also known as marketing) play in shaping that future? Because remember: to predict the future is to control it – to attempt to control the story, to attempt to control what comes to pass.

    It’s both convenient and troubling, then, that these forward-looking reports act as though they have no history of their own; they purposefully minimize or erase their own past. Each year – and I think this is what irks me most – the NMC fails to look back at what it had predicted just the year before. It never revisits older predictions. It never mentions that they even exist. Gartner too removes technologies from the Hype Cycle each year with no explanation for what happened, no explanation as to why trends suddenly appear and disappear and reappear. These reports only look forward, with no history to ground their direction in.

    I understand why these sorts of reports exist, I do. I recognize that they are rhetorically useful to certain people in certain positions making certain claims about “what to do” in the future. You can write in a proposal that, “According to Gartner… blah blah blah.” Or “The Horizon Report indicates that this is one of the most important trends in coming years, and that is why we need to commit significant resources – money and staff – to this initiative.” But then, let’s be honest, these reports aren’t about forecasting a future. They’re about justifying expenditures.

    “The best way to predict the future is to invent it,” computer scientist Alan Kay once famously said. I’d wager that the easiest way is just to make stuff up and issue a press release. I mean, really. You don’t even need the pretense of a methodology. Nobody is going to remember what you predicted. Nobody is going to remember if your prediction was right or wrong. Nobody – certainly not the technology press, which is often painfully unaware of any history, near-term or long ago – is going to take you to task. This is particularly true if you make your prediction vague – like “within our lifetime” – or set your target date just far enough in the future – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Let’s consider: is there something about the field of computer science in particular – and its ideological underpinnings – that makes it more prone to encourage, embrace, espouse these sorts of predictions? Is there something about Americans’ faith in science and technology, about our belief in technological progress as a signal of socio-economic or political progress, that makes us more susceptible to take these predictions at face value? Is there something about our fears and uncertainties – and not just now, days before this Presidential Election where we are obsessed with polls, refreshing Nate Silver’s website obsessively – that makes us prone to seek comfort, reassurance, certainty from those who can claim that they know what the future will hold?

    “Software is eating the world,” investor Marc Andreessen pronounced in a Wall Street Journal op-ed in 2011. “Over the next 10 years,” he wrote, “I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.” “Buy stock in technology companies” was really the underlying message of Andreessen’s op-ed; this isn’t another tech bubble, he wanted to reassure investors. But many in Silicon Valley have interpreted this pronouncement – “software is eating the world” – as an affirmation and an inevitability. I hear it repeated all the time – “software is eating the world” – as though, once again, repeating things makes them true or makes them profound.

    If we believe that, indeed, “software is eating the world,” that we are living in a moment of extraordinary technological change, that we must – according to Gartner or the Horizon Report – be ever-vigilant about emerging technologies, that these technologies are contributing to uncertainty, to disruption, then it seems likely that we will demand a change in turn to our educational institutions (to lots of institutions, but let’s just focus on education). This is why this sort of forecasting is so important for us to scrutinize – to do so quantitatively and qualitatively, to look at methods and at theory, to ask who’s telling the story and who’s spreading the story, to listen for counter-narratives.

    This technological change, according to some of the most popular stories, is happening faster than ever before. It is creating an unprecedented explosion in the production of information. New information technologies, so we’re told, must therefore change how we learn – change what we need to know, how we know, how we create and share knowledge. Because of the pace of change and the scale of change and the locus of change (that is, “Silicon Valley” not “The Ivory Tower”) – again, so we’re told – our institutions, our public institutions can no longer keep up. These institutions will soon be outmoded, irrelevant. Again – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    These forecasting reports, these predictions about the future make themselves necessary through this powerful refrain, insisting that technological change is creating so much uncertainty that decision-makers need to be ever vigilant, ever attentive to new products.

    As Neil Postman and others have cautioned us, technologies tend to become mythic – unassailable, God-given, natural, irrefutable, absolute. So it is predicted. So it is written. Techno-scripture, to which we hand over a certain level of control – to the technologies themselves, sure, but just as importantly to the industries and the ideologies behind them. Take, for example, the founding editor of the technology trade magazine Wired, Kevin Kelly. His 2010 book was called What Technology Wants, as though technology is a living being with desires and drives; the title of his 2016 book: The Inevitable. We humans, in this framework, have no choice. The future – a certain flavor of technological future – is pre-ordained. Inevitable.

    I’ll repeat: I am not a futurist. I don’t make predictions. But I can look at the past and at the present in order to dissect stories about the future.

    So is the pace of technological change accelerating? Is society adopting technologies faster than it’s ever done before? Perhaps it feels like it. It certainly makes for a good headline, a good stump speech, a good keynote, a good marketing claim, a good myth. But the claim starts to fall apart under scrutiny.

    This graph comes from an article in the online publication Vox that includes a couple of those darling made-to-go-viral videos of young children using “old” technologies like rotary phones and portable cassette players – highly clickable, highly sharable stuff. The visual argument in the graph: the number of years it takes for one quarter of the US population to adopt a new technology has been shrinking with each new innovation.

    But the data is flawed. Some of the dates given for these inventions are questionable at best, if not outright inaccurate. If nothing else, it’s not so easy to pinpoint the exact moment, the exact year when a new technology came into being. There often are competing claims as to who invented a technology and when, for example, and there are early prototypes that may or may not “count.” James Clerk Maxwell did publish A Treatise on Electricity and Magnetism in 1873. Alexander Graham Bell made his famous telephone call to his assistant in 1876. Guglielmo Marconi did file his patent for radio in 1897. John Logie Baird demonstrated a working television system in 1926. The MITS Altair 8800, an early personal computer that came as a kit you had to assemble, was released in 1975. But Martin Cooper, a Motorola exec, made the first mobile telephone call in 1973, not 1983. And the Internet? The first ARPANET link was established between UCLA and the Stanford Research Institute in 1969. The Internet was not invented in 1991.

    So we can reorganize the bar graph. But it’s still got problems.

    The Internet did become more privatized, more commercialized around that date – 1991 – and thanks to companies like AOL, a version of it became more accessible to more people. But if you’re looking at when technologies became accessible to people, you can’t use 1873 as your date for electricity, you can’t use 1876 as your year for the telephone, and you can’t use 1926 as your year for the television. It took years for the infrastructure of electricity and telephony to be built, for access to become widespread; and subsequent technologies, let’s remember, have simply piggy-backed on these existing networks. Our Internet service providers today are likely telephone and TV companies; our houses are already wired for new WiFi-enabled products and predictions.

    Economic historians who are interested in these sorts of comparisons of technologies and their effects typically set the threshold at 50% – that is, how long does it take after a technology is commercialized (not simply “invented”) for half the population to adopt it. This way, you’re not only looking at the economic behaviors of the wealthy, the early-adopters, the city-dwellers, and so on (but to be clear, you are still looking at a particular demographic – the privileged half.)
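
    As a minimal sketch of that measurement – my own invented numbers, not the economic historians’ actual data or method – the calculation looks something like this:

```python
# Toy illustration: how long did a technology take to go from
# commercialization to adoption by half of US households?
# The adoption figures below are invented for demonstration.

def years_to_half_adoption(adoption_by_year, commercialized, threshold=0.5):
    """Return years from commercialization until the household-adoption
    share first reaches `threshold`, or None if it never does."""
    for year in sorted(adoption_by_year):
        if year >= commercialized and adoption_by_year[year] >= threshold:
            return year - commercialized
    return None

# Hypothetical adoption curve for the telephone (illustrative only).
telephone = {1900: 0.05, 1920: 0.35, 1946: 0.50, 1960: 0.80}
print(years_to_half_adoption(telephone, commercialized=1878))  # -> 68
```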

    And that changes the graph again:

    How many years do you think it’ll be before half of US households have a smart watch? A drone? A 3D printer? Virtual reality goggles? A self-driving car? Will they ever? Will it take fewer than nine years? I mean, it would have to, if, indeed, “technology” is speeding up and we are adopting new technologies faster than ever before.

    Some of us might adopt technology products quickly, to be sure. Some of us might eagerly buy every new Apple gadget that’s released. But we can’t claim that the pace of technological change is speeding up just because we personally go out and buy a new iPhone every time Apple tells us the old model is obsolete. Removing the headphone jack from the latest iPhone does not mean “technology changing faster than ever,” nor does showing how headphones have changed since the 1970s. None of this is really a reflection of the pace of change; it’s a reflection of our disposable income and an ideology of obsolescence.

    Some economic historians like Robert J. Gordon actually contend that we’re not in a period of great technological innovation at all; instead, we find ourselves in a period of technological stagnation. The changes brought about by the development of information technologies in the last 40 years or so pale in comparison, Gordon argues (and this is from his recent book The Rise and Fall of American Growth: The US Standard of Living Since the Civil War), to those “great inventions” that powered massive economic growth and tremendous social change in the period from 1870 to 1970 – namely electricity, sanitation, chemicals and pharmaceuticals, the internal combustion engine, and mass communication. But that doesn’t jibe with “software is eating the world,” does it?

    Let’s return briefly to those Horizon Report predictions again. They certainly reflect this belief that technology must be speeding up. Every year, there’s something new. There has to be. That’s the purpose of the report. The horizon is always “out there,” off in the distance.

    But if you squint, you can see each year’s report also reflects a decided lack of technological change. Every year, something is repeated – perhaps rephrased. And look at the predictions about mobile computing:

    • 2006 – the phones in their pockets
    • 2007 – the phones in their pockets
    • 2008 – oh crap, we don’t have enough bandwidth for the phones in their pockets
    • 2009 – the phones in their pockets
    • 2010 – the phones in their pockets
    • 2011 – the phones in their pockets
    • 2012 – the phones too big for their pockets
    • 2013 – the apps on the phones too big for their pockets
    • 2015 – the phones in their pockets
    • 2016 – the phones in their pockets

    This hardly makes the case for technological speeding up, for technology changing faster than it’s ever changed before. But that’s the story that people tell nevertheless. Why?

    I pay attention to this story, as someone who studies education and education technology, because I think these sorts of predictions, these assessments about the present and the future, frequently serve to define, disrupt, and destabilize our institutions. This is particularly pertinent to our schools, which are already caught between a boundedness to the past – replicating scholarship and cultural capital, for example – and the demand that they bend to the future – preparing students for civic, economic, and social relations yet to be determined.

    But I also pay attention to these sorts of stories because there’s that part of me that is horrified at the stuff – predictions – that people pass off as true or as inevitable.

    “65% of today’s students will be employed in jobs that don’t exist yet.” I hear this statistic cited all the time. And it’s important, rhetorically, that it’s a statistic – that gives the appearance of being scientific. Why 65%? Why not 72% or 53%? How could we even know such a thing? Some people cite this as a figure from the Department of Labor. It is not. I can’t find its origin – but it must be true: a futurist said it in a keynote, and the video was posted to the Internet.

    The statistic is particularly amusing when quoted alongside one of the many predictions we’ve been inundated with lately about the coming automation of work. In 2014, The Economist asserted that “nearly half of American jobs could be automated in a decade or two.” “Before the end of this century,” Wired Magazine’s Kevin Kelly announced earlier this year, “70 percent of today’s occupations will be replaced by automation.”

    Therefore the task for schools – and I hope you can start to see where these different predictions start to converge – is to prepare students for a highly technological future, a future that has been almost entirely severed from the systems and processes and practices and institutions of the past. And if schools cannot conform to this particular future, then “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Now, I don’t believe that there’s anything inevitable about the future. I don’t believe that Moore’s Law – that the number of transistors on an integrated circuit doubles every two years and therefore computers are always exponentially smaller and faster – is actually a law. I don’t believe that robots will take, let alone need take, all our jobs. I don’t believe that YouTube has rendered school irrevocably out-of-date. I don’t believe that technologies are changing so quickly that we should hand over our institutions to entrepreneurs, privatize our public sphere for techno-plutocrats.

    I don’t believe that we should cheer Elon Musk’s plans to abandon this planet and colonize Mars – he’s predicted he’ll do so by 2026. I believe we stay and we fight. I believe we need to recognize this as an ego-driven escapist evangelism.

    I believe we need to recognize that predicting the future is a form of evangelism as well. Sure, it gets couched in terms of science, and it is underwritten by global capitalism. But it’s a story – a story that then takes on these mythic proportions, insisting that it is unassailable, unverifiable, but true.

    The best way to invent the future is to issue a press release. The best way to resist this future is to recognize that, once you poke at the methodology and the ideology that underpins it, a press release is all that it is.

    A special thanks to Tressie McMillan Cottom and David Golumbia for organizing this talk. And to Mike Caulfield for always helping me hash out these ideas.
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.

    Back to the essay

  • Ending the World as We Know It: Alexander R. Galloway in Conversation with Andrew Culp

    Ending the World as We Know It: Alexander R. Galloway in Conversation with Andrew Culp

    by Alexander R. Galloway and Andrew Culp
    ~

    Alexander R. Galloway: You have a new book called Dark Deleuze (University of Minnesota Press, 2016). I particularly like the expression “canon of joy” that guides your investigation. Can you explain what canon of joy means and why it makes sense to use it when talking about Deleuze?

    Andrew Culp, Dark Deleuze (University of Minnesota Press, 2016)

    Andrew Culp: My opening is cribbed from a letter Gilles Deleuze wrote to philosopher and literary critic Arnaud Villani in the early 1980s. Deleuze suggests that any worthwhile book must have three things: a polemic against an error, a recovery of something forgotten, and an innovation. Proceeding along those three lines, I first argue against those who worship Deleuze as the patron saint of affirmation, second I rehabilitate the negative that already saturates his work, and third I propose something he himself was not capable of proposing, a “hatred for this world.” So in an odd twist of Marx on history, I begin with those who hold up Deleuze as an eternal optimist, yet not to stand on their shoulders but to topple the church of affirmation.

    The canon portion of “canon of joy” is not unimportant. Perhaps more than any other recent thinker, Deleuze queered philosophy’s line of succession. A large portion of his books were commentaries on outcast thinkers that he brought back from exile. Deleuze was unwilling to discard Nietzsche as a fascist, Bergson as a spiritualist, or Spinoza as a rationalist. Apparently this led to lots of teasing by fellow agrégation students at the Sorbonne in the late ’40s. Further showing his strange journey through the history of philosophy, his only published monograph for nearly a decade was an anti-transcendental reading of Hume at a time in France when phenomenology reigned. Such an itinerant path made it easy to take Deleuze at his word as a self-professed practitioner of “minor philosophy.” Yet look at Deleuze’s outcasts now! His initiation into the pantheon even bought admission for relatively forgotten figures such as sociologist Gabriel Tarde. Deleuze’s popularity thus raises a thorny question for us today: how do we continue the minor Deleuzian line when Deleuze has become a “major thinker”? For me, the first step is to separate Deleuze (and Guattari) from his commentators.

    I see two popular joyous interpretations of Deleuze in the canon: unreconstructed Deleuzians committed to liberating flows, and realists committed to belief in this world. The first position repeats the language of molecular revolution, becoming, schizos, transversality, and the like. Some even use the terms without transforming them! The resulting monotony seals Deleuze and Guattari’s fate as a wooden tongue used by people still living in the ’80s. Such calcification of their concepts is an especially grave injustice because Deleuze quite consciously shifted terminology from book to book to avoid this very outcome. Don’t get me wrong, I am deeply indebted to the early work on Deleuze! I take my insistence on the Marxo-Freudian core of Deleuze and Guattari from one of their earliest Anglophone commentators, Eugene Holland, who I sought out to direct my dissertation. But for me, the Tiqqun line “the revolution was molecular, and so was the counter-revolution” perfectly depicts the problem of advocating molecular politics. Why? Today’s techniques of control are now molecular. The result is that control societies have emptied the molecular thinker’s only bag of tricks (Bifo is a good test case here), which leaves us with a revolution that only goes one direction: backward.

    I am equally dissatisfied by realist Deleuzians who delve deep into the early strata of A Thousand Plateaus and away from the “infinite speed of thought” that motivates What is Philosophy? I’m thinking of the early incorporations of dynamical systems theory, the ’90s astonishment over everything serendipitously looking like a rhizome, the mid-00s emergence of Speculative Realism, and the ongoing “ontological” turn. Anyone who has read Manuel DeLanda will know this exact dilemma of materiality versus thought. He uses examples that slow down Deleuze and Guattari’s concepts to something easily graspable. In his first book, he narrates history as a “robot historian,” and in A Thousand Years of Nonlinear History, he literally traces the last thousand years of economics, biology, and language back to clearly identifiable technological inventions. Such accounts are dangerously compelling due to their lucidity, but they come at a steep cost: android realism dispenses with Deleuze and Guattari’s desiring subject, which is necessary for a theory of revolution by way of the psychoanalytic insistence on the human ability to overcome biological instincts (e.g. Freud’s Instincts and their Vicissitudes and Beyond the Pleasure Principle). Realist interpretations of Deleuze conceive of the subject as fully of this world. And with it, thought all but evaporates under the weight of this world. Deleuze’s Hume book is an early version of this criticism, but the realists have not taken heed. Whether emergent, entangled, or actant, strong realists ignore Deleuze and Guattari’s point in What is Philosophy? that thought always comes from the outside at a moment when we are confronted by something so intolerable that the only thing remaining is to think.

    Galloway: The left has always been ambivalent about media and technology, sometimes decrying its corrosive influence (Frankfurt School), sometimes embracing its revolutionary potential (hippy cyberculture). Still, you ditch technical “acceleration” in favor of “escape.” Can you expand your position on media and technology, by way of Deleuze’s notion of the machinic?

    Culp: Foucault says that an episteme can be grasped as we are leaving it. Maybe we can finally catalogue all of the contemporary positions on technology? The romantic (computer will never capture my soul), the paranoiac (there is an unknown force pulling the strings), the fascist-pessimist (computers will control everything)…

    Deleuze and Guattari are certainly not allergic to technology. My favorite quote actually comes from the Foucault book, in which Deleuze says that “technology is social before it is technical” (6). The lesson we can draw from this is that every social formation draws out different capacities from any given technology. An easy example is from the nomads Deleuze loved so much. Anarcho-primitivists speculate that humans learned oppression with the domestication of animals and settled agriculture during the Neolithic Revolution. Diverging from that narrative, Deleuze celebrates the horse people of the Eurasian steppe described by Arnold Toynbee. Threatened by forces that would require them to change their habitat, Toynbee says, they instead chose to change their habits. The subsequent domestication of the horse did not sow the seeds of the state; that was actually done by those who migrated from the steppes after the last Ice Age to begin wet rice cultivation in alluvial valleys (for more, see James C. Scott’s The Art of Not Being Governed). On the contrary, the new relationship between men and horses allowed nomadism to achieve a higher speed, which was necessary to evade the raiding-and-trading used by padi-states to secure the massive foreign labor needed for rice farming. This is why the nomad is “he who does not move” and not a migrant (A Thousand Plateaus, 381).

    Accelerationism attempts to overcome the capitalist opposition of human and machine through the demand for full automation. As such, it peddles a technological Proudhonism that believes one can select what is good about technology and simply delete what is bad. The Marxist retort is that development proceeds by its bad side. So instead of flashy things like self-driving cars, the real dot-communist question is: how will Amazon automate the tedious, low-paying jobs that computers are no good at? What happens to the data entry clerks, abusive-content managers, or help desk technicians? Until it figures out who will empty the recycle bin, accelerationism is only a socialism of the creative class.

    The machinic is more than just machines – it approaches technology as a question of organization. The term was first used by Guattari in a 1968 paper titled “Machine and Structure” that he presented to Lacan’s Freudian School of Paris, a paper that would jumpstart his collaboration with Deleuze. He argues for favoring the machine over structure. Structures transform parts of a whole by exchanging or substituting particularities so that every part shares in a general form (in other words, the production of isomorphism). An easy political example is the Leninist Party, which mediates particularized private interests to form them into the general will of a class. Machines instead treat the relationship between things as a problem of communication. The result is the “control and communication” of Norbert Wiener’s cybernetics, which connects distinct things in a circuit instead of implanting a general logic. The word “machine” never really caught on, but the concept has made inroads in the social sciences, where actor-network theory, game theory, behaviorism, systems theory, and other cybernetic approaches have gained acceptance.

    Structure or machine, each engenders a different type of subjectivity, and each realizes a different model of communication. The two are found in A Thousand Plateaus, where Deleuze and Guattari note two different types of state subject formation: social subjection and machinic enslavement (456-460). While it only takes up a few short pages, the distinction is essential to Bernard Stiegler’s work and has been expertly elaborated by Maurizio Lazzarato in the book Signs and Machines. We are all familiar with molar social subjection, synonymous with “agency” – it is the power that results from individuals bridging the gap between themselves and broader structures of representation, social roles, and institutional demands. This subjectivity is well outlined by Lacanians and other theorists of the linguistic turn (Virno, Rancière, Butler, Agamben). Missing from their accounts is machinic enslavement, which treats people as simply cogs in the machine. Such subjectivity is largely overlooked because it bypasses existential questions of recognition or self-identity. This is because machinic enslavement operates at the level of the infra-social or pre-individual, through the molecular operators of unindividuated affects, sensations, and desires not assigned to a subject. Offering a concrete example, Deleuze and Guattari reference Mumford’s megamachines of surplus societies that create huge landworks by treating humans as mere constituent parts. Capitalism revived the megamachine in the sixteenth century, and more recently, we have entered the “third age” of enslavement, marked by the development of cybernetic and informational machines. In place of the pyramids are technical machines that use humans at places in technical circuits where computers are incapable or too costly, e.g. Amazon’s Mechanical Turk.

    I should also clarify that not all machines are bad. Rather, Dark Deleuze only trusts one kind of machine, the war machine. And war machines follow a single trajectory–a line of flight out of this world. A major task of the war machine conveniently aligns with my politics of techno-anarchism: to blow apart the networks of communication created by the state.

    Galloway: I can’t resist a silly pun, cannon of joy. Part of your project is about resisting a certain masculinist tendency. Is that a fair assessment? How do feminism and queer theory influence your project?

    Culp: Feminism is hardwired into the tagline for Dark Deleuze through a critique of emotional labor and the exhibition of bodies–“A revolutionary Deleuze for today’s digital world of compulsory happiness, decentralized control, and overexposure.” The major thread I pull through the book is a materialist feminist one: something intolerable about this world is that it demands we participate in its accumulation and reproduction. So how about a different play on words: Sara Ahmed’s feminist killjoy, who refuses the sexual contract that requires women to appear outwardly grateful and agreeable? Or better yet, Joy Division? The name would associate the project with post-punk, its conceptual attack on the mainstream, and the band’s nod to the sexual labor depicted in the novella House of Dolls.

    My critique of accumulation is also a media argument about connection. The most popular critics of ‘net culture are worried that we are losing ourselves. So on the one hand, we have Sherry Turkle, who is worried that humans are becoming isolated in a state of being “alone-together”; and on the other, there is Bernard Stiegler, who thinks that the network supplants important parts of what it means to be human. I find this kind of critique socially conservative. It also victim-blames those who use social media the most. Recall the countless articles attacking women who take selfies as part of a self-care regimen or teens who creatively evade parental authority. I’m more interested in the critique of early ’90s ‘net culture and its enthusiasm for the network. In general, I argue that network-centric approaches are now the dominant form of power. As such, I am much more interested in how the rhizome prefigures the digitally-coordinated networks of exploitation that have made Apple, Amazon, and Google into the world’s most powerful corporations. While not a feminist issue on its face, it’s easy to see feminism’s relevance when we consider the gendered division of labor that usually makes women the employees of choice for low-paying jobs in electronics manufacturing, call centers, and other digital industries.

    Lastly, feminism and queer theory explicitly meet in my critique of reproduction. A key argument of Deleuze and Guattari in Anti-Oedipus is the auto-production of the real, which is to say, we already live in a “world without us.” My argument is that we need to learn how to hate some of the things it produces. Of course, this is a reworked critique of capitalist alienation and exploitation: a system that gives to us (goods and the wage) only because it has already stolen them behind our backs (restriction from the means of subsistence and surplus value). Such ambivalence is the everyday reality of the maquiladora worker who needs her job but may secretly hope that all the factories burn to the ground. Such degrading feelings are the result of the compromises we make to reproduce ourselves. In the book, I give voice to them by fusing together David Halperin and Valerie Traub’s notion of gay shame, which acts as a solvent to whatever binds us to identity, and Deleuze’s shame at not being able to prevent the intolerable. But feeling shame is not enough. To complete the argument, we need to draw out the queer feminist critique of reproduction latent in Marx and Freud. Détourning an old phrase: direct action begins at the point of reproduction. My first impulse is to rely on the punk rock attitude of Lee Edelman and Paul Preciado’s indictment of reproduction. But you are right that they have their masculinist moments, so what we need is something more post-punk – a little less aggressive and a lot more experimental. Hopefully Dark Deleuze is that.

    Galloway: Edelman’s “fuck Annie” is one of the best lines in recent theory. “Fuck the social order and the Child in whose name we’re collectively terrorized; fuck Annie; fuck the waif from Les Mis; fuck the poor, innocent kid on the Net; fuck Laws both with capital ls and small; fuck the whole network of Symbolic relations and the future that serves as its prop” (No Future, 29). Your book claims, in essence, that the Fuck Annies are more interesting than the Aleatory Materialists. But how can we escape the long arm of Lucretius?

    Culp: My feeling is that the politics of aleatory materialism remains ambiguous. Beyond the literal meaning of “joy,” there are important feminist takes on the materialist Spinoza of the encounter that deserve our attention. Isabelle Stengers’s work is among the most comprehensive, though the two most famous are probably Donna Haraway’s cyborg feminism and Karen Barad’s agential realism. Curiously, while New Materialism has been quite a boon for the art and design world, its socio-political stakes have never been more uncertain. One would hope that appeals to matter would lend philosophical credence to topical events such as #blacklivesmatter. Yet for many, New Materialism has simply led to a new formalism focused on material forms or realist accounts of physical systems meant to eclipse the “epistemological excesses” of post-structuralism. This divergence was not lost on commentators in the most recent issue of October, which functioned as a sort of referendum on New Materialism. On the one hand, the issue included a generous accounting of the many avenues artists have taken in exploring various “new materialist” directions. Of those, I most appreciated Mel Chen’s reminder that materialism cannot serve as a “get out of jail free card” on the history of racism, sexism, ablism, and speciesism. On the other, it included the first sustained attack on New Materialism by fellow travelers. Certainly the New Materialist stance of seeing the world from the perspective of “real objects” can be valuable, but only if it does not exclude old materialism’s politics of labor. I draw from Deleuzian New Materialist feminists in my critique of accumulation and reproduction, but only after short-circuiting their world-building. This is a move I learned from Sue Ruddick, whose Theory, Culture & Society article on the affect of the philosopher’s scream is an absolute tour de force. And then there is Graham Burnett’s remark that recent materialisms are like “Etsy kissed by philosophy.” The phrase perfectly crystallizes the controversy, but it might be too hot to touch for at least a decade…

    Galloway: Let’s focus more on the theme of affirmation and negation, since the tide seems to be changing. In recent years, a number of theorists have turned away from affirmation toward a different set of vectors such as negation, eclipse, extinction, or pessimism. Have we reached peak affirmation?

    Culp: We should first nail down what affirmation means in this context. There is the metaphysical version of affirmation, such as Foucault’s proud title as a “happy positivist.” In this declaration in Archaeology of Knowledge and “The Order of Discourse,” he is not claiming to be a logical positivist. Rather, Foucault is distinguishing his approach from Sartrean totality, transcendentalism, and genetic origins (his secondary target being the reading-between-the-lines method of Althusserian symptomatic reading). He goes on to formalize this disagreement in his famous statement on the genealogical method, “Nietzsche, Genealogy, History.” Despite being an admirer of Sartre, Deleuze shares this affirmative metaphysics with Foucault, which commentators usually describe as an alternative to the Hegelian system of identity, contradiction, determinate negation, and sublation. Nothing about this “happily positivist” system forces us to be optimists. In fact, it only raises the stakes for locating how all the non-metaphysical senses of the negative persist.

    Affirmation could be taken to imply a simple “more is better” logic as seen in Assemblage Theory and Latourian Compositionalism. Behind this logic is a principle of accumulation that lacks a theory of exploitation and fails to consider the power of disconnection. The Spinozist definition of joy does little to dispel this myth, but it is not like either project has revolutionary political aspirations. I think we would be better served to follow the currents of radical political developments over the last twenty years, which have been following an increasingly negative path. One part of the story is a history of failure. The February 15, 2003 global demonstration against the Iraq War was the largest protest in history but had no effect on the course of the war. More recently, the election of democratic socialist governments in Europe has done little to stave off austerity, even as economists publicly describe it as a bankrupt model destined to deepen the crisis. I actually find hope in the current circuit of struggle and think that its lack of alter-globalization world-building aspirations might be a plus. My cues come from the anarchist black bloc and those of the post-Occupy generation who would rather not pose any demands. This is why I return to the late Deleuze of the “control societies” essay and his advice to scramble the codes, to seek out spaces where nothing needs to be said, and to establish vacuoles of non-communication. Those actions feed the subterranean source of Dark Deleuze‘s darkness and the well from which comes hatred, cruelty, interruption, un-becoming, escape, cataclysm, and the destruction of worlds.

Galloway: Does hatred for the world do for you the work that judgment or moralism does in other writers? How do we avoid the more violent and corrosive forms of hate?

Culp: Writer Antonin Artaud’s attempt “to have done with the judgment of God” plays a crucial role in Dark Deleuze. Not just any specific authority but whatever gods are left. The easiest way to summarize this is “the three deaths.” Deleuze already makes note of these deaths in the preface to Difference and Repetition, but it only became clear to me after I read Gregory Flaxman’s Gilles Deleuze and the Fabulation of Philosophy. We all know of Nietzsche’s Death of God. With it, Nietzsche notes that God no longer serves as the central organizing principle for us moderns. Important to Dark Deleuze is Pierre Klossowski’s Nietzsche, who is part of a conspiracy against all of humanity. Why? Because even as God is dead, humanity has replaced him with itself. Next comes the Death of Man, which we can lay at the feet of Foucault. More than any other text, The Order of Things demonstrates how the birth of modern man was an invention doomed to fail. So if that death is already written in sand about to be washed away, then what comes next? Here I turn to the world, worlding, and world-building. It seems obvious when looking at the problems that plague our world: global climate change, integrated world capitalism, and other planet-scale catastrophes. We could try to deal with each problem one by one. But why not pose an even more radical proposition? What if we gave up on trying to save this world? We are already awash in sci-fi that tries to do this, though most of it is incredibly socially conservative. Perhaps now is the time for thinkers like us to catch up. Fragments of Deleuze already lay out the terms of the project. He ends the preface to Difference and Repetition by assigning philosophy the task of writing apocalyptic science fiction. Deleuze’s book opens with lightning across the black sky and ends with the world swelling into a single ocean of excess. Dark Deleuze collects those moments and names their sum the Death of This World.

    Galloway: Speaking of climate change, I’m reminded how ecological thinkers can be very religious, if not in word then in deed. Ecologists like to critique “nature” and tout their anti-essentialist credentials, while at the same time promulgating tellurian “change” as necessary, even beneficial. Have they simply replaced one irresistible force with another? But your “hatred of the world” follows a different logic…

Culp: Irresistible indeed! Yet it is very dangerous to let the earth have the final say. Not only does psychoanalysis teach us that it is necessary to buck the judgment of nature; the is/ought distinction at the philosophical core of most ethical thought also refuses to let natural fact define the good. I introduce hatred to develop a critical distance from what is, and, as such, hatred is also a reclamation of the future in that it is a refusal to allow what-is to prevail over what-could-be. Such an orientation to the future is already in Deleuze and Guattari. What else is de-territorialization? I just give it a name. They have another name for what I call hatred: utopia.

Speaking of utopia, Deleuze and Guattari’s definition of utopia in What Is Philosophy? as simultaneously now-here and no-where is often used by commentators to justify odd compromise positions with the present state of affairs. The immediate reference is Samuel Butler’s 1872 book Erewhon, an anagram of “nowhere,” which Deleuze also references across his other work. I would imagine most people assume it is a utopian novel in the vein of Edward Bellamy’s Looking Backward. And Erewhon does borrow from the conventions of utopian literature, but only to skewer them with satire. A closer examination reveals that the book is really a jab at religion, Victorian values, and the British colonization of New Zealand! So if there is anything that the now-here of Erewhon has to contribute to utopia, it is that the present deserves our ruthless criticism. Instead of being a simultaneous now-here and no-where, then, hatred follows from Deleuze and Guattari’s suggestion in A Thousand Plateaus to “overthrow ontology” (25). Utopia is only found in Erewhon by taking leave of the now-here to get to no-where.

    Galloway: In Dark Deleuze you talk about avoiding “the liberal trap of tolerance, compassion, and respect.” And you conclude by saying that the “greatest crime of joyousness is tolerance.” Can you explain what you mean, particularly for those who might value tolerance as a virtue?

    Culp: Among the many followers of Deleuze today, there are a number of liberal Deleuzians. Perhaps the biggest stronghold is in political science, where there is a committed group of self-professed radical liberals. Another strain bridges Deleuze with the liberalism of John Rawls. I was a bit shocked to discover both of these approaches, but I suppose it was inevitable given liberalism’s ability to assimilate nearly any form of thought.

Herbert Marcuse recognized “repressive tolerance” as the incredible power of liberalism to justify the violence of positions clothed as neutral. The examples Marcuse cites are governments that claim to respect democratic liberties because they allow political protest, even as they ignore protesters by labeling them a special interest group. For those of us who have seen university administrations calmly collect student demands, set up dead-end committees, and slap pictures of protesters on promotional materials as a badge of diversity, it should be no surprise that Marcuse dedicated the essay to his students. An important elaboration on repressive tolerance is Wendy Brown’s Regulating Aversion. She argues that imperialist US foreign policy drapes itself in tolerance discourse. This helps diagnose why liberal feminist groups lined up behind the US invasion of Afghanistan (the Taliban is patriarchal) and explains how a mere utterance of ISIS inspires even the most progressive liberals to support outrageous war budgets.

    Because of their commitment to democracy, Brown and Marcuse can only qualify liberalism’s universal procedures for an ethical subject. Each criticizes certain uses of tolerance but does not want to dispense with it completely. Deleuze’s hatred of democracy makes it much easier for me. Instead, I embrace the perspective of a communist partisan because communists fight from a different structural position than that of the capitalist.

    Galloway: Speaking of structure and position, you have a section in the book on asymmetry. Most authors avoid asymmetry, instead favoring concepts like exchange or reciprocity. I’m thinking of texts on “the encounter” or “the gift,” not to mention dialectics itself as a system of exchange. Still you want to embrace irreversibility, incommensurability, and formal inoperability–why?

    Culp: There are a lot of reasons to prefer asymmetry, but for me, it comes down to a question of political strategy.

First, a little background. Deleuze and Guattari’s critique of exchange is important to Anti-Oedipus, a critique they staged through a challenge to Claude Lévi-Strauss. This is why they shift from the traditional Marxist analysis of the mode of production to an anthropological study of anti-production, in which they use the work of Pierre Clastres and Georges Bataille to outline non-economic forms of power that prevented the emergence of capitalism. Contemporary anthropologists have renewed this line of inquiry, for instance, Eduardo Viveiros de Castro, who argues in Cannibal Metaphysics that cosmologies differ radically enough between peoples that they essentially live in different worlds. The cannibal, he shows, is not the subject of a mode of production but of a mode of predation.

    Those are not the stakes that interest me the most. Consider instead the consequence of ethical systems built on the gift and political systems of incommensurability. The ethical approach is exemplified by Derrida, whose responsibility to the other draws from the liberal theological tradition of accepting the stranger. While there is distance between self and other, it is a difference that is bridged through the democratic project of radical inclusion, even if such incorporation can only be aporetically described as a necessary-impossibility. In contrast, the politics of asymmetry uses incommensurability to widen the chasm opened by difference. It offers a strategy for generating antagonism without the formal equivalence of dialectics and provides an image of revolution based on fundamental transformation. The former can be seen in the inherent difference between the perspective of labor and the perspective of capital, whereas the latter is a way out of what Guy Debord calls “a perpetual present.”

    Galloway: You are exploring a “dark” Deleuze, and I’m reminded how the concepts of darkness and blackness have expanded and interwoven in recent years in everything from afro-pessimism to black metal theory (which we know is frighteningly white). How do you differentiate between darkness and blackness? Or perhaps that’s not the point?

Culp: The writing on Deleuze and race is uneven. A lot of it can be blamed on the imprecise definition of becoming. The most vulgar version of becoming is embodied by neoliberal subjects who undergo an always-incomplete process of coming more into being (finding themselves, identifying their capacities, commanding their abilities). The molecular version is a bit better in that it theorizes subjectivity as developing outside of or in tension with identity. Yet the prominent uses of becoming and race rarely escaped the postmodern orbit of hybridity, difference, and inclusive disjunction–the White Man’s face as master signifier, miscegenation as anti-racist practice, “I am all the names of history.” You are right to mention afro-pessimism, as it cuts a new way through the problem. As I’ve written elsewhere, Frantz Fanon describes being caught between “infinity and nothingness” in his famous chapter on the fact of blackness in Black Skin White Masks. The position of infinity is best championed by Fred Moten, whose black fugitive is the effect of an excessive vitality that has survived five hundred years of captivity. He catches fleeting moments of it in performances of jazz, art, and poetry. This position fits well with the familiar figures of Deleuzo-Guattarian politics: the itinerant nomad, the foreigner speaking in a minor tongue, the virtuoso trapped in-between lands. In short: the bastard combination of two or more distinct worlds. In contrast, afro-pessimism is not the opposite of the black radical tradition but its outside. According to afro-pessimism, the definition of blackness is nothing but the social death of captivity. Remember the scene of subjection mentioned by Fanon? During that nauseating moment he is assailed by a whole series of cultural associations attached to him by strangers on the street. “I was battered down by tom-toms, cannibalism, intellectual deficiency, fetishism, racial defects, slave-ships, and above all else, above all: ‘Sho’ good eatin”” (112). The lesson that afro-pessimism draws from this scene is that cultural representations of blackness only reflect back the interior of white civil society. The conclusion is that combining social death with a culture of resistance, such as the one embodied by Fanon’s mentor Aimé Césaire, is a trap that leads only back to whiteness. Afro-pessimism thus follows the alternate route of darkness. It casts a line to the outside through an un-becoming that dissolves the identity we are given as a token for the shame of being a survivor.

    Galloway: In a recent interview the filmmaker Haile Gerima spoke about whiteness as “realization.” By this he meant both realization as such–self-realization, the realization of the self, the ability to realize the self–but also the more nefarious version as “realization through the other.” What’s astounding is that one can replace “through” with almost any other preposition–for, against, with, without, etc.–and the dynamic still holds. Whiteness is the thing that turns everything else, including black bodies, into fodder for its own realization. Is this why you turn away from realization toward something like profanation? And is darkness just another kind of whiteness?

    Culp: Perhaps blackness is to the profane as darkness is to the outside. What is black metal if not a project of political-aesthetic profanation? But as other commentators have pointed out, the politics of black metal is ultimately telluric (e.g. Benjamin Noys’s “‘Remain True to the Earth!’: Remarks on the Politics of Black Metal”). The left wing of black metal is anarchist anti-civ and the right is fascist-nativist. Both trace authority back to the earth that they treat as an ultimate judge usurped by false idols.

The process follows what Badiou calls “the passion for the real,” his diagnosis of the twentieth century’s obsession with true identity, false copies, and inauthentic fakes. His critique applies equally to Deleuzian realists. This is why I think it is essential to return to Deleuze’s work on cinema and the powers of the false. One key example is Orson Welles’s F for Fake. Yet my favorite is the noir novel, which he praises in “The Philosophy of Crime Novels.” The noir protagonist never follows in the footsteps of Sherlock Holmes or the other classical detectives, whose search for the real proceeds by sniffing out the truth through a scientific attunement of the senses. Rather, the dirty streets lead the detective down enough dead ends that he proceeds by way of a series of errors. What noir reveals is that crime and the police have “nothing to do with a metaphysical or scientific search for truth” (82). The truth is rarely decisive in noir because breakthroughs only come by way of “the great trinity of falsehood”: informant-corruption-torture. The ultimate gift of noir is a new vision of the world whereby honest people are just dupes of the police because society is fueled by falsehood all the way down.

To specify the descent into darkness: I use darkness to signify the outside. The outside has many names: the contingent, the void, the unexpected, the accidental, the crack-up, the catastrophe. The dominant affects associated with it are anticipation, foreboding, and terror. To give a few examples, H. P. Lovecraft’s scariest monsters are those so alien that characters cannot describe them with any clarity, Maurice Blanchot’s disaster is the Holocaust as well as any other event so terrible that it interrupts thinking, and Don DeLillo’s “airborne toxic event” is an incident so foreign that it can only be described in the most banal terms. Of Deleuze and Guattari’s many different bodies without organs, one of the conservative varieties comes from a Freudian model of the psyche as a shell meant to protect the ego from outside perturbations. We all have these protective barriers made up of habits that help us navigate an uncertain world–that is the purpose of Guattari’s ritornello, that little ditty we whistle to remind us of the familiar even when we travel to strange lands. There are two parts that work together, the refrain and the strange land. The refrains have only grown, yet the journeys seem to have ended.

I’ll end with an example close to my own heart. Deleuze and Guattari are being used to support the new anarchist “pre-figurative politics,” which is defined as seeking to build a new society within the constraints of the now. The consequence is that the political horizon of the future gets collapsed into the present. This is frustrating for someone like me, who holds out hope for a revolutionary future that would end the million tiny humiliations that make up everyday life. I like J. K. Gibson-Graham’s feminist critique of political economy, but community currencies, labor time banks, and workers’ co-ops are not my image of communism. This is why I have drawn on the gothic for inspiration. A revolution that emerges from the darkness holds the apocalyptic potential of ending the world as we know it.

    Works Cited

    • Ahmed, Sara. The Promise of Happiness. Durham, NC: Duke University Press, 2010.
    • Artaud, Antonin. To Have Done With The Judgment of God. 1947. Live play, Boston: Exploding Envelope, c1985. https://www.youtube.com/watch?v=VHtrY1UtwNs.
    • Badiou, Alain. The Century. 2005. Cambridge, UK: Polity Press, 2007.
• Barad, Karen. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham, NC: Duke University Press, 2007.
• Bataille, Georges. “The Notion of Expenditure.” 1933. In Visions of Excess: Selected Writings, 1927-1939, translated by Allan Stoekl, Carl R. Lovitt, and Donald M. Leslie Jr., 167-81. Minneapolis: University of Minnesota Press, 1985.
• Bellamy, Edward. Looking Backward: 2000–1887. Boston: Ticknor & Co., 1888.
    • Blanchot, Maurice. The Writing of the Disaster. 1980. Translated by Ann Smock. Lincoln, NE: University of Nebraska Press, 1995.
    • Brown, Wendy. Regulating Aversion: Tolerance in the Age of Identity and Empire. Princeton, N.J.: Princeton University Press, 2006.
• Burnett, D. Graham. “A Questionnaire on Materialisms.” October 155 (2016): 19-20.
    • Butler, Samuel. Erewhon: or, Over the Range. 1872. London: A.C. Fifield, 1910. http://www.gutenberg.org/files/1906/1906-h/1906-h.htm.
    • Chen, Mel Y. “A Questionnaire on Materialisms.” October 155 (2016): 21-22.
    • Clastres, Pierre. Society against the State. 1974. Translated by Robert Hurley and Abe Stein. New York: Zone Books, 1987.
    • Culp, Andrew. Dark Deleuze. Minneapolis: University of Minnesota Press, 2016.
    • ———. “Blackness.” New York: Hostis, 2015.
    • Debord, Guy. The Society of the Spectacle. 1967. Translated by Fredy Perlman et al. Detroit: Red and Black, 1977.
    • DeLanda, Manuel. A Thousand Years of Nonlinear History. New York: Zone Books, 2000.
    • ———. War in the Age of Intelligent Machines. New York: Zone Books, 1991.
    • DeLillo, Don. White Noise. New York: Viking Press, 1985.
    • Deleuze, Gilles. Cinema 2: The Time-Image. 1985. Translated by Hugh Tomlinson and Robert Galeta. Minneapolis: University of Minnesota Press, 1989.
    • ———. “The Philosophy of Crime Novels.” 1966. Translated by Michael Taormina. In Desert Islands and Other Texts, 1953-1974, 80-85. New York: Semiotext(e), 2004.
    • ———. Difference and Repetition. 1968. Translated by Paul Patton. New York: Columbia University Press, 1994.
    • ———. Empiricism and Subjectivity: An Essay on Hume’s Theory of Human Nature. 1953. Translated by Constantin V. Boundas. New York: Columbia University Press, 1995.
    • ———. Foucault. 1986. Translated by Seán Hand. Minneapolis: University of Minnesota Press, 1988.
    • Deleuze, Gilles, and Félix Guattari. Anti-Oedipus. 1972. Translated by Robert Hurley, Mark Seem, and Helen R. Lane. Minneapolis: University of Minnesota Press, 1977.
    • ———. A Thousand Plateaus. 1980. Translated by Brian Massumi. Minneapolis: University of Minnesota Press, 1987.
    • ———. What Is Philosophy? 1991. Translated by Hugh Tomlinson and Graham Burchell. New York: Columbia University Press, 1994.
• Derrida, Jacques. The Gift of Death and Literature in Secret. Translated by David Wills. Chicago: University of Chicago Press, 2007; second edition.
    • Edelman, Lee. No Future: Queer Theory and the Death Drive. Durham, N.C.: Duke University Press, 2004.
    • Fanon, Frantz. Black Skin White Masks. 1952. Translated by Charles Lam Markmann. New York: Grove Press, 1968.
    • Flaxman, Gregory. Gilles Deleuze and the Fabulation of Philosophy. Minneapolis: University of Minnesota Press, 2011.
    • Foucault, Michel. The Archaeology of Knowledge and the Discourse on Language. 1971. Translated by A.M. Sheridan Smith. New York: Pantheon Books, 1972.
    • ———. “Nietzsche, Genealogy, History.” 1971. In Language, Counter-Memory, Practice: Selected Essays and Interviews, translated by Donald F. Bouchard and Sherry Simon, 113-38. Ithaca, N.Y.: Cornell University Press, 1977.
    • ———. The Order of Things. 1966. New York: Pantheon Books, 1970.
• Freud, Sigmund. Beyond the Pleasure Principle. 1920. Translated by James Strachey. London: Hogarth Press, 1955.
• ———. “Instincts and their Vicissitudes.” 1915. Translated by James Strachey. In Standard Edition of the Complete Psychological Works of Sigmund Freud 14, 111-140. London: Hogarth Press, 1957.
    • Gerima, Haile. “Love Visual: A Conversation with Haile Gerima.” Interview by Sarah Lewis and Dagmawi Woubshet. Aperture, Feb 23, 2016. http://aperture.org/blog/love-visual-haile-gerima/.
    • Gibson-Graham, J.K. The End of Capitalism (As We Knew It): A Feminist Critique of Political Economy. Hoboken: Blackwell, 1996.
    • ———. A Postcapitalist Politics. Minneapolis: University of Minnesota Press, 2006.
    • Guattari, Félix. “Machine and Structure.” 1968. Translated by Rosemary Sheed. In Molecular Revolution: Psychiatry and Politics, 111-119. Harmondsworth, Middlesex: Penguin, 1984.
    • Halperin, David, and Valerie Traub. “Beyond Gay Pride.” In Gay Shame, 3-40. Chicago: University of Chicago Press, 2009.
    • Haraway, Donna. Simians, Cyborgs, and Women: The Reinvention of Nature. New York: Routledge, 1991.
    • Klossowski, Pierre. “Circulus Vitiosus.” Translated by Joseph Kuzma. The Agonist: A Nietzsche Circle Journal 2, no. 1 (2009): 31-47.
    • ———. Nietzsche and the Vicious Circle. 1969. Translated by Daniel W. Smith. Chicago: University of Chicago Press, 1997.
    • Lazzarato, Maurizio. Signs and Machines. 2010. Translated by Joshua David Jordan. Los Angeles: Semiotext(e), 2014.
    • Marcuse, Herbert. “Repressive Tolerance.” In A Critique of Pure Tolerance, 81-117. Boston: Beacon Press, 1965.
• Mauss, Marcel. The Gift: The Form and Reason for Exchange in Archaic Societies. 1950. Translated by W. D. Halls. New York: Routledge, 1990.
• Moten, Fred. In the Break: The Aesthetics of the Black Radical Tradition. Minneapolis: University of Minnesota Press, 2003.
    • Mumford, Lewis. Technics and Human Development. San Diego: Harcourt Brace Jovanovich, 1967.
    • Noys, Benjamin. “‘Remain True to the Earth!’: Remarks on the Politics of Black Metal.” In: Hideous Gnosis: Black Metal Theory Symposium 1 (2010): 105-128.
• Preciado, Paul. Testo Junkie: Sex, Drugs, and Biopolitics in the Pharmacopornographic Era. 2008. Translated by Bruce Benderson. New York: The Feminist Press, 2013.
• Ruddick, Susan. “The Politics of Affect: Spinoza in the Work of Negri and Deleuze.” Theory, Culture & Society 27, no. 4 (2010): 21-45.
    • Scott, James C. The Art of Not Being Governed: An Anarchist History of Upland Southeast Asia. New Haven: Yale University Press, 2009.
    • Sexton, Jared. “Afro-Pessimism: The Unclear Word.” In Rhizomes 29 (2016). http://www.rhizomes.net/issue29/sexton.html.
    • ———. “Ante-Anti-Blackness: Afterthoughts.” In Lateral 1 (2012). http://lateral.culturalstudiesassociation.org/issue1/content/sexton.html.
    • ———. “The Social Life of Social Death: On Afro-Pessimism and Black Optimism.” In Intensions 5 (2011). http://www.yorku.ca/intent/issue5/articles/jaredsexton.php.
    • Stiegler, Bernard. For a New Critique of Political Economy. Cambridge: Polity Press, 2010.
• ———. Technics and Time 1: The Fault of Epimetheus. 1994. Translated by George Collins and Richard Beardsworth. Stanford, CA: Stanford University Press, 1998.
• Tiqqun. “How Is It to Be Done?” 2001. In Introduction to Civil War, translated by Alexander R. Galloway and Jason E. Smith. Los Angeles, Calif.: Semiotext(e), 2010.
    • Toynbee, Arnold. A Study of History. Abridgement of Volumes I-VI by D.C. Somervell. London, Oxford University Press, 1946.
    • Turkle, Sherry. Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books, 2012.
• Villani, Arnaud. La guêpe et l’orchidée. Essai sur Gilles Deleuze. Paris: Éditions de Belin, 1999.
• Viveiros de Castro, Eduardo. Cannibal Metaphysics: For a Post-structural Anthropology. 2009. Translated by Peter Skafish. Minneapolis, Minn.: Univocal, 2014.
    • Welles, Orson, dir. F for Fake. 1974. New York: Criterion Collection, 2005.
• Wiener, Norbert. Cybernetics: Or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press, 1948; second revised edition.
• Williams, Alex, and Nick Srnicek. “#ACCELERATE MANIFESTO for an Accelerationist Politics.” Critical Legal Thinking. 2013. http://criticallegalthinking.com/2013/05/14/accelerate-manifesto-for-an-accelerationist-politics/.

    _____

Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2004); Gaming: Essays on Algorithmic Culture (University of Minnesota, 2006); The Interface Effect (Polity, 2012); and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here in 2014. He is a frequent contributor to The b2 Review “Digital Studies.”

Andrew Culp is a Visiting Assistant Professor of Rhetoric Studies at Whitman College. He specializes in cultural-communicative theories of power, the politics of emerging media, and gendered responses to urbanization. His work has appeared in Radical Philosophy, Angelaki, Affinities, and other venues. He previously reviewed Galloway’s Laruelle: Against the Digital for The b2 Review “Digital Studies.”


  • Michelle Moravec — The Never-ending Night of Wikipedia’s Notable Woman Problem


    By Michelle Moravec
    ~

Author’s note: this is the written portion of a talk given at St. Joseph’s University’s Art + Feminism Wikipedia edit-a-thon, February 27, 2016. Thanks to Rachael Sullivan for the invite and Rosalba Ugliuzza for Wikipedia data culling!

Millions of the sex whose names were never known beyond the circles of their own home influences have been as worthy of commendation as those here commemorated. Stars are never seen either through the dense cloud or bright sunshine; but when daylight is withdrawn from a clear sky they tremble forth.
    — Sarah Josepha Hale, Woman’s Record (1853)

As this poetic quote by Sarah Josepha Hale, nineteenth-century author and influential editor, reminds us, context is everything. The challenge, if we wish to write women back into history via Wikipedia, is to figure out how to shift the frame of reference so that our stars can shine, since the problem of who precisely is “worthy of commemoration” (or, in Wikipedia language, who is deemed notable) so often seems to exclude women.

As Shannon Mattern asked at last year’s Art + Feminism Wikipedia edit-a-thon, “Could Wikipedia embody some alternative to the ‘Great Man Theory’ of how the world works?” Literary scholar Alison Booth, in How To Make It as a Woman, notes that the first book in praise of women by a woman appeared in 1404 (Christine de Pizan’s Book of the City of Ladies), launching a lengthy tradition of “exemplary biographical collections of women.” Booth identified more than 900 volumes of prosopography published during what might be termed the heyday of the genre, 1830-1940, when the rise of the middle class and increased literacy combined with relatively cheap production of books to make such volumes both practicable and popular. Booth also points out, lest we consign the genre to the realm of mere curiosity, that the compilers, editrixes, or authors of these volumes, predating the invention of “women’s history,” considered them a contribution to “national history”; indeed, Booth concludes that the volumes were “indispensable aids in the formation of nationhood.”

Booth compiled a list of the most frequently mentioned women in a subset of these books and tracked their frequency over time. In an exemplary project, she made this data available on the web, allowing me to create the visualization below of the American figures on that list.

[Figure: Frequency of American women in Booth’s data, by date]

This chart makes clear what historians already know: notability is historically specific and contingent, something Wikipedia does not take into account in formulating guidelines that treat it as a stable concept.

Only Pocahontas deviates from the great white woman school of history, and she too becomes less salient over time. Furthermore, by the standards of this era, at least as represented by these books, black women were largely considered un-notable. This perhaps explains why, in 1894, Gertrude Mossell published The Work of the Afro-American Woman, a compilation of achievements that she described as “historical in character.” Mossell’s volume is itself a rich source of information about women worthy of commemoration and commendation.

Looking further into the twentieth century, the successor to this sort of volume is aptly titled Notable American Women, a three-volume set that, while published in 1971, had its roots in the 1950s, when Arthur Schlesinger, as head of Radcliffe College’s council, suggested that a biographical dictionary of women might be a useful thing. Perhaps predictably, a publisher could not be secured, so Radcliffe funded the project itself. The question then becomes: does inclusion in a volume declaring women “notable” mean that these women would meet Wikipedia’s “notability” standards?

Studies have found varying degrees of bias in coverage of female figures compared to male figures. The latest numbers I found, as of January 2015, indicated that women constituted only 15.5 percent of the biographical entries on the English Wikipedia and that, for women born prior to the 20th century, the problem was wildly exacerbated by “sourcing and notability issues.” Using the “missing” biographies concept borrowed from a 2010 study of Wikipedia’s “completeness,” I compared selected “classified” areas for biographies in Notable American Women (analysis was conducted by hand, with tremendous assistance from Rosalba Ugliuzza).
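
Moravec notes the comparison was conducted by hand. As a purely illustrative sketch of how such a page-existence check could be automated today, the snippet below queries the public MediaWiki API and tallies a per-classification missing percentage; the section name and the women listed are placeholders, and exact-title matching will miss women filed under variant or married names.

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def wikipedia_entry_exists(name):
        """True if an English Wikipedia article with this exact title exists."""
        params = {"action": "query", "format": "json",
                  "titles": name, "redirects": 1}
        pages = requests.get(API, params=params).json()["query"]["pages"]
        # The API flags nonexistent titles with a "missing" key.
        return not any("missing" in page for page in pages.values())

    # Placeholder data: one classified-list section and a few of its names.
    classified = {"Social workers": ["Eva del Vakia Bowles",
                                     "Addie D. Waites Hunton"]}

    for section, names in classified.items():
        absent = [n for n in names if not wikipedia_entry_exists(n)]
        print(f"{section}: {len(absent)}/{len(names)} missing "
              f"({100 * len(absent) / len(names):.0f}%)")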

Working with the digitized copy of Notable American Women in Women and Social Movements, I began compiling a “missing” biographies quotient: the percentage of entries missing for individuals in each section of the “classified list of biographies” that appeared at the end of the third volume of Notable American Women. Mirroring the well-known category issues of Wikipedia, the editors finessed the difficulty of limiting individuals to one area by listing them in multiple sections, including one called “Negro Women” and another called “Indian Women”:

[Figure: Percentage of “missing” biographies by classification]

Initially I had suspected that larger classifications might have a greater percentage of missing entries, but that is not true. Social workers, the classification with the highest percentage of missing entries, is relatively small, with only nine individuals. The six classifications with no missing entries ranged in size from five to eleven. I then created my own meta-categories to explore which larger groupings might exacerbate this “missing” biographies problem.

[Figure: “Missing” biographies grouped by meta-category, with legend]

Inclusion in Notable American Women does not translate into inclusion in Wikipedia. Influential individuals associated with female-dominated professions such as social work and nursing are less likely to be considered notable, as are “leaders” in settlement houses or welfare work and “reformers” like peace advocates. Perhaps due to edit-a-thons or Wikipedians-in-residence, female artists and female scientists have fared quite well. Both Indian Women and Negro Women have the same percentage of missing women.

Looking at the network of “Negro Women” formed by their shared Notable American Women classified entries, I noted their centrality. Frances Harper and Ida B. Wells are the most networked women in the volumes, which reflects their position as bridge leaders (I also noted the centrality of Frances Gage, who does not have a Wikipedia entry yet, a fate she shares with the white abolitionists Sallie Holley and Caroline Putnam).

[Figure: Network of “Negro Women” by shared classified entries]
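
For readers curious how a network like the one above can be assembled, here is a minimal sketch; it is not Moravec’s code, and the classified-list entries below are invented for illustration. It links two women whenever they share an entry and ranks them by degree centrality, one simple way of surfacing “bridge leaders.”

    import networkx as nx
    from itertools import combinations

    # Invented example data: women mapped to classified-list entries.
    entries = {
        "Frances Harper": {"Abolitionists", "Authors", "Lecturers"},
        "Ida B. Wells": {"Journalists", "Lecturers", "Reformers"},
        "Frances Gage": {"Abolitionists", "Authors", "Reformers"},
    }

    # Connect two women whenever their classifications overlap.
    G = nx.Graph()
    for (w1, c1), (w2, c2) in combinations(entries.items(), 2):
        if c1 & c2:
            G.add_edge(w1, w2, weight=len(c1 & c2))

    # Degree centrality ranks the most connected ("bridge") figures.
    for woman, score in sorted(nx.degree_centrality(G).items(),
                               key=lambda kv: -kv[1]):
        print(f"{woman}: {score:.2f}")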

    Visualizing further, I located two women who don’t have Wikipedia entries and are not included in Notable American Women:

[Figure: Women missing from both Notable American Women and Wikipedia]

Eva del Vakia Bowles was a longtime YWCA worker who spent her life trying to improve interracial relations. She was the first black woman hired by the YWCA to head a branch. During WWI, Bowles had charge of Ys established near war-work factories to provide R & R for workers. Throughout her tenure at the Y, Bowles pressed the organization to promote black women to positions within it. In 1932 she resigned from her beloved Y in protest over policies she believed excluded black women from the decision-making processes of the National Board.

Addie D. Waites Hunton, also a Y worker and a founding member of the NAACP, was an amazing woman who, along with her friend Kathryn Magnolia Johnson, authored Two Colored Women with the American Expeditionary Forces (1920), which details their time as Y workers in WWI, where they were among the very first black women sent. Later, she became a field worker for the NAACP and a member of the WILPF, and as part of that group was an observer in Haiti in 1926.

Finally, using a methodology I developed when working on the racially biased History of Woman Suffrage, I scraped names from Mossell’s The Work of the Afro-American Woman to find women who should have appeared in Notable American Women and in Wikipedia. Although this is a rough result of name extraction, it gave me a place to start.
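
As a rough illustration of how such name scraping can work, the sketch below runs off-the-shelf named-entity recognition (spaCy) over a hypothetical plain-text copy of Mossell’s book. This shows the general technique only, not the methodology Moravec developed, and its output needs the same hand-checking she describes.

    import spacy

    # Assumes the small English model: python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    # "mossell.txt" is a hypothetical plain-text (e.g. OCR'd) copy of the book.
    with open("mossell.txt", encoding="utf-8") as f:
        paragraphs = [p for p in f.read().split("\n\n") if p.strip()]

    # Tag person names paragraph by paragraph to keep memory use modest.
    names = set()
    for doc in nlp.pipe(paragraphs):
        names.update(ent.text.strip() for ent in doc.ents
                     if ent.label_ == "PERSON")

    # OCR noise and partial names slip through: a starting point, not a result.
    for name in sorted(names):
        print(name)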

[Figure: Overlap of names from Mossell’s volume with Notable American Women and Wikipedia]

Alice Dugged Cary appears in neither Notable American Women nor Wikipedia. Born free in 1859, she became president of the State Federation of Colored Women of Georgia, librarian of the first branch library for African Americans in Atlanta, and founder of the first free kindergartens for African American children in Georgia; she was also nominated as an honorary member of Zeta Phi Beta and was involved in its spread.

Similarly, Lucy Ella Moten, born free in 1851, who became principal of Miner Normal School, earned an M.D., and taught in the South during summer “vacations,” appears in neither Notable American Women nor Wikipedia (or at least she didn’t until Mike Lyons started her page yesterday at the edit-a-thon!).

    _____

    Michelle Moravec (@ProfessMoravec) is Associate Professor of History at Rosemont College. She is a prominent digital historian and the digital history editor for Women and Social Movements. Her current project, The Politics of Women’s Culture, uses a combination of digital and traditional approaches to produce an intellectual history of the concept of women’s culture. She writes a monthly column for the Mid-Atlantic Regional Center for the Humanities, and maintains her own blog History in the City, at which an earlier version of this post first appeared.


  • Jürgen Geuter — Liberty, an iPhone, and the Refusal to Think Politically


    By Jürgen Geuter
    ~

    The relationship of government and governed has always been complicated. Questions of power, legitimacy, structural and institutional violence, of rights and rules and restrictions keep evading any ultimate solution, chaining societies to constant struggles about shifting balances between different positions and extremes or defining completely new aspects or perspectives on them to shake off the often perceived stalemate. Politics.

Politics is a simple word but one with a lot of history. Coming from the ancient Greek term for “city” (as in city-state), the word pretty much shows what it is about: establishing the structures that a community can thrive on. Policy is infrastructure. Not made of wire or asphalt but of ideas and ways of connecting them, while giving the structure ways of enforcing its own integrity.

But while the processes of negotiation and discourse that define politics will never stop as long as intelligent beings exist, recent years have seen the emergence of technology as a replacement for politics. From Lawrence Lessig’s “Code is Law” to Marc Andreessen’s “Software Is Eating the World”: a small elite of people building the tools and technologies that we use to run our lives have in a way started emancipating themselves from politics as an idea. Because where politics – especially in democratic societies – involves potentially more people than just a small elite, technologism and its high priests pull off a fascinating trick: defining policy and politics while claiming not to be political.

This is useful for a bunch of reasons. It allows these actors to effectively sidestep certain existing institutions and structures, avoiding friction and loss of forward momentum. “Move fast and break things” was Facebook’s internal motto until only very recently. It also makes it easy to shed certain responsibilities that we expect political entities of power to fulfill. Claiming “not to be political” allows you to have mobs of people hunting others on your service without really having to do anything about it until it becomes a PR problem. Finally, evading the label of politics grants a lot more freedom when it comes to wielding the powers that the political structures have given you: it’s no coincidence that many Internet platforms declare “free speech” a fundamental and absolute right, a necessary truth of the universe – unless it’s about showing a woman breastfeeding or talking about the abuse free speech extremists have thrown at feminists.

Yesterday, news about a very interesting case directly at the contact point of politics and technologism hit mainstream media: Apple refused – in a big and well-written open letter to its customers – to fulfill an order by a US federal court in California to help the FBI unlock an iPhone 5c that belonged to one of the shooters in last year’s San Bernardino shooting, in which 14 people were killed and 22 more were injured.

    Apple’s argument is simple and ticks all the boxes of established technical truths about cryptography: Apple’s CEO Tim Cook points out that adding a back door to its iPhones would endanger all of Apple’s customers because nobody can make sure that such a back door would only be used by law enforcement. Some hacker could find that hole and use it to steal information such as pictures, credit card details or personal data from people’s iPhones or make these little pocket computers do illegal things. The dangers Apple correctly outlines are immense. The beautifully crafted letter ends with the following statements:

    Opposing this order is not something we take lightly. We feel we must speak up in the face of what we see as an overreach by the U.S. government.

    We are challenging the FBI’s demands with the deepest respect for American democracy and a love of our country. We believe it would be in the best interest of everyone to step back and consider the implications.

    While we believe the FBI’s intentions are good, it would be wrong for the government to force us to build a backdoor into our products. And ultimately, we fear that this demand would undermine the very freedoms and liberty our government is meant to protect.

Nothing in that defense is new: the debate about government backdoors has been going on for decades, with companies, software makers, and government officials basically exchanging the same bullet points every few years. Government: “We need access. For security.” Software people: “Yeah, but then nobody’s system is secure anymore.” Rinse and repeat. That whole debate hasn’t even changed through Edward Snowden’s leaks: while the positions were presented in an increasingly shrill tone, the positions themselves stayed monolithic and unmoved. Two unmovable objects yelling at each other to get out of the way.

    Apple’s open letter was received with high praise all through the tech-savvy elites, from the cypherpunks to journalists and technologists. One tweet really stood out for me because it illustrates a lot of what we have so far talked about:

    Read that again. Tim Cook/Apple are clearly separated from politics and politicians when it comes to – and here’s the kicker – the political concept of individual liberty. A deeply political debate, the one about where the limits of individual liberty might be is ripped out of the realm of politicians (and us, but we’ll come to that later). Sing the praises of the new Guardian of the Digital Universe.

But is the court order really the fundamental danger to everybody’s individual liberty that Apple presents? The actual text paints a different picture. The court orders Apple to help the FBI access one specific, identified iPhone; the court order lists the actual serial number of the device. What “help” means in this context is also specified in great detail:

1. Apple is supposed to disable the features of the iPhone that automatically delete all user data stored on the device after repeated failed passcode attempts, features usually in place to prevent thieves from accessing the data its owner stored on it.
    2. Apple will also give the FBI some way to send passcodes (guesses of the PIN that was used to lock the phone) to the device. This sounds strange but will make sense later.
    3. Apple will disable all software features that introduce delays for entering more passcodes. You know the drill: You type the wrong passcode and the device just waits for a few seconds before you can try a new one.

Apple is compelled to write a little piece of software that runs only on the specified iPhone (the text is very clear on that) and that disables the two security features explained in points 1 and 3. Because the court actually recognizes the dangers of having that kind of software in the wild, it explicitly allows Apple to do all of this within its own facilities: the phone would be sent to an Apple facility and the software loaded into the RAM of the device. This is where point 2 comes in: once the device has been modified by loading the Apple-signed software into its RAM, the FBI needs a way to send PIN guesses (passcodes) to the device. The court order even explicitly states that Apple’s new software package is only supposed to go to RAM and not change the device in other ways. Potentially dangerous software would never leave Apple’s premises; Apple doesn’t have to introduce or weaken the security of all its devices; and if Apple can fulfill the tasks described in some other way, the court is totally fine with it. The government – any government – doesn’t get a generic backdoor to all iPhones or all Apple products. In a more technical article than this one, Dan Guido outlines why what the court order asks for would work on the iPhone in question but not on most newer ones.
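
Some back-of-the-envelope arithmetic shows why precisely these three features matter. The numbers below are illustrative assumptions rather than figures from the court order: a four-digit passcode space, the commonly cited ~80 ms per guess imposed by the iPhone’s key derivation, and a notional one-hour average lockout once the escalating software delays kick in.

    # Why the order targets auto-wipe, electronic guessing, and delays:
    # once those are gone, a four-digit PIN space is tiny.

    PIN_SPACE = 10 ** 4          # 10,000 possible four-digit passcodes
    HARDWARE_FLOOR = 0.08        # assumed ~80 ms key derivation per guess
    LOCKOUT_AVERAGE = 3600.0     # notional 1-hour average software delay

    def worst_case_hours(seconds_per_try):
        return PIN_SPACE * seconds_per_try / 3600

    print(f"with escalating lockouts: ~{worst_case_hours(LOCKOUT_AVERAGE):,.0f} hours")
    print(f"delays disabled:          ~{worst_case_hours(HARDWARE_FLOOR):.2f} hours")
    # ~10,000 hours (over a year) versus roughly 13 minutes.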

So while Apple’s PR evokes the threat of big government’s boots marching on to step on everybody’s individual freedoms, the text of the court order and the technical facts make the case ultra-specific: Apple isn’t supposed to build a backdoor for iPhones but to help law enforcement open up one specific phone in its possession, connected not to a theoretical future crime but to the actual murder of 14 people.

We could just attribute it all to Apple seizing a PR opportunity to strengthen the image it has been developing since realizing that it just couldn’t really do data and services: the image of the protector of privacy and liberty, which it kicked into overdrive post-Snowden. But that would be too simple, because the questions here are a lot more fundamental.

    How do we – as globally networked individuals living in digitally connected and mutually overlaying societies – define the relationship of transnational corporations and the rules and laws we created?

Because here’s the fact: Apple was ordered by a democratically legitimate court to help in the investigation of a horrible capital crime, the murder of 14 people, by giving it a way to potentially access one specific phone out of the more than 700 million phones Apple has made. And Apple refuses.

Which – don’t get me wrong – is their right as an entity in the political system of the US: they can fight the court order using the law. They can also just refuse and see what the government and law enforcement will do to make them comply. Sometimes the costs of breaking that kind of resistance overshadow the potential value, so the request gets dropped. But where do we as individuals stand, whose liberty is supposedly at stake? Where is our voice?

One of the main functions of political systems is generating legitimacy for power. While some less-than-desirable systems might generate legitimacy by being the strongest, modern times have established less physical legitimizations of power: a king, for example, is often supposed to rule because one or more god(s) say so, which generates legitimacy, especially if you share that belief. In democracies, legitimacy is generated by elections or votes: by giving people the right to speak their mind, elect representatives, and be elected, the power (and structural violence) that a government exerts is supposedly legitimized.

Some people dispute the legitimacy of even democratically distributed power, and it’s not like they have no point, but let’s not dive into the teachings of Anarchism here. The more mainstream position is that there is a rule of law and that the institutions of the United States as a democracy are legitimized as the representation of US citizens. They represent every US citizen, and each is supposed to keep intact the political structure, the laws and rules and rights that come with being a US citizen (or living there). And when that system speaks to a company it’s supposed to govern, and the company just gives it the finger (but in a really nice letter), how does the public react? They celebrate.

    But what’s to celebrate? This is not some clandestine spy network gathering everybody’s every waking move to calculate who might commit a crime in 10 years and assassinate them. This is a concrete case, a request confirmed by a court in complete accordance with the existing practices in many other domains. If somebody runs around and kills people, the police can look into their mail, enter their home. That doesn’t abolish the protections of the integrity of your mail or home but it’s an attempt to balance the rights and liberties of the individual as well as the rights and needs of all others and the social system they form.

Rights are hardly ever absolute; some might even argue that no right whatsoever is absolute. You have the right to move around freely, but I can still lock you out of my home, and, given certain crimes, you might be locked up in prison. You have the right to express yourself, but when you start threatening others, limits kick in. The balancing act with which I also started this essay has been going on publicly for ages, and it will go on for a lot longer, because the world changes. New needs might emerge; technology might create whole new domains of life that force us to rethink how we interact and which restrictions we apply. But that’s nothing that one company just decides.

    In unconditionally celebrating Cook’s letter a dangerous “apolitical” understanding of politics shows its ugly face: An ideology so obsessed with individual liberty that it happily embraces its new unelected overlords. Code is Law? More like “Cook is Law”.

    This isn’t saying that Apple (or any other company in that situation) just has to automatically do everything a government tells them to. It’s quite obvious that many of the big tech companies are not happy about the idea of establishing precedent in helping government authorities. Today it’s the FBI but what if some agency from some dictatorship wants the data from some dissident’s phone? Is a company just supposed to pick and choose?

The world might not grow closer together, but it gets connected a lot more, and that leads to inconsistent laws, regulations, political ideologies, etc., colliding. And so far we as mankind have no idea how to deal with it. Facebook gets criticized in Europe for applying very puritanical standards when it comes to nudity, but as a US company it does follow established US traditions. Should it also apply German traditions, which are a lot more open when it comes to depictions of nudity? What about the rules of other countries? Does Facebook need to follow all? Some? If so, which ones?

While this creates tough problems for international lawmakers, governments, and us more mortal people, it concerns companies very little, as they can – when push comes to shove – just move their base of operations somewhere else. Which they already do to “optimize,” that is, avoid, taxes; Cook recently dismissed US government requirements on that front as “total political crap.” Is that also a cause for all of us across the political spectrum to celebrate Apple’s protection of individual liberty? I wonder how the open letter would have looked if Ireland, which is a tax haven many technology companies love to use, had asked for the same thing California did.

This is not specifically about Apple. Or Facebook. Or Google. Or Volkswagen. Or Nestle. This is about all of them and all of us. If we uncritically accept that transnational corporations decide when and how to follow the rules we as societies established, just because right now their (PR) interests and ours might superficially align, how can we later criticize when the same companies don’t pay taxes or decide not to follow data protection laws? Especially as a kind of global digital society (albeit one of a very small elite), we have – between cat GIFs and shaking our fists at all the evil that governments do (and there’s lots of it) – dropped the ball on forming reasonable and consistent models for integrating all our different, inconsistent rules and laws, and for gaining any sort of politically legitimized control over corporations, governments, and other entities of power.

    Tim Cook’s letter starts with the following words:

    This moment calls for public discussion, and we want our customers and people around the country to understand what is at stake.

    On that he and I completely agree.


    _____

    Jürgen Geuter (@tante) is a political computer scientist living in Germany. For about 10 years he has been speaking and writing about technology, digitalization, digital culture and the way these influence mainstream society. His writing has been featured in Der Spiegel, Wired Germany and other publications as well as his own blog Nodes in a Social Network, on which an earlier version of this post first appeared.


  • Coding Bootcamps and the New For-Profit Higher Ed


    By Audrey Watters
    ~
    After decades of explosive growth, the future of for-profit higher education might not be so bright. Or, depending on where you look, it just might be…

    In recent years, there have been a number of investigations – in the media, by the government – into the for-profit college sector and questions about these schools’ ability to effectively and affordably educate their students. Sure, advertising for for-profits is still plastered all over the Web, the airwaves, and public transportation, but as a result of journalistic and legal pressures, the lure of these schools may well be a lot less powerful. If nothing else, enrollment and profits at many for-profit institutions are down.

Despite the massive amounts of money spent by the industry to prop it up – not just on ads but on lobbying and legal efforts – the Obama Administration has made cracking down on for-profits a centerpiece of its higher education policy efforts, accusing these schools of luring students with misleading and overblown promises, often leaving them with low-status degrees sneered at by employers and with loans students can’t afford to pay back.

    But the Obama Administration has also just launched an initiative that will make federal financial aid available to newcomers in the for-profit education sector: ed-tech experiments like “coding bootcamps” and MOOCs. Why are these particular for-profit experiments deemed acceptable? What do they do differently from the much-maligned for-profit universities?

    School as “Skills Training”

    In many ways, coding bootcamps do share the justification for their existence with for-profit universities. That is, they were founded in order to help to meet the (purported) demands of the job market: training people with certain technical skills, particularly those skills that meet the short-term needs of employers. Whether they meet students’ long-term goals remains to be seen.

    I write “purported” here even though it’s quite common to hear claims that the economy is facing a “STEM crisis” – that too few people have studied science, technology, engineering, or math and employers cannot find enough skilled workers to fill jobs in those fields. But claims about a shortage of technical workers are debatable, and lots of data would indicate otherwise: wages in STEM fields have remained flat, for example, and many who graduate with STEM degrees cannot find work in their field. In other words, the crisis may be “a myth.”

    But it’s a powerful myth, and one that isn’t terribly new, dating back at least to the launch of the Sputnik satellite in 1957 and subsequent hand-wringing over the Soviets’ technological capabilities and technical education as compared to the US system.

    There are actually a number of narratives – some of them competing narratives – at play here in the recent push for coding bootcamps, MOOCs, and other ed-tech initiatives: that everyone should go to college; that college is too expensive – “a bubble” in the Silicon Valley lexicon; that alternate forms of credentialing will be developed (by the technology sector, naturally); that the tech sector is itself a meritocracy, and college degrees do not really matter; that earning a degree in the humanities will leave you unemployed and burdened by student loan debt; that everyone should learn to code. Much like that supposed STEM crisis and skill shortage, these narratives might be powerful, but they too are hardly provable.

    Nor is the promotion of a more business-focused education that new either.


    Career Colleges: A History

    Foster’s Commercial School of Boston, founded in 1832 by Benjamin Franklin Foster, is often recognized as the first school established in the United States for the specific purpose of teaching “commerce.” Many other commercial schools opened on its heels, most located in the Atlantic region in major trading centers like Philadelphia, Boston, New York, and Charleston. As the country expanded westward, so did these schools. Bryant & Stratton College was founded in Cleveland in 1854, for example, and it established a chain of schools, promising to open a branch in every American city with a population of more than 10,000. By 1864, it had opened more than 50, and the chain is still in operation today with 18 campuses in New York, Ohio, Virginia, and Wisconsin.

The curriculum of these commercial colleges was largely based around the demands of local employers alongside an economy that was changing due to the Industrial Revolution. Schools offered courses in bookkeeping, accounting, penmanship, surveying, and stenography. This was in marked contrast to those universities built on a European model, which tended to teach topics like theology, philosophy, and classical language and literature. If these universities were “elitist,” the commercial colleges were “popular” – there were over 70,000 students enrolled in them in 1897, compared to just 5,800 in colleges and universities – a contrast that highlights a refrain still familiar today: that traditional higher ed institutions do not meet everyone’s needs.


The existence of the commercial colleges became intertwined with many success stories of the nineteenth century: Andrew Carnegie attended night school in Pittsburgh to learn bookkeeping, and John D. Rockefeller studied banking and accounting at Folsom’s Commercial College in Cleveland. The type of education offered at these schools was promoted as a path to becoming a “self-made man.”

    That’s the story that still gets told: these sorts of classes open up opportunities for anyone to gain the skills (and perhaps the certification) that will enable upward mobility.

    It’s a story echoed in the ones told about (and by) John Sperling as well. Born into a working-class family, Sperling worked as a merchant marine, then attended community college during the day while working as a gas station attendant at night. He later transferred to Reed College, went on to UC Berkeley, and completed his doctorate at Cambridge University. But Sperling felt that these prestigious colleges catered to privileged students; he wanted a better way for working adults to complete their degrees. In 1976, he founded the University of Phoenix, one of the largest for-profit colleges in the US, which at its peak in 2010 enrolled almost 600,000 students.

    Other well-known names in the business of for-profit higher education: Walden University (founded in 1970), Capella University (founded in 1993), Laureate Education (founded in 1999), DeVry University (founded in 1931), Education Management Corporation (founded in 1962), Strayer University (founded in 1892), Kaplan University (founded in 1937 as The American Institute of Commerce), and Corinthian Colleges (founded in 1995; defunct in 2015).

    It’s important to recognize the connection of these for-profit universities to older career colleges, and it would be a mistake to see these organizations as distinct from the more recent development of MOOCs and coding bootcamps. Kaplan, for example, acquired the code school Dev Bootcamp in 2014. Laureate Education is an investor in the MOOC provider Coursera. The Apollo Education Group, the University of Phoenix’s parent company, is an investor in the coding bootcamp The Iron Yard.

    Promises, Promises

    Much like the worries about today’s for-profit universities, even the earliest commercial colleges were frequently accused of being “purely business speculations” – “diploma mills” – mishandled by administrators who put the bottom line over the needs of students. There were concerns about the quality of instruction and about the value of the education students were receiving.

    That’s part of the apprehension about for-profit universities’ most recent manifestations too: that these schools charge a lot of money for a certification that, at the end of the day, means little. But at least the nineteenth-century commercial colleges were affordable, UC Berkeley history professor Caitlin Rosenthal argues in a 2012 op-ed in Bloomberg:

    The most common form of tuition at these early schools was the “life scholarship.” Students paid a lump sum in exchange for unlimited instruction at any of the college’s branches – $40 for men and $30 for women in 1864. This was a considerable fee, but much less than tuition at most universities. And it was within reach of most workers – common laborers earned about $1 per day and clerks’ wages averaged $50 per month.

    Many of these “life scholarships” promised that students who enrolled would land a job – and if they didn’t, they could always continue their studies. That’s quite different from the tuition at today’s colleges – for-profit or not-for-profit – which comes with no such guarantee.

    Interestingly, several coding bootcamps do make this promise. A 48-week online program at Bloc will run you $24,000, for example. But if you don’t find a job that pays $60,000 after four months, your tuition will be refunded, the startup has pledged.

    According to a recent survey of coding bootcamp alumni, 66% of graduates do say they’ve found employment (63% of them full-time) in a job that requires the skills they learned in the program. 89% of respondents say they found a job within 120 days of completing the bootcamp. Yet 21% say they’re unemployed – a number that seems quite high, particularly in light of that supposed shortage of programming talent.

    For-Profit Higher Ed: Who’s Being Served?

    The gulf between for-profit higher ed’s promise of improved job prospects and the realities of graduates’ employment, along with the price tag on its tuition rates, is one of the reasons the Obama Administration has advocated for “gainful employment” rules. These would measure and monitor the debt-to-earnings ratio of graduates from career colleges and, in turn, penalize those schools whose graduates’ annual loan payments exceed 8% of their wages or 20% of their discretionary earnings. (The gainful employment rules only apply to schools that are eligible for Title IV federal financial aid.)
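
    The arithmetic of the test is simple enough to sketch. Here is a minimal, hypothetical version in Python – the poverty-line figure, the use of 150% of that line as the floor for “discretionary” earnings, and the pass-if-either-ratio-clears logic are illustrative assumptions, not the regulation’s actual fine print:

        # Hypothetical sketch of a debt-to-earnings "gainful employment" test.
        POVERTY_LINE = 11_770  # assumed single-person poverty guideline (USD)

        def passes_gainful_employment(annual_loan_payment, annual_earnings):
            """Pass if loan payments fall within 8% of total earnings OR
            within 20% of discretionary earnings (assumed here to be
            earnings above 150% of the poverty line)."""
            total_ratio = annual_loan_payment / annual_earnings
            discretionary = max(annual_earnings - 1.5 * POVERTY_LINE, 0)
            if discretionary > 0:
                discretionary_ratio = annual_loan_payment / discretionary
            else:
                discretionary_ratio = float("inf")
            return total_ratio <= 0.08 or discretionary_ratio <= 0.20

        # $3,000 a year in payments on $30,000 in earnings fails both tests:
        # 10% of total earnings and roughly 24% of discretionary earnings.
        print(passes_gainful_employment(3_000, 30_000))  # False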

    The data is still murky about how much debt attendees at coding bootcamps accrue and how “worth it” these programs really might be. According to the aforementioned survey, the average tuition at these programs is $11,852. This figure might be a bit deceiving as the price tag and the length of bootcamps vary greatly. Moreover, many programs, such as App Academy, offer their program for free (well, plus a $5000 deposit) but then require that graduates repay up to 20% of their first year’s salary back to the school. So while the tuition might appear to be low in some cases, the indebtedness might actually be quite high.
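
    To see how an ostensibly “free,” deferred-tuition arrangement can end up costing more than the survey’s average upfront price, here is a rough, hypothetical comparison in the same vein – the $75,000 starting salary, and the assumption that the deposit is not credited against the salary share, are illustrative only:

        # Hypothetical comparison of upfront vs. deferred bootcamp tuition.
        AVG_UPFRONT_TUITION = 11_852  # the survey's average tuition (USD)

        def deferred_cost(first_year_salary, share=0.20, deposit=5_000):
            """Total paid under a deferred plan: the deposit plus a share
            of the first year's salary (assumed not offset by the deposit)."""
            return deposit + share * first_year_salary

        # At a $75,000 starting salary the deferred plan costs $20,000,
        # well above the average upfront tuition of $11,852.
        print(deferred_cost(75_000))  # 20000.0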

    According to Course Report’s survey, 49% of graduates say that they paid tuition out of their own pockets, 21% say they received help from family, and just 1.7% say that their employer paid (or helped with) the tuition bill. Almost 25% took out a loan.

    That percentage – those going into debt for a coding bootcamp program – has increased quite dramatically over the last few years. (Less than 4% of graduates in the 2013 survey said that they had taken out a loan.) In part, that’s due to the rapid expansion of a private loan industry geared towards serving this particular student population. (Incidentally, the two ed-tech companies that have raised the most money in 2015 are both loan providers: SoFi and Earnest. The former has raised $1.2 billion in venture capital this year; the latter, $245 million.)

    The Obama Administration’s newly proposed “EQUIP” experiment will open up federal financial aid to some coding bootcamps and other ed-tech providers (like MOOC platforms), but it’s important to underscore some of the key differences here between federal loans and private-sector loans: federal student loans don’t have to be repaid until you graduate or leave school; federal student loans offer forbearance and deferment if you’re struggling to make payments; federal student loans have a fixed interest rate, often lower than private loans; federal student loans can be forgiven if you work in public service; federal student loans (with the exception of PLUS loans) do not require a credit check. The latter in particular might help to explain the demographics of those who are currently attending coding bootcamps: if they’re having to pay out-of-pocket or take loans, students are much less likely to be low-income. Indeed, according to Course Report’s survey, the cost of the bootcamps and whether or not they offered a scholarship was one of the least important factors when students chose a program.

    Here’s a look at some coding bootcamp graduates’ demographic data (as self-reported):

    Age: mean 30.95
    Gender: female 36.3%; male 63.1%
    Ethnicity: American Indian 1.0%; Asian American 14.0%; Black 5.0%; Other 17.2%; White 62.8%
    Hispanic origin: yes 20.3%; no 79.7%
    US citizenship: born in the US 78.2%; naturalized 9.7%; non-citizen 12.2%
    Education: high school dropout 0.2%; high school graduate 2.6%; some college 14.2%; Associate’s degree 4.1%; Bachelor’s degree 62.1%; Master’s degree 14.2%; professional degree 1.5%; doctorate 1.1%

    (According to several surveys of MOOC enrollees, these students also tend to be overwhelmingly male, to come from more affluent neighborhoods, and to already possess Bachelor’s degrees. The median age of MITx registrants is 27.)

    It’s worth considering how the demographics of students in MOOCs and coding bootcamps may (or may not) be similar to those enrolled at other for-profit post-secondary institutions, particularly since all of these programs tend to invoke the rhetoric about “democratizing education” and “expanding access.” Access for whom?

    Some two million students were enrolled in for-profit colleges in 2010, up from 400,000 a decade earlier. Compared to the higher ed student population as a whole, these students are disproportionately older, African American, and female. While one in 20 of all students is enrolled in a for-profit college, one in 10 African American students, one in 14 Latino students, and one in 14 first-generation college students are. Students at for-profits are more likely to be single parents. They’re less likely to enter with a high school diploma. Dependent students at for-profits have about half as much family income as students at not-for-profit schools. (This demographic data is drawn from the NCES and from Harvard University researchers David Deming, Claudia Goldin, and Lawrence Katz in their 2013 study of for-profit colleges.)

    Deming, Goldin, and Katz argue that

    The snippets of available evidence suggest that the economic returns to students who attend for-profit colleges are lower than those for public and nonprofit colleges. Moreover, default rates on student loans for proprietary schools far exceed those of other higher-education institutions.

    According to one 2010 report, just 22% of first-time, full-time students pursuing Bachelor’s degrees at for-profit colleges in 2008 graduated, compared to 55% and 65% of students at public and private non-profit universities respectively. Of the more than 5,000 career programs that the Department of Education tracks, 72% of those offered by for-profit institutions produce graduates who earn less than high school dropouts.

    For their part, today’s MOOCs and coding bootcamps also boast that their students will find great success on the job market. Coursera, for example, recently surveyed students who’d completed one of its online courses, and 72% of those who responded said they had experienced “career benefits.” But without the mandated reporting that comes with federal financial aid, much of what we know about these programs’ student populations and outcomes remains speculative.

    What kind of students benefit from coding bootcamps and MOOC programs, the new for-profit education? We don’t really know… although based on the history of higher education and employment, we can guess.

    EQUIP and the New For-Profit Higher Ed

    On October 14, the Obama Administration announced a new initiative, the Educational Quality through Innovative Partnerships (EQUIP) program, which will provide a pathway for unaccredited education programs like coding bootcamps and MOOCs to become eligible for federal financial aid. According to the Department of Education, EQUIP is meant to open up “new models of education and training” to low income students. In a press release, it argues that “Some of these new models may provide more flexible and more affordable credentials and educational options than those offered by traditional higher institutions, and are showing promise in preparing students with the training and education needed for better, in-demand jobs.”

    The EQUIP initiative will partner accredited institutions with third-party providers, loosening the “50% rule” that prohibits accredited schools from outsourcing more than 50% of an accredited program. Since bootcamps and MOOC providers “are not within the purview of traditional accrediting agencies,” the Department of Education says, “we have no generally accepted means of gauging their quality.” So those organizations that apply for the experiment will have to provide an outside “quality assurance entity,” which will help assess “student outcomes” like learning and employment.

    One does have to wonder whether, by making financial aid available for bootcamps and MOOCs, the Obama Administration is not simply opening the doors to more of precisely the sort of practices that the for-profit education industry has long been accused of: expanding rapidly, lowering the quality of instruction, focusing its marketing on certain populations (such as veterans), and profiting off of taxpayer dollars.

    Who benefits from the availability of aid? And who benefits from its absence? (“Who” here refers to students and to schools.)

    Shawna Scott argues in “The Code School-Industrial Complex” that without oversight, coding bootcamps re-inscribe the dominant beliefs and practices of the tech industry. Despite all the talk of “democratization,” this is a new form of gatekeeping.

    Before students are even accepted, school admission officers often select for easily marketable students, which often translates to students with the most privileged characteristics. Whether through intentionally targeting those traits because it’s easier to ensure graduates will be hired, or because of unconscious bias, is difficult to discern. Because schools’ graduation and employment rates are their main marketing tool, they have a financial stake in only admitting students who are at low risk of long-term unemployment. In addition, many schools take cues from their professional developer founders and run admissions like they hire for their startups. Students may be subjected to long and intensive questionnaires, phone or in-person interviews, or be required to submit a ‘creative’ application, such as a video. These requirements are often onerous for anyone working at a paid job or as a caretaker for others. Rarely do schools proactively provide information on alternative application processes for people of disparate ability. The stereotypical programmer is once again the assumed default.

    And so, despite the recent moves to sanction certain ed-tech experiments, some in the tech sector have been quite vocal in their opposition to more regulations governing coding schools. It’s not just EQUIP either; there was much outcry last year after several states, including California, “cracked down” on bootcamps. Many others have framed the entire accreditation system as a “cabal” that stifles innovation. “Innovation” in this case implies alternate certificate programs – not simply Associate’s or Bachelor’s degrees – in timely, technical topics demanded by local/industry employers.

    The Forgotten Tech Ed: Community Colleges

    Of course, there is an institution that’s long offered alternate certificate programs in timely, technical topics demanded by local/industry employers, and that’s the community college system.

    Vox’s Libby Nelson observed that “The NYT wrote more about Harvard last year than all community colleges combined,” and certainly the conversations in the media (and elsewhere) often ignore that community colleges exist at all, even though these schools educate almost half of all undergraduates in the US.

    Like much of public higher education, community colleges have seen their funding shrink in recent decades and have been tasked to do more with less. For community colleges, it’s a lot more with a lot less. Open enrollment, for example, means that these schools educate students who require more remediation. Yet despite many community college students being “high need,” community colleges spend far less per pupil than do four-year institutions. Deep budget cuts have also meant that, even with their open enrollment policies, community colleges are having to restrict admissions. In 2012, some 470,000 students in California were on waiting lists, unable to get into the courses they needed.

    This is what we know from history: as funding for public higher ed decreased – for two- and four-year schools alike – for-profit higher ed expanded, promising precisely what today’s MOOCs and coding bootcamps now insist they’re the first and only schools to do: offer innovative programs that train students in the kinds of skills that lead to good jobs. History tells us otherwise…
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely read Hack Education blog, on which an earlier version of this essay first appeared, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.

    Back to the essay

  • The Social Construction of Acceleration

    The Social Construction of Acceleration

    a review of Judy Wajcman, Pressed for Time: The Acceleration of Life in Digital Capitalism (Chicago, 2014)
    by Zachary Loeb

    ~

    Patience seems anachronistic in an age of high-speed downloads, same-day deliveries, and on-demand assistants who can be summoned by tapping a button. Though some waiting may still occur, the amount of time spent in anticipation seems to be constantly diminishing, and every day a new bevy of upgrades and devices promises that tomorrow things will be even faster. Such speed is comforting for those who feel that they do not have a moment to waste. Patience becomes a luxury for which we do not have time, even as the technologies that claimed they would free us wind up weighing us down.

    Yet it is far too simplistic to heap the blame for this situation on technology, as such. True, contemporary technologies may be prominent characters in the drama in which we are embroiled, but as Judy Wajcman argues in her book Pressed for Time, we should not approach technology as though it exists separately from the social, economic, and political factors that shape contemporary society. Indeed, to understand technology today it is necessary to recognize that “temporal demands are not inherent to technology. They are built into our devices by all-too-human schemes and desires” (3). In Wajcman’s view, technology is not the true culprit, nor is it an out-of-control menace. It is instead a convenient distraction from the real forces that make it seem as though there is never enough time.

    Wajcman sets a course that refuses to uncritically celebrate technology, whilst simultaneously disavowing the damning of modern machines. She prefers to draw upon “a social shaping approach to technology” (4), which emphasizes that the shape technology takes in a society is influenced by many factors. If current technologies leave us feeling exhausted, overwhelmed, and unsatisfied, it is to our society we must look for causes and solutions – not to the machine.

    The vast array of Internet-connected devices gives rise to a sense that everything is happening faster – that, compared to previous epochs, things are accelerating. This is the kind of seemingly uncontroversial belief that Wajcman seeks to counter. While there is a present predilection for speed, the ideas of speed and acceleration remain murky, which may not be purely accidental when one considers “the extent to which the agenda for discussing the future of technology is set by the promoters of new technological products” (14). Rapid technological and societal shifts may herald the emergence of an “acceleration society” wherein speed increases even as individuals experience a decrease in available time. Though some would describe today’s world (at least in affluent nations) as a synecdoche of the “acceleration society,” it would be a mistake to believe this to be a wholly new invention.

    Nevertheless the instantaneous potential of information technologies may seem to signal a break with the past – as the sort of “timeless time” which “emerged in financial markets…is spreading to every realm” (19). Some may revel in this speed even as others put out somber calls for a slow-down, but either approach risks being reductionist. Wajcman pushes back against the technological determinism lurking in the thoughts of those who revel and those who rebel, noting “that all technologies are inherently social in that they are designed, produced, used and governed by people” (27).

    Both today and yesterday “we live our lives surrounded by things, but we tend to think about only some of them as being technologies” (29). The impacts of given technologies depend upon the ways in which they are actually used, and Wajcman emphasizes that people often have a great deal of freedom in altering “the meanings and deployment of technologies” (33).

    Over time certain technologies recede into the background, but the history of technology is a litany of devices that made profound impacts on experiences of time and speed. After all, the clock is itself a piece of technology, and thus we assess our very lack of time by looking to a device designed to measure its passage. The measurement of time was a technique used to standardize – and often exploit – labor, and the ability to carefully keep track of time gave rise to an ideology in which time came to be interchangeable with money. As a result, speed came to be associated with profit even as slowness became associated with sloth. The speed of change became tied up in notions of improvement and progress, and thus “the speed of change becomes a self-evident good” (44). The speed promised by inventions is therefore seen as part of the march of progress, though a certain irony emerges as widespread speed leads to new forms of slowness – the mass diffusion of cars leading to traffic jams – and what was fast yesterday is often deemed slow today. As Wajcman shows, the experience of time compression, tied to “our valorization of a busy lifestyle, as well as our profound ambivalence toward it” (58), has roots that go far back.

    Time takes on an odd quality – to have it is a luxury, even as constant busyness becomes a sign of status. A certain dissonance emerges wherein individuals feel that they have less time even as studies show that people are not necessarily working more hours. For Wajcman, much of the explanation relates to “real increases in the combined work commitments of family members” as much as to changes in the working time of individuals, with such “time poverty” being experienced particularly acutely “among working mothers, who juggle work, family, and leisure” (66). To understand time pressure it is essential to consider the degree to which people are free to use their time as they see fit.

    Societal pressures on the time of men and women differ, and though the hours spent doing paid labor may not have shifted dramatically, the hours parents (particularly mothers) spend performing unpaid labor remain high. Furthermore, “despite dramatic improvements in domestic technology, the amount of time spent on household tasks has not actually shown any corresponding dramatic decline” (68). Though household responsibilities can be shared equitably between partners, much of the onus still falls on women. As a busy, event-filled life becomes a marker of status for adults, so too may they attempt to bestow such busyness on the whole family; but busy parents needing to chaperone and supervise busy children only creates a further crunch on time. As Wajcman notes, “perhaps we should be giving as much attention to the intensification of parenting as to the intensification of work” (82).

    Yet the story of domestic labor – unpaid and unrecognized – is a particularly strong example of a sphere in which the promises of time-saving technological fixes have fallen short. Instead, “devices allegedly designed to save labor time fail to do so, and in some cases actually increase the time needed for the task” (111). The variety of technologies marketed for the household are often advertised as time savers, yet altering household work is not the same as eliminating it – and certain tasks continue to demand a significant investment of real time.

    Many of the technologies that have become mainstays of modern households – such as the microwave – were not originally marketed as such, and thus the household represents an important example of the way in which technologies “are both socially constructed and society shaping” (122). Of further significance is the way in which changing labor relations have also led to shifts in the sphere of domestic work, wherein those who can afford it are able to buy themselves time by purchasing food from restaurants or by employing others for tasks such as child care and cleaning. Though the image of “the home of the future,” courtesy of the Internet of Things, may promise an automated abode, Wajcman highlights that those making and selling such technologies replicate society’s dominant blind spot for the true tasks of domestic labor. Indeed, the Internet of Things tends to “celebrate technology and its transformative power at the expense of home as a lived practice” (130). Thus, domestic technologies present an important example of the way in which those designing and marketing technologies instill their own biases into the devices they build.

    Beyond the household, information communications technologies (ICTs) allow people to carry their office in their pocket as e-mails and messages ping them long after the official work day has ended. However, the idea “of the technologically tethered worker with no control over their own time…fails to convey the complex entanglement of contemporary work practices, working time, and the materiality of technical artifacts” (88). Thus, the problem is not that an individual can receive e-mail when they are off the clock, the problem is the employer’s expectation that this worker should be responding to work related e-mails while off the clock – the issue is not technological, it is societal. Furthermore, Wajcman argues, communications technologies permit workers to better judge whether or not something is particularly time sensitive. Though technology has often been used by employers to control employees, approaching communications technologies from an STS position “casts doubt on the determinist view that ICTs, per se, are driving the intensification of work” (107). Indeed some workers may turn to such devices to help manage this intensification.

    Technologies offer many more potentialities than those presented in advertisements. Though the ubiquity of communications devices may “mean that more and more of our social relationships are machine-mediated” (138), the focus should be as much on the word “social” as on the word “machine.” Much has been written about the way individuals use modern technologies and the ways in which they can give rise to families wherein parents and children alike are permanently staring at a screen, but Wajcman argues that these technologies should “be regarded as another node in the flows of affect that create and bind intimacy” (150). It is not that these devices are truly stealing people’s time, but that they are changing the ways in which people spend the time they have – allowing harried individuals to create new forms of being together, which “needs to be understood as adding a dimension to temporal experience” (158), one that blurs the boundaries between work and leisure.

    The notion that the pace of life has been accelerated by technological change is a belief that often goes unchallenged; however, Wajcman emphasizes that “major shifts in the nature of work, the composition of families, ideas about parenting, and patterns of consumption have all contributed to our sense that the world is moving faster than hitherto” (164). The experience of acceleration can be intoxicating, and the belief in a culture of improvement wrought by technological change may be a rare glimmer of positivity amidst gloomy news reports. However, “rapid technological change can actually be conservative, maintaining or solidifying existing social arrangements” (180). At moments when so much emphasis is placed upon the speed of technologically sired change, the first step may not be to slow down but to insist that people consider the ways in which these machines have been socially constructed and how they have shaped society – and if we fear that we are speeding towards a catastrophe, then it becomes necessary to consider how they can be socially constructed to avoid such a collision.

    * * *

    It is common, amongst current books assessing the societal impacts of technology, for authors to present themselves as critical while simultaneously holding to an unshakable faith in technology. This often leaves such texts in an odd position: they want to advance a radical critique, but their argument remains loyal to a conservative ideology. With Pressed for Time, Judy Wajcman has demonstrated how to successfully strike the balance between technological optimism and pessimism. It is a great feat, and Pressed for Time executes it skillfully. When Wajcman writes, towards the end of the book, that she wants “to embrace the emancipatory potential of technoscience to create new meanings and new worlds while at the same time being its chief critic” (164), she is not describing a goal but affirming what she has achieved with Pressed for Time (a similar success can be attributed to Wajcman’s earlier books TechnoFeminism (Polity, 2004) and the essential Feminism Confronts Technology (Penn State, 1991)).

    By holding to the framework of the social shaping of technology, Pressed for Time provides an investigation of time and speed that is grounded in a nuanced understanding of technology. It would have been easy for Wajcman to focus strictly on contemporary ICTs, but what her argument makes clear is that to do so would have been to ignore the facts that make contemporary technology understandable. A great success of Pressed for Time is the way in which Wajcman shows that the current sensation of being pressed for time is not a modern invention. Instead, the emphasis on speed as being a hallmark of progress and improvement is a belief that has been at work for decades. Wajcman avoids the stumbling block of technological determinism and carefully points out that falling for such beliefs leads to critiques being directed incorrectly. Written in a thoroughly engaging style, Pressed for Time is an academic book that can serve as an excellent introduction to the terminology and style of STS scholarship.

    Throughout Pressed for Time, Wajcman repeatedly notes the ways in which the meanings of technologies transcend what a device may have been narrowly intended to do. For Wajcman people’s agency is paramount: people have the ability to construct meaning for technology even as such devices wind up shaping society. Yet one area in which one could push back against Wajcman’s views would be to ask whether communications technologies have shaped society to such an extent that it is becoming increasingly difficult to construct new meanings for them. Perhaps the “slow movement,” which Wajcman describes as unrealistic because “we cannot in fact choose between fast and slow, technology and nature” (176), is best perceived as a manifestation of the sense that much of technology’s “emancipatory potential” has gone awry – that some technologies offer little in the way of liberating potential. After all, the constantly connected individual may always feel rushed – but they may also feel as though they are under constant surveillance, that their every online move is carefully tracked, and that, with the rise of wearable technology and the Internet of Things, all of their actions will soon be tracked just as easily. Wajcman makes an excellent and important point by noting that humans have always lived surrounded by technologies – but the technologies that surrounded an individual in 1952 were not sending every bit of minutiae to large corporations (and governments). Hanging in the background of the discussion of speed are also the questions of planned obsolescence and the mountains of toxic technological trash that flow from affluent nations to developing ones. The technological speed experienced in one country is the “slow violence” experienced in another. To make these critiques, though, is in no way to seriously diminish Wajcman’s argument, especially as many of these concerns simply speak to the economic and political forces that have shaped today’s technology.

    Pressed for Time is a Rosetta stone for decoding life in high-speed, high-tech societies. Wajcman deftly demonstrates that the problems facing technologically addled individuals today are not as new as they appear, and that the solutions on offer are similarly not as wildly inventive as they may seem. Through analyzing studies and history, Wajcman shows the impacts of technologies while making clear why it is still imperative to approach technology with class and gender in mind. With Pressed for Time, Wajcman champions the position that the social shaping of technology framework still provides a robust way of understanding technology. As Wajcman makes clear, the way technologies “are interpreted and used depends on the tapestry of social relations woven by age, gender, race, class, and other axes of inequality” (183).

    It is an extremely timely argument.
    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck and is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay