boundary 2


    Tamara Kneese — Our Silicon Valley, Ourselves

    a review of Anna Wiener, Uncanny Valley; Joanne McNeil, Lurking; Ellen Ullman, Life in Code; Wendy Liu, Abolish Silicon Valley; Ben Tarnoff and Moira Weigel, eds., Voices from the Valley; Mary Beth Meehan and Fred Turner, Seeing Silicon Valley

    by Tamara Kneese

    “Fuck all that. I have no theory. I’ve only got a story to tell.”
    – Elizabeth Freeman, “Without You, I’m Not Necessarily Nothing”

    ~

    Everyone’s eager to mine Silicon Valley for its hidden stories. In the past several years, women in or adjacent to the tech industry have published memoirs about their time there, ensconcing macrolevel critiques of Big Tech within intimate storytelling. Examples include Anna Wiener’s Uncanny Valley, Joanne McNeil’s Lurking, Ellen Ullman’s Life in Code, Susan Fowler’s Whistleblower, and Wendy Liu’s Abolish Silicon Valley, to name just a handful.[1] At the same time, recent edited volumes curate workers’ everyday lives in the ideological and geographical space that is Silicon Valley, seeking to expose the deep structural inequalities embedded in the tech industry and its reach into the surrounding region. Examples of this trend include Ben Tarnoff and Moira Weigel’s Voices from the Valley and Mary Beth Meehan and Fred Turner’s Seeing Silicon Valley, along with tech journalists’ reporting on unfair labor practices and subsequent labor organizing efforts. In both cases, personal accounts of the tech industry’s effects constitute their own form of currency.

    What’s interesting about the juxtaposition of women’s first-hand accounts and collected worker interviews is how the former could fit within the much derided and feminized “personal essay” genre while the latter is more explicitly tied to the Marxist tradition of using workers’ perspectives as an organizing catalyst, i.e., through the process of empirical cataloging and self-reflection known as workers’ inquiry.[2] In this review essay, I consider these two seemingly unrelated trends in tandem. What role can personal stories play in sparking collective movements, and does presentation matter?

    *

    Memoirs of life with tech provide a glimpse of the ways that personal experiences—the good, the bad, and the ugly—are mediated by information technologies themselves as well as through their cascading effects on workplaces and social worlds. They provide an antidote to early cyberlibertarian screeds, imbued with dreams of escaping fleshly, earthly drudgery, like John Perry Barlow’s “A Declaration of the Independence of Cyberspace”: “Our identities have no bodies, so, unlike you, we cannot obtain order by physical coercion.” But in femme accounts of life in code, embodiment is inescapable. As much as the sterile efficiencies of automation would do away with the body’s messiness, the body rears its head with a vengeance. In a short post, one startup co-founder, Tracy Young, recounts attempting to neutralize her feminine-coded body with plain clothes and a stoic demeanor, persevering through pregnancy, childbirth, and painful breastfeeding, and eventually hiding her miscarriage from her colleagues. Young reveals these details to point to the need for structural changes within the tech industry, which is still male-dominated, especially in the upper rungs. But for Young, capitalism is not the problem. Tech is redeemable through DEI initiatives that might better accommodate women’s bodies and needs. On the other end of the spectrum, pregnant Amazon warehouse workers suffer miscarriages when their managers refuse to follow doctors’ recommendations and compel pregnant workers to lift heavy boxes or prevent them from taking bathroom and water breaks. These experiences lie on disparate ends of the scale, but reflect the larger problems of patriarchy and racial capitalism in tech and beyond. It is unclear if this sliver of common ground can hope to bridge such a gulf of privilege.

    Sexual harassment, workplace misogyny, pregnancy discrimination: these grievances come up again and again within femme tech memoirs, even the ones that don’t at face value seem political. At first glance, Joanne McNeil’s Lurking: How a Person Became a User is not at all about labor. Her memoir is to some extent a celebration of the early internet, at times falling into the trap of nostalgia—the pleasure of the internet being “a place,” and the greater degree of flexibility and play afforded by usernames as opposed to real-name policies. “Once I spoke freely and shared my dreams with strangers. Then the real world fastened itself to my digital life…My idle youth online largely—thankfully—evaporated in the sun, but more recent-ish old posts breeze along, colliding with and confusing new image of myself that I try to construct” (McNeil 2020, 8-9). Building on earlier feminist critiques of techno-utopian libertarianism, such as Paulina Borsook’s Cyberselfish (2000), McNeil contends that the early web allowed people to be lurkers, rather than users, even if the disembodied libertarian imaginaries attached to cyberspace never panned out. With coerced participation and the alignment of actual identities with online profiles, the shift to “the user” reflects the enclosure of the web and the growth of tech corporations, monetization, and ad tech. The beauty of being a lurker was the space to work out the self in relation to communities and to bear witness to these experimental relationships. As McNeil puts it, in her discussion of Friendster, “What happened between <form> and </form> was self-portraiture” (McNeil 2020, 90). McNeil references the many early internet communities, like Echo, LatinoLink, and Café los Negroes, which helped queer, Black, and Latinx relationships flourish in connection with locally situated subcultures.

    In a brief moment, while reflecting on the New York media world built around websites like Gawker, McNeil ties platformization to her experiences as a journalist, a producer of knowledge about the tech industry: “A few years ago, when I was a contractor at a traffic-driven online magazine, I complained to a technologist friend about the pressure I was under to deliver page views above a certain threshold” (McNeil 2020, 138). McNeil, who comes from a working-class background, has had in adulthood the kind of work experiences Silicon Valley tends to make invisible, including call center work and work as a receptionist. As a journalist, even as a contractor, she was expected to amass thousands of Twitter followers. Because she lacked a large following, she relied on the publication itself to promote her work. She was eventually let go from the job. “My influence, or lack thereof, impacted my livelihood” (McNeil 2020, 139). This simply stated phrase reveals how McNeil’s critique of Big Tech is ultimately not only about users’ free labor and the extraction of profit from social relationships, but about how platform metrics are making people’s jobs worse.

    Labor practices emerge in McNeil’s narrative at several other points, in reference to Google’s internal caste system and the endemic problem of sexual harassment within the industry. In a discussion of Andrew Norman Wilson’s influential Workers Leaving the Googleplex video (2011), which made clear to viewers the sharp divisions within the Google workforce, McNeil notes that Google still needs these blue-collar workers, like janitors, security guards, and cafeteria staff, even if the company has rendered them largely invisible. But what is the purpose of making these so-called hidden laborers of tech visible, and for whom are they being rendered visible in the first place?[3] If you have ever been on a tech campus, you can’t miss ‘em. They’re right fucking there! If the hierarchies within tech are now more popularly acknowledged, then what? And are McNeil’s experiences as a white-collar tech journalist at all related to these other people’s stories, which often provide the scaffolding for tech reporters’ narratives?

    *

    Other tech memoirs more concretely focus on navigating tech workplaces from a femme perspective. Long-form attention to the matter creates more space for self-reflection and recognition on the part of the reader. In 2016, Anna Wiener’s n+1 essay, “Uncanny Valley,” went viral because it hit a nerve. Wiener presented an overtly gendered story—about body anxiety and tenuous friendship—told through one woman’s time in the world of startups before the majority of the public had caught wind of the downside of digital platforms and their stranglehold on life, work, and politics. Later, Wiener would write a monograph-length version of the story with the same title, detailing her experiences as a “non-technical” woman in tech: “I’d never been in a room with so few women, so much money, and so many people chomping at the bit to get a taste” (Wiener 2020, 61). In conversation with computer science academics and engineers, her skepticism about the feasibility of self-driving cars isn’t taken seriously because she is a woman who works in customer support. Wiener describes herself as being taken in by the promises and material culture of the industry: a certain cashmere sweater and overall look, wellness tinctures, EDM, and Burning Man at the same time she navigates taxicab gropings on work trips and inappropriate comments about “sensual” Jewish women at the office. Given the Protestant Work Ethic-tinged individualism of her workplace, she offers little in the way of solidarity. When her friend Noah is fired after writing a terse memo, she and the rest of the workers at the startup fail to stand up to their boss. She laments, “Maybe we never were a family. We knew we had never been a family,” questioning the common myth that corporations are like kin (Wiener 2020, 113). Near the end of her memoir, Wiener wrestles with the fact that GamerGate, and later the election of Trump, do not bring the reckoning she once thought was coming. The tech industry continues on as before.

    Wiener is in many respects reminiscent of another erudite, Jewish, New York City-to-San Francisco transplant, Ellen Ullman. Ullman published an account of her life as a woman programmer, Close to the Machine: Technophilia and Its Discontents, in 1997, amid the dotcom boom, when tech criticism was less fashionable. Ullman writes about “tantric, algorithmic” (1997, 49) sex with a fellow programmer and the erotics of coding itself, flirting with the romance novel genre. She critiques the sexism and user-disregard in tech (she is building a system for AIDS patients and their providers, but the programmers are rarely confronted with the fleshly existence of their end-users). Her background as a communist, along with her guilt about her awkward class position as an owner and landlord of a building in the Wall Street district, also comes through in the memoir: At one point, she quips, “And who was Karl Marx but the original technophile?” (Ullman 1997, 29). Ullman presciently sees remote, contracted tech workers, including globally situated call center workers, as canaries in the coal mine. As she puts it, “In this sense, we virtual workers are everyone’s future. We wander from job to job, and now it’s hard for anyone to stay put anymore. Our job commitments are contractual, contingent, impermanent, and this model of insecure life is spreading outward from us” (Ullman 1997, 146). Even for a privileged techie like Ullman, the supposedly hidden global underclass of tech was not so hidden after all.

    Ullman’s Life in Code: A Personal History of Technology, a collection of essays published twenty years later in 2017, reflects a growing desire to view the world of startups, major tech companies, and life in the Bay Area through the lens of women’s unique experiences. A 1998 essay included in Life in Code reveals Ullman’s distrust of what the internet might become: “I fear for the world the internet is creating. Before the advent of the Web, if you wanted to sustain a belief in far-fetched ideas, you had to go out into the desert, or live on a compound in the mountains, or move from one badly furnished room to another in a series of safe houses” (Ullman 2017, 89). Ullman at various points refers to the toxic dynamics of technoculture, the way that engineers make offhand sexist, racist remarks during their workplace interactions. In other words, critics like Ullman had been around for decades, but her voice, and voices like hers, carried more weight in 2017 than in 1997. Following in Ullman’s footsteps, Wiener’s contribution came at just the right time.

    I appreciate Sharrona Pearl’s excellent review of Wiener’s Uncanny Valley in this publication, and her critique of the book’s political intentions (or lack thereof) and privileged perspective. When it comes to accounts of the self as political forces, Emma Goldman’s Living My Life it is not. But some larger questions remain: why did so many readers find Wiener’s personal narrative compelling, and how might we relate its popularity to a larger cultural shift in how stories about technology are told?

    Another woman’s memoir of a life in tech offers one possible answer. Wendy Liu started as a computer science major at a prestigious university, worked as a Google intern, and co-founded a startup, not an uncommon trajectory for a particular class of tech worker. Her candid memoir of her transformation from tech evangelist to socialist tech critic, Abolish Silicon Valley, references Wiener’s “Uncanny Valley” essay. Wiener’s account resonated with Liu, even though, as a software engineer, Liu viewed herself as separate from the non-technical women around her: the marketers, program managers, and technical writers. Liu is open about the ways that ideologies around meritocracy and individual success color her trajectory: she viewed Gamergate as an opportunity to test out her company’s tech capabilities and idolized men like Elon Musk and Paul Graham. In this worldview, hard work always pays off, and working eighty hours a week is a means to an end. Sometimes you have to dance with the devil: for example, Liu’s startup at one point considers working for the Republican Party. Despite her seeming belief in the tech industry’s alignment with the social good, Liu has doubts. When Liu first encounters Wiener’s essay, she wryly notes that she thought n+1 might be a tech magazine, given its math-y name. Once she reads it, “The words cut like a knife through my gradually waning hopes, and I wanted to sink into an ocean of this writing” (Liu 2020, 111). Liu goes on to read hundreds of leftist books and undergo a political awakening in London. While Wiener’s memoir is intensely personal, not overtly about a collective politics, it still ignites something in Liu’s consciousness, becoming enfolded into her own account of her disillusionment with the tech industry and capitalism as a whole. Liu also refers to Tech Against Trump, published by Logic Magazine in 2017, which featured “stories from fellow tech workers who were startled into caring about politics because of Trump” (Liu 2020, 150). Liu was not alone in her awakening, and it was first-hand accounts by fellow tech workers that got her, and many others, to question their relationship to the system.

    Indeed, before Liu published her abolitionist memoir, she published a short essay for a UK-based Marxist publication, Notes from Below, titled “Silicon Inquiry,” applying the time-honored Marxist practice of workers’ inquiry to her own experiences as a white-collar coder. She writes, “I’ve lost my faith in the industry, and with it, any desire to remain within it. All the perks in the world can’t make up for what tech has become: morally destitute, mired in egotism and self-delusion, an aborted promise of what it could have been. Now that I realise this, I can’t go back.” She describes her trajectory from 12-year-old tinkerer, to computer science major, to Google intern, where she begins to sense that something is wrong and unfulfilling about her work: “In Marxist terms, I was alienated from my labour: forced to think about a problem I didn’t personally have a stake in, in a very typically corporate environment that drained all the motivation out of me.” When she turns away from Google to enter the world of startups, she is trapped by the ideology of faking it until you make it. She and her co-founders work long hours, technically for themselves, but without achieving anything tangible. Liu begins to notice the marginalized workers who comprise a major part of the tech industry, not only ride-hail drivers and delivery workers, but the cafeteria staff and janitors who work on tech campuses. The bifurcated workforce makes it difficult for workers to organize; the ones at the top are loyal to management, while those at the bottom of the hierarchy are afraid of losing their jobs if they speak out.

    Towards the end of her memoir, Liu describes joining a picket line of largely Chinese-American women who are cleaners for Marriott Hotels. This action is happening at the same time as the 2018 Google Walkout, during which white-collar tech workers organized against sexual harassment and subsequent retaliation at the company. Liu draws a connection between both kinds of workers, protesting in the same general place: “On the surface, you would think Google engineers and Marriott hotel cleaners couldn’t be more different. And yet, one key component of the hotel workers’ union dispute was the prevalence of sexual harassment in the workplace…The specifics might be different, but the same underlying problems existed at both companies” (Liu 2020, 158). She sees that TVCs (temps, vendors, and contractors) share grievances with their full-time counterparts, especially when it comes to issues over visas, sexual harassment, and entrenched racism. The trick for organizers is to inspire a sense of solidarity and connection among workers who, on the surface, have little in common. Liu explicitly connects the experiences of more white-collar tech workers like herself and marginalized workers within the tech industry and beyond. Her memoir is not merely a personal reflection, but a call to action: individual refusal, like deleting Facebook or Uber, is not sufficient, and transforming the tech industry is necessarily a collective endeavor. Her abolitionist memoir connects tech journalism’s use of workplace grievances with a first-hand account from the coder class, finding common ground in the hopes of sparking structural change. Memoirs like these may act as a kind of connective tissue, bridging disparate experiences of life in and through technology.

    *

    Another approach to personal accounts of tech takes a different tack: Rather than one long-form, first-hand account, cobble together many perspectives to get a sense of contrasts and potential spaces of overlap. Collections of workers’ perspectives have a long leftist history. For decades, anarchists, socialists, and other social reformers have gathered oral histories and published these personal accounts as part of a larger political project (see: Avrich 1995; Buhle and Kelley 1989; Kaplan and Shapiro 1998; Lynd and Lynd 1973). Two new edited collections focus on aggregated workers’ stories to highlight the diversity of people who live and work in Silicon Valley, from Iranian-American Google engineers to Mexican-American food truck owners. The concept of “Silicon Valley,” like “tech industry,” tends to obscure the lived experiences of ordinary individuals, reflecting more of a fantasy than a real place.

    Mary Beth Meehan and Fred Turner’s Seeing Silicon Valley follows the leftist photography tradition (think Lewis Hine or Dorothea Lange) of capturing working-class people in their everyday struggles. Made during a six-week Airbnb stay in the area, Meehan’s images are arresting, spotlighting the disparity within Santa Clara Valley through a humanistic lens, while Turner’s historically informed introduction and short essays provide a narrative through which to read the images. Silicon Valley is “a mirror of America itself. In that sense, it really is a city on a hill for our time” (Meehan and Turner 2021, 8). Through their presentation of life and work in Silicon Valley, Turner and Meehan push back against stereotypical, ahistorical visions of what Silicon Valley is. As Turner puts it, “The workers of Silicon Valley rarely look like the men idealized in its lore” (Meehan and Turner 2021, 7). Turner’s introduction critiques the rampant economic and racial inequality that exists in the Valley, and the United States as a whole, which is borne out in the later vignettes. Unhoused people, some of whom work for major tech companies in Mountain View, live in vans despite having degrees from Stanford. People are living with the repercussions of Superfund sites, hazardous jobs, and displacement. Several interviewees reference union campaigns, such as organizing around workplace injuries at the Tesla plant or contract security guards unionizing at Facebook, and their stories are accompanied by images of Silicon Valley Rising protest signs from an action in San Jose. Aside from an occasional direct quote, the narratives about the workers are truncated and editorialized. As the title would indicate, the book is above all a visual representation of life in Silicon Valley as a window into contemporary life in the US. Saturated colors and glossy pages make for a perfect coffee table object and one can imagine the images and text at home in a gallery space.
To some degree, it is a stealth operation: the book’s aesthetic qualities belie the sometimes difficult stories contained within, but its intended audience is more academic than revolutionary. Who at this point doesn’t believe that there are poor people in “Silicon Valley,” or that “tech labor” obscures what is more often than not racialized, gendered, embodied, and precarious forms of work?

    A second volume takes a different approach, focusing instead on the stories of individual tech workers. Ben Tarnoff and Moira Weigel, co-founders of Logic Magazine, co-edited Voices from the Valley as part of their larger Logic brand’s partnership series with FSG Originals. The sharply packaged volume includes anonymous accounts from venture capitalist bros as well as from subcontracted massage workers, rendering visible the “people behind the platform” in a secretive industry full of NDAs (Tarnoff and Weigel 2020, 3). As the book’s title suggests, the interviews are edited back-and-forths with a wide range of workers within the industry, emphasizing their unique perspectives. The subtitle promises “Tech Workers Talk About What They Do—And How They Do It.” This is a clear nod to Studs Terkel’s 1974 epic collection of over one hundred workers’ stories, Working: People Talk About What They Do All Day and How They Feel About What They Do, in which he similarly categorizes workers according to job description, from gravedigger to flight attendant. Terkel frames each interview and provides a description of the worker’s living conditions or other personal details, but for the most part, the workers speak on their own terms. In Tarnoff and Weigel’s contribution, we as readers hear from workers directly, although we do catch a glimpse of the interview prompts that drove the conversations. The editors also provide short essays introducing each “voice,” contextualizing their positions. Workers’ voices are there, to be sure, but they are also trimmed to match Logic’s aesthetic. Reviews of the book, even in leftist magazines like Jacobin, tend to focus as much on the (admittedly formidable) husband-and-wife editor duo as they do on the stories of the workers themselves. Even so, Tarnoff and Weigel emphasize the political salience of their project in their introduction, arguing that “Silicon Valley is now everywhere” (2020, 7) as “tech is a layer of every industry” (2020, 8).
They end their introduction with a call to the reader to “Speak, whoever you are. Your voice is in the Valley, too” (Tarnoff and Weigel 2020, 8).

    As in Meehan and Turner’s visually oriented book, Tarnoff and Weigel’s interviews point to the ways that badge color, as a class marker, combines with gender, immigration status, disability, and race to shape people’s experiences on the job. Much like Meehan and Turner’s intervention, the book gives as much space to the most elite voices as it does to those on the margins, spanning the entire breadth of the tech industry. There are scattered examples of activism, like white-collar organizing campaigns against Google’s Dragonfly and other #TechWontBuildIt manifestations. At one point, the individual known as “The Cook” names the Tech Workers Coalition (TWC). TWC volunteers were “computer techie hacker cool” and showed up to meetings or even union negotiations in solidarity with their subcontracted coworkers. The Cook notes that TWC thinks “everybody working for a tech company should be part of that company, in one sense or another” (Tarnoff and Weigel 2020, 68). There is an asterisk with a shorthand description of TWC, which has become something of a floating signifier of the tech workers’ movement. The international tech workers’ labor movement encompasses not only white-collar coders but also gig and warehouse workers, who are absent here. With only seven interviews included, the volume cannot address every perspective. Because the interviews with workers are abbreviated and punctuated by punchy subheadings, it can be hard to tell whose voices are really being heard. Is it the workers of Silicon Valley, or is it the editors? As with Meehan and Turner’s effort, the end result is largely a view from above, not within. Which isn’t to say there isn’t a place for this kind of aggregation, or that it can’t connect to organizing efforts, but is this volume more of a political work than Wiener’s or Ullman’s memoirs?

    In other interviews, workers reveal gendered workplace discrimination and other grievances that might prompt collective action. The person identified as “The Technical Writer” describes being terminated from her job after her boss suspects her pregnancy. (He eliminates the position instead of firing her directly, making it harder for her to prove pregnancy discrimination.) She decides not to pursue a lawsuit because, as she puts it, “Tech is actually kind of a small industry. You don’t want to be the woman who’s not easy to work with” (Tarnoff and Weigel 2020, 46). After being terminated, she finds work as a remote contractor, which allows her to earn an income while caring for her newborn and other young child. She describes the systemic misogyny in tech that leads to women in non-technical roles being seen as less valuable and maternity leave factoring into women’s lower salaries. But she laments the way that tech journalism tends to portray women as the objects, not the subjects, of stories, turning them into victims and focusing narratives on bad actors like James Damore, who penned the infamous Google memo against diversity in tech. Sensationalized stories of harassment and discrimination are meant to tug at the heartstrings, but workers’ agency is often missing in these narratives. In another striking interview, “The Massage Therapist,” who is a subcontracted worker within a large tech campus environment, says that despite beleaguered cafeteria workers needing massages more than coders, she was prohibited from treating anyone who wasn’t a full-time employee. The young women working there seemed sad and too stressed to make time for their massages.

    These personal but minor insights are often missing from popular narratives or journalistic accounts and so their value is readily apparent. The question then becomes, how do both personal memoirs and these shorter, aggregated collections of stories translate into changing collective class consciousness? What happens after the hidden stories of Silicon Valley are revealed? Is an awareness of mutual fuckedness enough to form a coalition?[4]

    *

    A first step might be to recognize the political power of the personal essay or memoir, rather than discounting the genre as a whole. Critiques of the personal essay are certainly not new; Virginia Woolf herself decried the genre’s “unclothed egoism.” Writing for The New Yorker in 2017, Jia Tolentino declared the death of the personal essay. For a time, the personal essay was everywhere: sites like The Awl, Jezebel, The Hairpin, and The Toast centered women’s stories of body horror, sex, work, pain, adversity, and, sometimes, rape. In an instant, the personal essay was apparently over, just as white supremacy and misogyny seemed resurgent. With the rise of Trumpism and the related techlash, personal stories were replaced with more concretely political takes. Personal essays are despised largely because they are written by and for women. Tolentino traces some of the anti-personal essay discourse to Emily Gould’s big personal reveal in The New York Times Magazine, foregrounding her perspective as a woman on the internet in the age of Gawker. In a 2020 essay in The Cut revisiting her Gawker shame and fame, Gould writes, “What the job did have, and what made me blind to everything it didn’t, was exposure. Every person who read the site knew my name, and in 2007, that was a lot of people. They emailed me and chatted with me and commented at me. Overnight, I had thousands of new friends and enemies, and at first that felt exhilarating, like being at a party all the time.” Gould describes her humiliation when a video of her fellating a plastic dildo at work goes viral on YouTube, likely uploaded by her boss, Nick Denton. After watching the infamous 2016 presidential debate, when Donald Trump creepily hovered behind Hillary Clinton, Gould’s body registers recognition, prompting a visit to her gynecologist, who tells her that her body is responding to past trauma:

    I once believed that the truth would set us free — specifically, that women’s first-person writing would “create more truth” around itself. This is what I believed when I published my first book, a memoir. And I must have still believed it when I began publishing other women’s books, too. I believed that I would become free from shame by normalizing what happened to me, by naming it and encouraging others to name it too. How, then, to explain why, at the exact same moment when first-person art by women is more culturally ascendant and embraced than it has ever been in my lifetime, the most rapacious, damaging forms of structural sexism are also on the rise?

    Gould has understandably lost her faith that women’s stories, no matter how much attention they receive, can overturn structural sexism. But what if the personal essay is, in fact, a site of praxis? Wiener, McNeil, Liu, and Ullman’s contributions are, to various extents, political works because they highlight experiences that are so often missing from mainstream tech narratives. Their power derives from their long-form personal accounts, which touch not only on work but on relationships, family, and personal histories. Just as much as the more overtly political edited volumes or oral histories, individual perspectives align with the Marxist practice of workers’ inquiry. Liu’s memoir, in particular, brings this connection to light. What stories are seen as true workers’ inquiry, part of leftist praxis, and which are deemed too personal, or too femme, to be truly political? When it comes to gathering and publishing workers’ stories, who is doing the collecting and for what purpose? As theorists like Nancy Fraser (2013) caution, too often feminist storytelling under the guise of empowerment, even in cases like the Google Walkout, can be enfolded back into neoliberalism. For instance, the cries of “This is what Googley looks like!” heard during the protest reinforced the company’s hallmark metric of belonging even as they reinterpreted it.

    As Asad Haider and Salar Mohandesi note in their detailed history of workers' inquiry for Viewpoint Magazine, Marx's original vision for workers' inquiry was never quite executed. His was a deeply empirical project, involving 101 questions about shop conditions, descriptions of fellow workers, and strikes or other organizing activities. Marx's point was that organizers must look to the working class itself to change its own working conditions. Workers' inquiry is a process of recognition, whereby reading someone else's account of their grievances leads to a kind of mutual understanding. Over time and in different geographic contexts, from France and Italy to the United States, workers' inquiry has entailed different approaches and end goals. Beyond the industrial factory worker, socialist feminists like Selma James gathered women's experiences: "A Woman's Place discussed the role of housework, the value of reproductive labor, and the organizations autonomously invented by women in the course of their struggle." The politics of attribution were tricky, and there were often tensions between academic research and political action. James published her account under a pen name. At other times, multi-authored and co-edited works were portrayed as one person's memoir. But the point was to take the singular experience and have it extend outward into the collective. As Haider and Mohandesi put it,

    If, however, the objective is to build class consciousness, then the distortions of the narrative form are not problems at all. They might actually be quite necessary. With these narratives, the tension in Marx’s workers’ inquiry – between a research tool on the one hand, and a form of agitation on the other – is largely resolved by subordinating the former to the latter, transforming inquiry into a means to the end of consciousness-building.

    The personal has always been political. Few would argue that Audre Lorde's deeply personal Cancer Journals is not also a political work. And Peter Kropotkin's memoir accounting for his revolutionary life begins with his memory of his mother's death. The consciousness-raising and knowledge-sharing of 1970s feminist projects like Our Bodies, Ourselves, the queer liberation movement, disability activism, and the Black Power movement related individual experiences to broader social justice struggles. Oral histories accounting for the individual lives of ethnic minority leftists in the US, like Paul Avrich's Anarchist Voices, Judy Kaplan and Linn Shapiro's Red Diapers, and Michael Keith Honey's Black Workers Remember, perform a similar kind of work. If Voices from the Valley and Seeing Silicon Valley are potentially valuable as political tools, then first-person accounts of life in tech should be seen as another fist in the same fight. There is an undeniable power attached to hearing workers' stories in their own words, and movements can emerge from the unlikeliest sources.

    EDIT (8/6/2021): a sentence was added to correctly describe Joanne McNeil’s background and work history.
    _____

    Tamara Kneese is an Assistant Professor of Media Studies and Director of Gender and Sexualities Studies at the University of San Francisco. Her first book on digital death care practices, Death Glitch, is forthcoming with Yale University Press. She is also the co-editor of The New Death (forthcoming Spring 2022, School for Advanced Research/University of New Mexico Press).


    _____

    Notes

    [1] I would include Kate Losse’s early, biting critique The Boy Kings, published in 2012, in this category. Losse was Facebook employee #51 and exposed the ways that nontechnical women, even those with PhDs, were marginalized by Zuckerberg and others in the company.

    [2] Workers’ inquiry combines research with organizing, constituting a process by which workers themselves produce knowledge about their own circumstances and use that knowledge as part of their labor organizing.

    [3] Noopur Raval (2021) questions the “invisibility” narratives within popular tech criticism, including Voices from the Valley and Seeing Silicon Valley, arguing that ghost laborers are not so ghostly to those living in the Global South.

    [4] With apologies to Fred Moten. See The Undercommons (2013).
    _____

    Works Cited

    • Paul Avrich. Anarchist Voices: An Oral History of Anarchism in the United States. Princeton, NJ: Princeton University Press, 1995.
    • Paulina Borsook. Cyberselfish: A Critical Romp Through the Terribly Libertarian Culture of High Tech. New York: Public Affairs, 2000.
    • Paul Buhle and Robin D. G. Kelley. “The Oral History of the Left in the United States: A Survey and Interpretation.” The Journal of American History 76, no. 2 (1989): 537-50. doi:10.2307/1907991.
    • Susan Fowler. Whistleblower: My Journey to Silicon Valley and Fight for Justice at Uber. New York: Penguin Books, 2020.
    • Nancy Fraser. Fortunes of Feminism: From State-Managed Capitalism to Neoliberal Crisis. New York: Verso, 2013.
    • Emma Goldman. Living My Life. New York: Alfred A. Knopf, 1931.
    • Emily Gould. “Exposed.” The New York Times Magazine, May 25, 2008, https://www.nytimes.com/2008/05/25/magazine/25internet-t.html.
    • Emily Gould. “Replaying My Shame.” The Cut, February 26, 2020. https://www.thecut.com/2020/02/emily-gould-gawker-shame.html
    • Asad Haider and Salar Mohandesi. “Workers’ Inquiry: A Genealogy.” Viewpoint Magazine, September 27, 2013, https://viewpointmag.com/2013/09/27/workers-inquiry-a-genealogy/.
    • Michael Keith Honey. Black Workers Remember: An Oral History of Segregation, Unionism, and the Freedom Struggle. Oakland: University of California Press, 2002.
    • Judy Kaplan and Linn Shapiro. Red Diapers: Growing Up in the Communist Left. Champaign, IL: University of Illinois Press, 1998.
    • Peter Kropotkin. Memoirs of a Revolutionist. Boston: Houghton Mifflin, 1899.
    • Wendy Liu. Abolish Silicon Valley: How to Liberate Technology from Capitalism. London: Repeater Books, 2020.
    • Wendy Liu. “Silicon Inquiry.” Notes From Below, January 29, 2018, https://notesfrombelow.org/article/silicon-inquiry.
    • Audre Lorde. The Cancer Journals. San Francisco: Aunt Lute Books, 1980.
    • Katherine Losse. The Boy Kings: A Journey Into the Heart of the Social Network. New York: Simon & Schuster, 2012.
    • Alice Lynd and Robert Staughton Lynd. Rank and File: Personal Histories by Working-Class Organizers. New York: Monthly Review Press, 1973.
    • Joanne McNeil. Lurking: How a Person Became a User. New York: MCD/Farrar, Straus and Giroux, 2020.
    • Mary Beth Meehan and Fred Turner. Seeing Silicon Valley: Life Inside a Fraying America. Chicago: University of Chicago Press, 2021.
    • Fred Moten and Stefano Harney. The Undercommons: Fugitive Planning & Black Study. New York: Minor Compositions, 2013.
    • Noopur Raval. “Interrupting Invisibility in a Global World.” ACM Interactions, July/August 2021, https://interactions.acm.org/archive/view/july-august-2021/interrupting-invisibility-in-a-global-world.
    • Ben Tarnoff and Moira Weigel. Voices from the Valley: Tech Workers Talk about What They Do—and How They Do It. New York: FSG Originals x Logic, 2020.
    • Studs Terkel. Working: People Talk About What They Do All Day and How They Feel About What They Do. New York: Pantheon Books, 1974.
    • Jia Tolentino. “The Personal-Essay Boom is Over.” The New Yorker, May 18, 2017, https://www.newyorker.com/culture/jia-tolentino/the-personal-essay-boom-is-over.
    • Ellen Ullman. Close to the Machine: Technophilia and Its Discontents. New York: Picador/Farrar, Straus and Giroux, 1997.
    • Ellen Ullman. Life in Code: A Personal History of Technology. New York: MCD/Farrar, Straus and Giroux, 2017.
    • Anna Wiener. “Uncanny Valley.” n+1, Spring 2016: Slow Burn, https://nplusonemag.com/issue-25/on-the-fringe/uncanny-valley/.
    • Anna Wiener. Uncanny Valley: A Memoir. New York: MCD/Farrar, Straus and Giroux, 2020.
    Zachary Loeb — Burn It All (Review of Mullaney, Peters, Hicks and Philip, eds., Your Computer Is on Fire)

    a review of Thomas S. Mullaney, Benjamin Peters, Mar Hicks and Kavita Philip, eds., Your Computer Is on Fire (MIT Press, 2021)

    by Zachary Loeb

    ~

    It often feels as though contemporary discussions about computers have perfected the art of talking around, but not specifically about, computers. Almost every week there is a new story about Facebook’s malfeasance, but usually such stories say little about the actual technologies without which such conduct could not have happened. Stories proliferate about the unquenchable hunger for energy that cryptocurrency mining represents, but the computers eating up that power are usually deemed less interesting than the currency being mined. Debates continue about just how much AI can really accomplish and just how soon it will be able to accomplish even more, but the public conversation winds up conjuring images of gleaming terminators marching across a skull-strewn wasteland instead of rows of servers humming in an undisclosed location. From Zoom to dancing robots, from Amazon to the latest Apple Event, from misinformation campaigns to activist hashtags—we find ourselves constantly talking about computers, and yet seldom talking about computers.

    All of the aforementioned specifics are important to talk about. If anything, we need to be talking more about Facebook’s malfeasance, the energy consumption of cryptocurrencies, the hype versus the realities of AI, Zoom, dancing robots, Amazon, misinformation campaigns, and so forth. But we also need to go deeper. Case in point: though it was a very unpopular position to take for many years, it is now a fairly safe position to say that “Facebook is a problem”; however, it remains a much less acceptable position to suggest that “computers are a problem.” At a moment in which it has become glaringly obvious that tech companies have politics, there remains a common sentiment that computers are neutral. Such a view can comfortably disparage Bill Gates and Jeff Bezos and Sundar Pichai and Mark Zuckerberg for the ways in which they have warped the potential of computing, while still holding out hope that computing can be a wonderful emancipatory tool if it can just be put in better hands.

    But what if computers are themselves, at least part of, the problem? What if some of our present technological problems have their roots deep in the history of computing, and not just in the dorm room where Mark Zuckerberg first put together FaceSmash?

    These are the sorts of troubling and provocative questions with which the essential new book Your Computer Is on Fire engages. It is a volume that recognizes that when we talk about computers, we need to actually talk about computers. A vital intervention into contemporary discussions about technology, this book wastes no energy on carefully worded declarations of fealty to computers and the Internet; there’s a reason why it is not titled Your Computer Might Be on Fire but Your Computer Is on Fire.

    The editors are quite upfront about the volume’s confrontational stance. Thomas Mullaney opens the book by declaring that “Humankind can no longer afford to be lulled into complacency by narratives of techno-utopianism or technoneutrality” (4). This is a point that Mullaney drives home as he notes that “the time for equivocation is over” before emphasizing that despite its at moments woebegone tonality, the volume is not “crafted as a call of despair but as a call to arms” (8). While the book sets out to offer a robust critique of computers, Mar Hicks highlights that the editors and contributors will do so in a historically grounded way, which includes a vital awareness that “there are almost always red flags and warning signs before a disaster, if one cares to look” (14); unfortunately, many of those who attempted to sound the alarm about the potential hazards of computing were either ignored or derided as technophobes. Where Mullaney had described the book as “a call to arms,” Hicks describes what sorts of actions this call may entail: “we have to support workers, vote for regulation, and protest (or support those protesting) widespread harms like racist violence” (23). And though the focus is on collective action, Hicks does not diminish the significance of individual ethical acts, noting powerfully (in words that may be particularly pointed at those who work for the big tech companies): “Don’t spend your life as a conscientious cog in a terribly broken system” (24).

    Your Computer Is on Fire begins like a political manifesto, and as the volume proceeds the contributors maintain that sense of righteous fury. In addition to introductions and conclusions, the book is divided into three sections: “Nothing is Virtual,” wherein contributors cut through the airy talking points to bring ideas about computing back to the ground; “This is an Emergency,” which sounds the alarm on many of the currently unfolding crises in and around computing; and “Where Will the Fire Spread?,” which turns a prescient gaze towards trajectories to be mindful of in the swiftly approaching future. Hicks notes, “to shape the future, look to the past” (24), and this is a prompt that the contributors take up with gusto as they carefully demonstrate how the outlines of our high-tech society were drawn long before Google became a verb.

    Drawing attention to the physicality of the Cloud, Nathan Ensmenger begins the “Nothing is Virtual” section by working to resituate “the history of computing within the history of industrialization” (35). Arguing that “The Cloud is a Factory,” Ensmenger digs beneath the seeming immateriality of the Cloud metaphor to extricate the human labor, human agendas, and environmental costs that get elided when “the Cloud” gets bandied about. The role of the human worker hiding behind the high-tech curtain is further investigated by Sarah Roberts, who explores how many of the high-tech solutions that purport to use AI to fix everything rely on the labor of human beings sitting in front of computers. As Roberts evocatively describes it, the “solutionist disposition toward AI everywhere is aspirational at its core” (66), and this desire for easy technological solutions covers up challenging social realities. While the Internet is often hailed as an American invention, Benjamin Peters discusses the US ARPANET alongside the ultimately unsuccessful network attempts of the Soviet OGAS and Chile’s Cybersyn, in order to show how “every network history begins with a history of the wider world” (81), and to demonstrate that networks have not developed by “circumventing power hierarchies” but by embedding themselves into those hierarchies (88). Breaking through the emancipatory hype surrounding the Internet, Kavita Philip explores the ways in which the Internet materially and ideologically reifies colonial logics of dominance and control, demonstrating how “the infrastructural internet, and our cultural stories about it, are mutually constitutive” (110). Mitali Thakor brings the volume’s first part to a close by considering how the digital age is “dominated by the feeling of paranoia” (120), discussing the development and deployment of sophisticated surveillance technologies (in this case, for the detection of child pornography).

    “Electronic computing technology has long been an abstraction of political power into machine form” (137): these lines from Mar Hicks eloquently capture the leitmotif that plays throughout the chapters that make up the second part of the volume. Hicks’ comment comes from an exploration of the sexism that has long been “a feature, not a bug” (135) of the computing sector, with particular consideration of the ways in which sexist hiring and firing practices undermined the development of England’s computing sector. Further exploring how the sexism of today’s tech sector has roots in its history, Corinna Schlombs looks to IBM to consider how that company suppressed workers’ efforts to organize by framing the company as a family—albeit one wherein father still knew best. The biases built into voice recognition technologies (such as Siri) are delved into by Halcyon Lawrence, who draws attention to the way that these technologies are biased against those with accents, a reflection of the lack of diversity amongst those who design them. In discussing robots, Safiya Umoja Noble explains how “Robots are the dreams of their designers, catering to the imaginaries we hold about who should do what in our societies” (202), and thus these robots reinscribe particular viewpoints and biases even as their creators claim to be creating robots for good. Shifting away from the flashiest gadgets of high-tech society, Andrea Stanton considers the cultural logics and biases embedded in word processing software that treats the demands of languages not written left to right as somehow aberrant. Considering how much of computer usage involves playing games, Noah Wardrip-Fruin argues that the limited set of video game logics keeps games from being about very much—a shooter is a shooter regardless of whether you are gunning down demons in hell or fanatics in a flooded ruin dense with metaphors.

    Oftentimes hiring more diverse candidates is hailed as the solution to the tech sector’s sexism and racism, but as Janet Abbate notes in the first chapter of the “Where Will the Fire Spread?” section, this approach generally attempts to force different groups to fit into Silicon Valley’s warped view of what attributes make for a good programmer. Abbate contends that equal representation will not be enough “until computer work is equally meaningful for groups who do not necessarily share the values and priorities that currently dominate Silicon Valley” (266). While computers do things to society, they also perform specific technical functions, and Ben Allen comments on source code to show the power that programmers have to insert nearly undetectable hacks into the systems they create. Returning to the question of code as empowerment, Sreela Sarkar discusses a skills training class held in Seelampur (near New Delhi) to show that “instead of equalizing disparities, IT-enabled globalization has created and further heightened divisions of class, caste, gender, religion, etc.” (308). Turning towards infrastructure, Paul Edwards considers how platforms have developed into infrastructure far more swiftly than older infrastructural systems did, which he explores through three examples in various African contexts (FidoNet, M-Pesa, and Free Basics). And Thomas Mullaney closes out the third section with a consideration of the way that the QWERTY keyboard gave rise to pushback and creative solutions from those who sought to type in non-Latin scripts.

    Just as two of the editors began the book with a call to arms, so too the other two editors close the book with a similar rallying cry. In assessing the chapters that came before, Kavita Philip emphasizes that the volume has chosen “complex, contradictory, contingent explanations over just-so stories” (364). The contributors, and editors, have worked with great care to make it clear that the current state of computers was not inevitable—that things currently are the way they are does not mean they had to be that way, or that they cannot be changed. Eschewing simplistic solutions, Philip notes that language, history, and politics truly matter to our conversations about computing, and that as we search for the way ahead we must be cognizant of all of them. In the book’s final piece, Benjamin Peters sets the computer fire against the backdrop of anthropogenic climate change and the COVID-19 pandemic, noting the odd juxtaposition between the progress narratives that surround technology and the ways in which “the world of human suffering has never so clearly appeared on the brink of ruin” (378). Pushing back against a simple desire to turn things off, Peters notes that “we cannot return the unasked for gifts of new media and computing” (380). Though the book has clearly been about computers, truly wrestling with these matters forces us to reflect on what it is that we really talk about when we talk about computers, and it turns out that “the question of life becomes how do not I but we live now?” (380)

    It is a challenging question, and it provides a fitting end to a book that challenges many of the dominant public narratives surrounding computers. And though the book has emphasized repeatedly how important it is to really talk about computers, this final question powers down the computer to force us to look at our own reflection in the mirrored surface of the computer screen.

    Yes, the book is about computers, but more than that it is about what it has meant to live with these devices—and what it might mean to live differently with them in the future.

    *

    With the creation of Your Computer Is on Fire the editors (Hicks, Mullaney, Peters, and Philip) have achieved an impressive feat. The volume is timely, provocative, wonderfully researched, filled with devastating insights, and composed in such a way as to make the contents accessible to a broad audience. It might seem a bit hyperbolic to suggest that anyone who has used a computer in the last week should read this book, but anyone who has used a computer in the last week should read this book. Scholars will benefit from the richly researched analysis, students will enjoy the forthright tone of the chapters, and anyone who uses computers will come away from the book with a clearer sense of the way in which these discussions matter for them and the world in which they live.

    For what this book accomplishes so spectacularly is to make it clear that when we think about computers and society it isn’t sufficient to just think about Facebook or facial recognition software or computer skills courses—we need to actually think about computers. We need to think about the history of computers, we need to think about the material aspects of computers, we need to think about the (oft-unseen) human labor that surrounds computers, we need to think about the language we use to discuss computers, and we need to think about the political values embedded in these machines and the political moments out of which these machines emerged. And yet, even as we shift our gaze to look at computers more critically, the contributors to Your Computer Is on Fire continually remind the reader that when we are thinking about computers we need to be thinking about deeper questions than just those about machines, we need to be considering what kind of technological world we want to live in. And moreover we need to be thinking about who is included and who is excluded when the word “we” is tossed about casually.

    Your Computer Is on Fire is simultaneously a book that will make you think, and a good book to think with. In other words, it is precisely the type of volume that is so desperately needed right now.

    The book derives much of its power from the willingness of the contributors to write in a declarative style. In this book, criticisms are not carefully couched behind three layers of praise for Silicon Valley and odes of affection for smartphones; rather, the contributors stand firm in declaring that there are real problems (with historical roots) and that we are not going to be able to address them by pledging fealty to the companies that have so consistently shown a disregard for the broader world. This tone results in too many wonderful turns of phrase and incendiary remarks to list here, but the broader discussion around computers would be greatly enhanced by more comments like Janet Abbate’s: “We have Black Girls Code, but we don’t have ‘White Boys Collaborate’ or ‘White Boys Learn Respect.’ Why not, if we want to nurture the full set of skills needed in computing?” (263) While critics of technology often find themselves arguing from a defensive position, Your Computer Is on Fire is a book that almost gleefully goes on the offense.

    It almost seems like a disservice to the breadth of contributions to the volume to try to sum up its core message in a few lines, or to attempt to neatly capture the key takeaways in a few sentences. Nevertheless, insofar as the book has a clear undergirding position beyond the titular idea, it is the one eloquently captured by Mar Hicks thus:

    High technology is often a screen for propping up idealistic progress narratives while simultaneously torpedoing meaningful social reform with subtle and systemic sexism, classism, and racism…The computer revolution was not a revolution in any true sense: it left social and political hierarchies untouched, at times even strengthening them and heightening inequalities. (152)

    And this is the matter with which each contributor wrestles, as they break apart the “idealistic progress narratives” to reveal the ways that computers have time and again strengthened the already existing power structures…even if many people get to enjoy new shiny gadgets along the way.

    Your Computer Is on Fire is a jarring assessment of the current state of our computer-dependent societies and how they came to be the way they are; however, in considering this new book it is worth bearing in mind that it is not the first volume to try to capture the state of computers in a moment in time. That we find ourselves in the present position is, unfortunately, a testament to decades of unheeded warnings.

    One of the objectives taken up throughout Your Computer Is on Fire is to counter the techno-utopian ideology that never so much dies as shifts into the hands of some new would-be techno-savior wearing a crown of 1s and 0s. However, even as the mantle of techno-savior shifts from Mark Zuckerberg to Elon Musk, it seems that we may be in a moment when fewer people are willing to uncritically accept the idea that technological progress is synonymous with social progress. Though, if we are being frank, adoring faith in technology remains the dominant sentiment (at least in the US). Furthermore, this is not the first moment in which growing distrust and dissatisfaction with technological forces has risen, nor the first time that scholars have sought to speak out. Therefore, even as Your Computer Is on Fire provides fantastic accounts of the history of computing, it is worthwhile to consider where this vital new volume fits within the history of critiques of computing. Or, to frame this slightly differently: in what ways is the 21st-century critique of computing different from the 20th-century critique of computing?

    In 1979 the MIT Press published the edited volume The Computer Age: A Twenty Year View. Edited by Michael Dertouzos and Joel Moses, that book brought together a variety of influential figures from the early history of computing, including J.C.R. Licklider, Herbert Simon, Marvin Minsky, and many others. The book was an overwhelmingly optimistic affair, and though the contributors anticipated that the mass uptake of computers would lead to some disruptions, they imagined that all of these changes would ultimately be for the best. Granted, the book was not without a critical voice. The computer scientist turned critic Joseph Weizenbaum was afforded a chapter in a quarantined “Critiques” section from which to cast doubts on the utopian hopes that had filled the rest of the volume. And though Weizenbaum’s criticisms were presented, the book’s introduction politely scoffed at his woebegone outlook, and his chapter was followed by not one but two barbed responses, which ensured that his critical voice was not given the last word. Any attempt to assess The Computer Age at this point will likely say as much about the person doing the assessing as about the volume itself, and yet it would take a real commitment to seeing only the positive sides of computers to deny that the volume’s disparaged critic was one of its most prescient contributors.

    If The Computer Age can be seen as a reflection of the state of discourse surrounding computers in 1979, then Your Computer Is on Fire is a blazing demonstration of how greatly those discussions had changed by 2021. This is not to suggest that the techno-utopian mindset that so infused The Computer Age no longer exists. Alas, far from it.

    As the contributors to Your Computer Is on Fire make clear repeatedly, much of the present discussion around computing is dominated by hype and hopes. And a consideration of those conversations in the second half of the twentieth century reveals that hype and hope were dominant forces then as well. Granted, for much of that period (arguably until the mid-1980s, with widespread use not really taking off until the 1990s), computers remained technologies with which most people had relatively little direct interaction. The mammoth machines of the 1960s and 1970s were not all top-secret (though some certainly were), but when social critics warned about computers in the 50s, 60s, and 70s they were not describing machines that had become ubiquitous—even if they warned that those machines would eventually become so. Thus, when Lewis Mumford warned in 1956 that:

    In creating the thinking machine, man has made the last step in submission to mechanization; and his final abdication before this product of his own ingenuity has given him a new object of worship: a cybernetic god. (Mumford, 173)

    It is somewhat understandable that his warning would be met with rolled eyes and impatient scoffs, for “the thinking machine” at that point remained isolated enough from most people’s daily lives that the idea that it was “a new object of worship” seemed almost absurd. Though he continued issuing dire predictions about computers, by 1970, when Mumford wrote of the development of “computer dominated society,” this warning could still be dismissed as absurd hyperbole. And when Mumford’s friend, the aforementioned Joseph Weizenbaum, laid out a blistering critique of computers and the “artificial intelligentsia” in 1976, those warnings were still somewhat muddled, as the computer remained largely out of sight and out of mind for large parts of society. Of course, these critics recognized that this “cybernetic god” had not yet become the new dominant faith, but they issued such warnings out of a sense that this was the direction in which things were developing.

    Already by the 1980s it was apparent to many scholars and critics that, despite the hype and revolutionary lingo, computers were primarily retrenching existing power relations while elevating the authority of a variety of new companies. And this gave rise to heated debates about how (and if) these technologies could be reclaimed and repurposed—Donna Haraway’s classic Cyborg Manifesto emerged out of those debates. By the time of 1990’s “Neo-Luddite Manifesto,” wherein Chellis Glendinning pointed to “computer technologies” as one of the types of technologies the Neo-Luddites were calling to be dismantled, the computer was becoming less and less an abstraction and more and more a feature of many people’s daily work lives. Though there is not space here to fully develop this argument, it may well be that the 1990s represent the decade in which many people found themselves suddenly in a “computer dominated society.” Indeed, though Y2K is unfortunately often remembered as something of a hoax today, delving back into what was written about that crisis as it was unfolding makes it clear that in many sectors Y2K was the moment when people were forced to fully reckon with how quickly and how deeply they had become highly reliant on complex computerized systems. And, of course, much of what we know about the history of computing in those decades of the twentieth century we owe to the phenomenal research that has been done by many of the scholars who have contributed chapters to Your Computer Is on Fire.

    While Your Computer Is on Fire provides essential analyses of events from the twentieth century, as a critique it is very much a reflection of the twenty-first century. It is a volume that represents a moment in which critics are no longer warning “hey, watch out, or these computers might be on fire in the future” but in which critics can now confidently state “your computer is on fire.” In 1956 it could seem hyperbolic to suggest that computers would become “a new object of worship”; by 2021 such faith is on full display. In 1970 it was possible to warn of the threat of “computer dominated society”; by 2021 that “computer dominated society” has truly arrived. In the 1980s it could be argued that computers were reinforcing dominant power relations; in 2021 this is no longer a particularly controversial position. And perhaps most importantly, in 1990 it could still be suggested that computer technologies should be dismantled, but by 2021 the idea of dismantling these technologies that have become so interwoven in our daily lives seems dangerous, absurd, and unwanted. Your Computer Is on Fire is in many ways an acknowledgement that we are now living in the type of society about which many of the twentieth century’s technological critics warned. In the book’s final conclusion, Benjamin Peters pushes back against “Luddite self-righteousness” to note that “I can opt out of social networks; many others cannot” (377). The emergence of this moment, wherein the ability to “opt out” has itself become a privilege, is precisely the sort of danger about which so many of the last century’s critics were so concerned.

    To look back at critiques of computers made throughout the twentieth century is in many ways a fairly depressing activity. For it reveals that many of those who were scorned as “doom mongers” had a fairly good sense of what computers would mean for the world. Certainly, some will continue to mock such figures for their humanism or borderline romanticism, but they were writing and living in a moment when the idea of living without a smartphone had not yet become unthinkable. As the contributors to this essential volume make clear, Your Computer Is on Fire, and yet too many of us still seem to believe that we are wearing asbestos gloves, and that if we suppress the flames of Facebook we will be able to safely warm our toes on our burning laptop.

    What Your Computer Is on Fire achieves so masterfully is to remind its readers that the wired up society in which they live was not inevitable, and what comes next is not inevitable either. And to remind them that if we are going to talk about what computers have wrought, we need to actually talk about computers. And yet the book is also a discomforting testament to a state of affairs wherein most of us simply do not have the option of swearing off computers. They fill our homes, they fill our societies, they fill our language, and they fill our imaginations. Thus, in dealing with this fire a first important step is to admit that there is a fire, and to stop absentmindedly pouring gasoline on everything. As Mar Hicks notes:

    Techno-optimist narratives surrounding high-technology and the public good—ones that assume technology is somehow inherently progressive—rely on historical fictions and blind spots that tend to overlook how large technological systems perpetuate structures of dominance and power already in place. (137)

    And as Kavita Philip describes:

    it is some combination of our addiction to the excitement of invention, with our enjoyment of individualized sophistications of a technological society, that has brought us to the brink of ruin even while illuminating our lives and enhancing the possibilities of collective agency. (365)

    Historically rich, provocatively written, engaging and engaged, Your Computer Is on Fire is a powerful reminder that when it is properly controlled fire can be useful, but when fire is allowed to rage out of control it turns everything it touches to ash. This book is not only a must read, but a must wrestle with, a must think with, and a must remember. After all, the “your” in the book’s title refers to you.

    Yes, you.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2o Review Digital Studies section.

    Back to the essay

    Works Cited

    • Lewis Mumford. The Transformations of Man. New York: Harper and Brothers, 1956.

  • Anthony Galluzzo — Utopia as Method, Social Science Fiction, and the Flight From Reality (Review of Frase, Four Futures)

    Anthony Galluzzo — Utopia as Method, Social Science Fiction, and the Flight From Reality (Review of Frase, Four Futures)

    a review of Peter Frase, Four Futures: Life After Capitalism (Verso Jacobin Series, 2016)

    by Anthony Galluzzo

    ~

    Charlie Brooker’s acclaimed British techno-dystopian television series, Black Mirror, returned last year in a more American-friendly form. The third season, now broadcast on Netflix, opened with “Nosedive,” a satirical depiction of a recognizable near future when user-generated social media scores—on the model of Yelp reviews, Facebook likes, and Twitter retweets—determine life chances, including access to basic services, such as housing, credit, and jobs. The show follows striver Lacie Pound—played by Bryce Dallas Howard—who, in seeking to boost her solid 4.2 life score, ends up inadvertently wiping out all of her points, in the nosedive named by the episode’s title. Brooker offers his viewers a nightmare variation on a now familiar online reality, as Lacie rates every human interaction and is rated in turn, to disastrous result. And this nightmare is not so far from the case, as online reputational hierarchies increasingly determine access to precarious employment opportunities. We can see this process in today’s so-called sharing economy, in which user approval determines how many rides will go to the Uber driver, or if the room you are renting on Airbnb, in order to pay your own exorbitant rent, gets rented.

    Brooker grappled with similar themes during the show’s first season; for example, “Fifteen Million Merits” shows us a future world of human beings forced to spend their time on exercise bikes, presumably in order to generate power plus the “merits” that function as currency, even as they are forced to watch non-stop television, advertisements included. It is television—specifically a talent show—that offers an apparent escape to the episode’s protagonists. Brooker revisits these concerns—which combine anxieties regarding new media and ecological collapse in the context of a viciously unequal society—in the final episode of the new season, entitled “Hated in the Nation,” which features robotic bees, built for pollination in a world after colony collapse, that are hacked and turned to murderous use. Here is an apt metaphor for the virtual swarming that characterizes so much online interaction.

    Black Mirror corresponds to what literary critic Tom Moylan calls a “critical dystopia.” [1] Rather than a simple exercise in pessimism or anti-utopianism, Moylan argues that critical dystopias, like their utopian counterparts, also offer emancipatory political possibilities in exposing the limits of our social and political status quo, such as the naïve techno-optimism that is certainly one object of Brooker’s satirical anatomies. Brooker in this way does what Jacobin Magazine editor and social critic Peter Frase claims to do in his Four Futures: Life After Capitalism, a speculative exercise in “social science fiction” that uses utopian and dystopian science fiction as means to explore what might come after global capitalism. Ironically, Frase includes both online reputational hierarchies and robotic bees in his two utopian scenarios: one of the more dramatic, if perhaps inadvertent, ways that Frase collapses dystopian into utopian futures.

    Frase echoes the opening lines of Marx and Engels’ Communist Manifesto as he describes the twin “specters of ecological catastrophe and automation” that haunt any possible post-capitalist future. While total automation threatens to make human workers obsolete, the global planetary crisis threatens life on earth, as we have known it for the past 12,000 years or so. Frase contends that we are facing a “crisis of scarcity and a crisis of abundance at the same time,” making our moment one “full of promise and danger.” [2]

    The attentive reader can already see in this introductory framework the too-often unargued assumptions and easy dichotomies that characterize the book as a whole. For example, why is total automation plausible in the next 25 years, according to Frase, who largely supports this claim by drawing on the breathless pronouncements of a technophilic business press that has made similar promises for nearly a hundred years? And why does automation equal abundance—assuming the more egalitarian social order that Frase alternately calls “communism” or “socialism”—especially when we consider the ecological crisis Frase invokes as one of his two specters? This crisis is very much bound to an energy-intensive technosphere that is already pushing against several of the planetary boundaries that make for a habitable planet; total automation would expand this same technosphere by several orders of magnitude, requiring that much more energy, materials, and environmental sinks to absorb tomorrow’s life-sized iPhones or their corpses. Frase deliberately avoids these empirical questions—and the various debates among economists, environmental scientists and computer programmers about the feasibility of AI, the extent to which automation is actually displacing workers, and the ecological limits to technological growth, at least as technology is currently constituted—by offering his work as the “social science fiction” mentioned above, perhaps in the vein of Black Mirror. He distinguishes this method from futurism or prediction, as he writes, “science fiction is to futurism as social theory is to conspiracy theory.” [3]

    In one of his few direct citations, Frase invokes Marxist literary critic Fredric Jameson, who argues that conspiracy theory and its fictions are ideologically distorted attempts to map an elusive and opaque global capitalism: “Conspiracy, one is tempted to say, is the poor person’s cognitive mapping in the postmodern age; it is the degraded figure of the total logic of late capital, a desperate attempt to represent the latter’s system, whose failure is marked by its slippage into sheer theme and content.” [4] For Jameson, a more comprehensive cognitive map of our planetary capitalist civilization necessitates new forms of representation to better capture and perhaps undo our seemingly eternal and immovable status quo. In the words of McKenzie Wark, Jameson proposes nothing less than a “theoretical-aesthetic practice of correlating the field of culture with the field of political economy.” [5] And it is possibly with this “theoretical-aesthetic practice” in mind that Frase turns to science fiction as his preferred tool of social analysis.

    The book accordingly proceeds by way of a grid organized around the coordinates “abundance/scarcity” and “egalitarianism/hierarchy”—in another echo of Jameson, namely his structuralist penchant for Greimas squares. Hence we get abundance with egalitarianism, or “communism,” followed by its dystopian counterpart, rentism, or hierarchical plenty in the first two futures; similarly, the final futures move from an equitable scarcity, or “socialism,” to a hierarchical and apocalyptic “exterminism.” Each of these chapters begins with a science fiction, ranging from an ostensibly communist Star Trek to the exterminationist visions presented in Orson Scott Card’s Ender’s Game, upon which Frase builds his various future scenarios. These scenarios are more often than not commentaries on present day phenomena, such as 3D printers or the sharing economy, or advocacy for various measures, like a Universal Basic Income, which Frase presents as the key to achieving his desired communist future.

    With each of his futures anchored in a literary (or cinematic, or televisual) science fiction narrative, Frase’s speculations rely on imaginative literature, even as he avoids any explicit engagement with literary criticism and theory, such as the aforementioned work of Jameson. Jameson famously argues (see Jameson 1982, and the more elaborated later versions in texts such as Jameson 2005) that the utopian text, beginning with Thomas More’s Utopia, simultaneously offers a mystified version of dominant social relations and an imaginative space for rehearsing radically different forms of sociality. But this dialectic of ideology and utopia is absent from Frase’s analysis, where his select space operas are all good or all bad: either the Jetsons or Elysium.

    And, in a marked contrast with Jameson’s symptomatic readings, some science fiction is for Frase more equal than others when it comes to radical sociological speculation, as evinced by his contrasting views of George Lucas’s Star Wars and Gene Roddenberry’s Star Trek.  According to Frase, in “Star Wars, you don’t really care about the particularities of the galactic political economy,” while in Star Trek, “these details actually matter. Even though Star Trek and Star Wars might superficially look like similar tales of space travel and swashbuckling, they are fundamentally different types of fiction. The former exists only for its characters and its mythic narrative, while the latter wants to root its characters in a richly and logically structured social world.” [6]

    Frase here understates his investment in Star Trek, whose “structured social world” is later revealed as his ideal-type for a high tech fully automated luxury communism, while Star Wars is relegated to the role of the space fantasy foil. But surely the original Star Wars is at least an anticolonial allegory, intentionally inspired by the Vietnam War, in which a ragtag rebel alliance faces off against a technologically superior evil empire. Lucas turned to the space opera after he lost his bid to direct Apocalypse Now—which was originally based on Lucas’s own idea. According to one account of the franchise’s genesis, “the Vietnam War, which was an asymmetric conflict with a huge power unable to prevail against guerrilla fighters, instead became an influence on Star Wars. As Lucas later said, ‘A lot of my interest in Apocalypse Now carried over into Star Wars.’” [7]

    Texts—literary, cinematic, and otherwise—often combine progressive and reactionary, utopian and ideological elements. Yet it is precisely the mixed character of speculative narrative that Frase ignores throughout his analysis, reducing each of his literary examples to unequivocally good or bad, utopian or dystopian, blueprints for “life after capitalism.” Why anchor radical social analysis in various science fictions while refusing basic interpretive argument? As with so much else in Four Futures, Frase uses assumption—asserting that Star Trek has one specific political valence or that total automation guided by advanced AI is an inevitability within 25 years—in the service of his preferred policy outcomes (and the nightmare scenarios that function as the only alternatives to those outcomes), while avoiding engagement with debates related to technology, ecology, labor, and the utopian imagination.

    Frase in this way evacuates the politically progressive and critical utopian dimensions from George Lucas’s franchise, elevating the escapist and reactionary dimensions that represent the ideological, as opposed to the utopian, pole of this fantasy. Frase similarly ignores the ideological elements of Roddenberry’s Star Trek: “The communistic quality of the Star Trek universe is often obscured because the films and TV shows are centered on the military hierarchy of Starfleet, which explores the galaxy and comes into conflict with alien races. But even this seems largely a voluntarily chosen hierarchy.” [8]

    Frase’s focus, regarding Star Trek, is almost entirely on the replicators that can make something, anything, from nothing, so that Captain Picard, from the eighties-era series reboot, orders a “cup of Earl Grey, hot,” from one of these magical machines, and immediately receives Earl Grey, hot. Frase equates our present-day 3D printers with these same replicators over the course of all his four futures, despite the fact that unlike replicators, 3D printers require inputs: they do not make matter, but shape it.

    3D printing encompasses a variety of processes in which would-be makers create an image with a computer and CAD (computer aided design) software, which in turn provides a blueprint for the three-dimensional object to be “printed.” This requires either the addition of material—usually plastic—or the injection of that material into a mould. The most basic type of 3D printing involves heating “(plastic, glue-based) material that is then extruded through a nozzle. The nozzle is attached to an apparatus similar to a normal 2D ink-jet printer, just that it moves up and down, as well. The material is put on layer over layer. The technology is not substantially different from ink-jet printing, it only requires slightly more powerful computing electronics and a material with the right melting and extrusion qualities.” [9] This is still the most affordable and pervasive way to make objects with 3D printers—most often used to make small models and components. It is also the version of 3D printing that lends itself to celebratory narratives of post-industrial techno-artisanal home manufacture pushed by industry cheerleaders and enthusiasts alike. Yet the more elaborate versions of 3D printing—“printing” everything from complex machinery to food to human organs—rely on the more complex and expensive industrial versions of the technology that require lasers (e.g., stereolithography and selective laser sintering). Frase espouses a particular left techno-utopian line that sees the end of mass production in 3D printing—especially with the free circulation of the programs for various products outside of our intellectual property regime; this is how he distinguishes his communist utopia from the dystopian rentism that most resembles our current moment, with material abundance taken for granted. And it is this fantasy of material abundance and post-work/post-worker production that presumably appeals to Frase, who describes himself as an advocate of “enlightened Luddism.”

    This is an inadvertently ironic characterization, considering the extent to which these emancipatory claims conceal and distort the labor discipline imperative that is central to the shape and development of this technology. As Johan Söderberg argues, “we need to put enthusiastic claims for 3D printers into perspective. One claim is that laid-off American workers can find a new source of income by selling printed goods over the Internet, which will be an improvement, as degraded factory jobs are replaced with more creative employment opportunities. But factory jobs were not always monotonous. They were deliberately made so, in no small part through the introduction of the same technology that is expected to restore craftsmanship. ‘Makers’ should be seen as the historical result of the negation of the workers’ movement.” [10]

    Söderberg draws on the work of David Noble, who outlines how the numerical control technology central to the growth of post-war factory automation was developed specifically to de-skill and dis-empower workers during the Cold War period. Unlike Frase, both of these authors foreground those social relations, which include capital’s need to more thoroughly exploit and dominate labor, embedded in the architecture of complex megatechnical systems, from factory automation to 3D printers. In collapsing 3D printers into Star Trek-style replicators, Frase avoids these questions as well as the more immediately salient issue of resource constraints that should occupy any prognostication that takes the environmental crisis seriously.

    The replicator is the key to Frase’s dream of endless abundance on the model of post-war US style consumer affluence and the end of all human labor. But, rather than a simple blueprint for utopia, Star Trek’s juxtaposition of techno-abundance with military hierarchy and a tacitly expansionist galactic empire—despite the show’s depiction of a Starfleet “prime directive” that forbids direct intervention into the affairs of the extraterrestrial civilizations encountered by the federation’s starships, the Enterprise’s crew, like its ostensibly benevolent US original, almost always intervenes—is significant. The original Star Trek is arguably a liberal iteration of Kennedy-era US exceptionalism, and reflects a moment in which relatively wide-spread first world abundance was underwritten by the deliberate underdevelopment, appropriation, and exploitation of various “alien races’” resources, land, and labor abroad. Abundance in fact comes from somewhere and some one.

    As historian H. Bruce Franklin argues, the original series reflects US Cold War liberalism, which combined Roddenberry’s progressive stances regarding racial inclusion within the parameters of the United States and its Starfleet doppelganger, with a tacitly anti-communist expansionist viewpoint, so that the show’s Klingon villains often serve as proxies for the Soviet menace. Franklin accordingly charts the show’s depictions of the Vietnam War, moving from a pro-war and pro-American stance to a mildly anti-war position in the wake of the Tet Offensive over the course of several episodes: “The first of these two episodes, ‘The City on the Edge of Forever‘ and ‘A Private Little War,’ had suggested that the Vietnam War was merely an unpleasant necessity on the way to the future dramatized by Star Trek. But the last two, ‘The Omega Glory‘ and ‘Let That Be Your Last Battlefield,’ broadcast in the period between March 1968 and January 1969, are so thoroughly infused with the desperation of the period that they openly call for a radical change of historic course, including an end to the Vietnam War and to the war at home.” [11]

    Perhaps Frase’s inattention to Jameson’s dialectic of ideology and utopia reflects a too-literal approach to these fantastical narratives, even as he proffers them as valid tools for radical political and social analysis. We could see in this inattention a bit too much of the fan-boy’s enthusiasm, which is also evinced by the rather narrow and backward-looking focus on post-war space operas to the exclusion of the self-consciously radical science fiction narratives of Ursula Le Guin, Samuel Delany, and Octavia Butler, among others. These writers use the tropes of speculative fiction to imagine profoundly different social relations that are the end-goal of all emancipatory movements. In place of emancipated social relations, Frase too often relies on technology, and his readings must in turn be read with these limitations in mind.

    Unlike the best speculative fiction, utopian or dystopian, Frase’s “social science fiction” too often avoids the question of social relations—including the social relations embedded in the complex megatechnical systems Frase takes for granted as neutral forces of production. He accordingly announces at the outset of his exercise: “I will make the strongest assumption possible: all need for human labor in the production process can be eliminated, and it is possible to live a life of pure leisure while machines do all the work.” [12] The science fiction trope effectively absolves Frase from engagement with the technological, ecological, or social feasibility of these predictions, even as he announces his ideological affinities with a certain version of post- and anti-work politics that breaks with orthodox Marxism and its socialist variants.

    Frase’s Jetsonian vision of the future resonates with various futurist currents that we can now see across the political spectrum, from the Silicon Valley Singularitarianism of Ray Kurzweil or Elon Musk, on the right, to various neo-Promethean currents on the left, including so-called “left accelerationism.” Frase defends his assumption as a desire “to avoid long-standing debates about post-capitalist organization of the production process.” While such a strict delimitation is permissible for speculative fiction—an imaginative exercise regarding what is logically possible, including time travel or immortality—Frase specifically offers science fiction as a mode of social analysis, which presumably entails grappling with rather than avoiding current debates on labor, automation, and the production process.

    Ruth Levitas, in her 2013 book Utopia as Method: The Imaginary Reconstitution of Society, offers a more rigorous definition of social science fiction via her eponymous “utopia as method.” This method combines sociological analysis and imaginative speculation, which Levitas defends as “holistic. Unlike political philosophy and political theory, which have been more open than sociology to normative approaches, this holism is expressed at the level of concrete social institutions and processes.” [13] But that attentiveness to concrete social institutions and practices combined with counterfactual speculation regarding another kind of human social world are exactly what is missing in Four Futures. Frase uses grand speculative assumptions—such as the inevitable rise of human-like AI or the complete disappearance of human labor, all within 25 years or so—in order to avoid significant debates that are ironically much more present in purely fictional works, such as the aforementioned Black Mirror or the novels of Kim Stanley Robinson, than in his own overtly non-fictional speculations. From the standpoint of radical literary criticism and radical social theory, Four Futures is wanting. It fails as analysis. And, if one primary purpose of utopian speculation, in its positive and negative forms, is to open an imaginative space in which wholly other forms of human social relations can be entertained, Frase’s speculative exercise also exhibits a revealing paucity of imagination.

    This is most evident in Frase’s most explicitly utopian future, which he calls “communism,” without any mention of class struggle, the collective ownership of the means of production, or any of the other elements we usually associate with “communism”; instead, 3D printers-cum-replicators will produce whatever you need whenever you need it at home, an individualizing techno-solution to the problem of labor, production, and its organization that resembles alchemy in its indifference to material reality and the scarce material inputs required by 3D printers. Frase proffers a magical vision of technology so as to avoid grappling with the question of social relations; even more than this, in the coda to this chapter, Frase reveals the extent to which current patterns of social organization and stratification remain under Frase’s “communism.” Frase begins this coda with a question: “in a communist society, what do we do all day?” To which he responds: “The kind of communism I’ve described is sometimes mistakenly construed, by both its critics and its adherents, as a society in which hierarchy and conflict are wholly absent. But rather than see the abolition of the capital-wage relation as a single-shot solution to all possible social problems, it is perhaps better to think of it in the terms used by political scientist Corey Robin, as a way to ‘convert hysterical misery into ordinary unhappiness.’” [14]

    Frase goes on to argue—rightly—that the abolition of class society or wage labor will not put an end to a variety of other oppressions, such as those based in gender and racial stratification; he in this way departs from the class reductionist tendencies sometimes on view in the magazine he edits. His invocation of Corey Robin is nonetheless odd considering the Promethean tenor of Frase’s preferred futures. Robin contends that while the end of exploitation, and capitalist social relations, would remove the major obstacle to human flourishing, human beings will remain finite and fragile creatures in a finite and fragile world. Robin in this way overlaps with Fredric Jameson’s remarkable essay on Soviet writer Andrei Platonov’s Chevengur, in which Jameson writes: “Utopia is merely the political and social solution of collective life: it does not do away with the tensions and contradictions inherent in both interpersonal relations and in bodily existence itself (among them, those of sexuality), but rather exacerbates those and allows them free rein, by removing the artificial miseries of money and self-preservation [since] it is not the function of Utopia to bring the dead back to life nor abolish death in the first place.” [15] Both Jameson and Robin recall Frankfurt School thinker Herbert Marcuse’s distinction between necessary and surplus repression: while the latter encompasses all of the unnecessary miseries attendant upon a class stratified form of social organization that runs on exploitation, the former represents the necessary adjustments we make to socio-material reality and its limits.

    It is telling that while Star Trek-style replicators fall within the purview of the possible for Frase, hierarchy, like death, will always be with us, since he at least initially argues that status hierarchies will persist after the “organizing force of the capital relation has been removed” (59). Frase oscillates between describing these status hierarchies as an unavoidable, if unpleasant, necessity and a desirable counter to the uniformity of an egalitarian society. Frase illustrates this point in recalling Cory Doctorow’s Down and Out in The Magic Kingdom, a dystopian novel that depicts a world where all people’s needs are met at the same time that everyone competes for reputational “points”—called Whuffie—on the model of Facebook “likes” and Twitter retweets. Frase’s communism here resembles the world of Black Mirror described above. Frase nonetheless shifts from the rhetoric of necessity to qualified praise in an extended discussion of Dogecoin, an alternative currency used to tip or “transfer a small number of [Dogecoins] to another Internet user in appreciation of their witty and helpful contributions” (60). Yet Dogecoin, among all cryptocurrencies, is mostly a joke, and like many cryptocurrencies is one whose “decentralized” nature scammers have used to their own advantage, most famously in 2015. In the words of one former enthusiast: “Unfortunately, the whole ordeal really deflated my enthusiasm for cryptocurrencies. I experimented, I got burned, and I’m moving on to less gimmicky enterprises.” [16]

    But how is this dystopian scenario either necessary or desirable?  Frase contends that “the communist society I’ve sketched here, though imperfect, is at least one in which conflict is no longer based in the opposition between wage workers and capitalists or on struggles…over scarce resources” (67). His account of how capitalism might be overthrown—through a guaranteed universal income—is insufficient, while resource scarcity and its relationship to techno-abundance remains unaddressed in a book that purports to take the environmental crisis seriously. What is of more immediate interest in the case of this coda to his most explicitly utopian future is Frase’s non-recognition of how internet status hierarchies and alternative currencies are modeled on and work in tandem with capitalist logics of entrepreneurial selfhood. We might consider Pierre Bourdieu’s theory of social and cultural capital in this regard, or how these digital platforms and their ever-shifting reputational hierarchies are the foundation of what Jodi Dean calls “communicative capitalism.” [17]

    Yet Frase concludes his chapter by telling his readers that it would be a “misnomer” to call his communist future an “egalitarian configuration.” Perhaps Frase offers his fully automated Facebook utopia as a counterpoint to the Cold War-era critique of utopianism in general and communism in particular: that it leads to grey uniformity and universal mediocrity. This response—a variation on Frase’s earlier discussion of Star Trek’s “voluntary hierarchy”—accepts the premise of the Cold War anti-utopian criticisms, i.e., that the human differences that make life interesting, and generate new possibilities, require hierarchy of some kind. In other words, this exercise in utopian speculation cannot move outside the horizon of our own present-day ideological common sense.

    We can again see this tendency at the very start of the book. Is total automation an unambiguous utopia or a reflection of Frase’s own unexamined ideological proclivities, on view throughout the various futures, for high tech solutions to complex socio-ecological problems? For various flavors of deus ex machina—from 3D printers to replicators to robotic bees—in place of social actors changing the material realities that constrain them through collective action? Conversely, are the “crisis of scarcity” and the visions of ecological apocalypse Frase evokes intermittently throughout his book purely dystopian or ideological? Surely, since Thomas Malthus’s 1798 Essay on Population, apologists for various ruling orders have used the threat of scarcity and material limits to justify inequity, exploitation, and class division: poverty is “natural.” Yet, can’t we also discern in contemporary visions of apocalypse a radical desire to break with a stagnant capitalist status quo? And in the case of the environmental state of emergency, don’t we have a rallying point for constructing a very different eco-socialist order?

    Frase is a founding editor of Jacobin magazine and a long-time member of the Democratic Socialists of America. He nonetheless distinguishes himself from the reformist and electoral currents within those organizations, in addition to much of what passes for orthodox Marxism. Rather than full employment, for example, Frase calls for the abolition of work and the working class in a way that echoes more radical anti-work and post-workerist modes of communist theory. Thus, in a recent editorial published by Jacobin, entitled “What It Means to Be on the Left,” Frase differentiates himself from many of his DSA comrades in declaring that “The socialist project, for me, is about something more than just immediate demands for more jobs, or higher wages, or universal social programs, or shorter hours. It’s about those things. But it’s also about transcending, and abolishing, much of what we think defines our identities and our way of life.” Frase goes on to sketch an emphatically utopian communist horizon that includes the abolition of class, race, and gender as such. These are laudable positions, especially when we consider a new new left milieu, some of whose most visible representatives dismiss race and gender concerns as “identity politics” while redefining radical class politics as a better deal for some amorphous US working class within an apparently perennial capitalist status quo.

    Frase’s utopianism in this way represents an important counterpoint within this emergent left. Yet his book-length speculative exercise—policy proposals cloaked as possible scenarios—reveals his own enduring investments in the simple “forces vs. relations of production” dichotomy that underwrote so much of twentieth-century state socialism, with its disastrous ecological record and human cost. And this simple faith in the emancipatory potential of capitalist technology—given the right political circumstances, and despite the complete absence of any account of what creating those circumstances might entail—frequently resembles a social democratic version of the Californian ideology or the kind of Silicon Valley conventional wisdom pushed by Elon Musk: a more efficient, egalitarian, and techno-utopian version of US capitalism. Frase mines various left communist currents, from post-operaismo to communization, only to evacuate these currents of their radical charge by marrying them to technocratic and technophilic reformism, so that UBI plus “replicators” will spontaneously lead to full communism. Four Futures is in this way an important, because symptomatic, expression of what Jason Smith (2017) calls “social democratic accelerationism,” animated by a strange faith in magical machines in addition to a disturbing animus toward ecology, non-human life, and the natural world in general.

    _____

    Anthony Galluzzo earned his PhD in English Literature at UCLA. He specializes in radical transatlantic English-language literary cultures of the late eighteenth and nineteenth centuries. He has taught at the United States Military Academy at West Point, Colby College, and NYU.


    _____

    Notes

    [1] See Tom Moylan, Scraps of the Untainted Sky: Science Fiction, Utopia, Dystopia (Boulder: Westview Press, 2000).

    [2] Peter Frase, Four Futures: Life After Capitalism (London: Verso Books, 2016), 3.

    [3] Ibid., 27.

    [4] Fredric Jameson, “Cognitive Mapping,” in C. Nelson and L. Grossberg, eds., Marxism and the Interpretation of Culture (Urbana: University of Illinois Press, 1990), 6.

    [5] McKenzie Wark, “Cognitive Mapping,” Public Seminar (May 2015).

    [6] Frase, 24.

    [7] This space fantasy also exhibits the escapist, mythopoetic, and even reactionary elements Frase notes—for example, its hereditary caste of Jedi fighters and their ancient religion. As Benjamin Hufbauer notes, “in many ways, the political meanings in Star Wars were and are progressive, but in other ways the film can be described as middle-of-the-road, or even conservative.” Hufbauer, “The Politics Behind the Original Star Wars,” Los Angeles Review of Books (December 21, 2015).

    [8] Frase, 49.

    [9] Angry Workers World, “Soldering On: Report on Working in a 3D-Printer Manufacturing Plant in London,” libcom.org (March 24, 2017).

    [10] Johan Söderberg, “A Critique of 3D Printing as a Critical Technology,” P2P Foundation (March 16, 2013).

    [11] H. Bruce Franklin, “Star Trek in the Vietnam Era,” Science Fiction Studies 21:1 (#62) (March 1994).

    [12] Frase, 6.

    [13] Ruth Levitas, Utopia As Method: The Imaginary Reconstitution of Society. (London: Palgrave Macmillan, 2013), xiv-xv.

    [14] Frase, 58.

    [15] Jameson, “Utopia, Modernism, and Death,” in Seeds of Time (New York: Columbia University Press, 1996), 110.

    [16] Kaleigh Rogers, “The Guy Who Ruined Dogecoin,” VICE Motherboard (March 6, 2015).

    [17] See Jodi Dean, Democracy and Other Neoliberal Fantasies: Communicative Capitalism and Left Politics (Durham: Duke University Press, 2009).

    _____

    Works Cited

    • Frase, Peter. 2016. Four Futures: Life After Capitalism. New York: Verso.
    • Jameson, Fredric. 1982. “Progress vs. Utopia; Or Can We Imagine The Future?” Science Fiction Studies 9:2 (July): 147-158.
    • Jameson, Fredric. 1996. “Utopia, Modernism, and Death,” in Seeds of Time. New York: Columbia University Press.
    • Jameson, Fredric. 2005. Archaeologies of the Future: The Desire Called Utopia and Other Science Fictions. London: Verso.
    • Levitas, Ruth. 2013. Utopia As Method: The Imaginary Reconstitution of Society. London: Palgrave Macmillan.
    • Moylan, Tom. 2000. Scraps of the Untainted Sky: Science Fiction, Utopia, Dystopia. Boulder: Westview Press.
    • Smith, Jason E. 2017. “Nowhere To Go: Automation Then And Now.” The Brooklyn Rail (March 1).

     

  • Audrey Watters – The Best Way to Predict the Future is to Issue a Press Release


    By Audrey Watters

    ~

    This talk was delivered at Virginia Commonwealth University today as part of a seminar co-sponsored by the Departments of English and Sociology and the Media, Art, and Text PhD Program. The slides are also available here.

    Thank you very much for inviting me here to speak today. I’m particularly pleased to be speaking to those from Sociology and those from English and those from the Media, Art, and Text departments, and I hope my talk can walk the line between and among disciplines and methods – or piss everyone off in equal measure. Either way.

    This is the last public talk I’ll deliver in 2016, and I confess I am relieved (I am exhausted!) as well as honored to be here. But when I finish this talk, my work for the year isn’t done. No rest for the wicked – ever, but particularly in the freelance economy.

    As I have done for the past six years, I will spend the rest of November and December publishing my review of what I deem the “Top Ed-Tech Trends” of the year. It’s an intense research project that usually tops out at about 75,000 words, written over the course of four to six weeks. I pick ten trends and themes in order to look closely at the recent past, the near-term history of education technology. Because of the amount of information that is published about ed-tech – the amount of information, its irrelevance, its incoherence, its lack of context – it can be quite challenging to keep up with what is really happening in ed-tech. And just as importantly, what is not happening.

    So that’s what I try to do. And I’ll boast right here – no shame in that – no one else does as in-depth or thorough a job as me, certainly no one who is entirely independent from venture capital, corporate or institutional backing, or philanthropic funding. (Of course, if you look for those education technology writers who are independent from venture capital, corporate or institutional backing, or philanthropic funding, there is pretty much only me.)

    The stories that I write about the “Top Ed-Tech Trends” are the antithesis of most articles you’ll see about education technology that invoke “top” and “trends.” For me, still framing my work that way – “top trends” – is a purposeful rhetorical move to shed light, to subvert, to offer a sly commentary of sorts on the shallowness of what passes as journalism, criticism, analysis. I’m not interested in making quickly thrown-together lists and bullet points. I’m not interested in publishing clickbait. I am interested nevertheless in the stories – shallow or sweeping – that we tell and spread about technology and education technology, about the future of education technology, about our technological future.

    Let me be clear, I am not a futurist – even though I’m often described as “ed-tech’s Cassandra.” The tagline of my website is “the history of the future of education,” and I’m much more interested in chronicling the predictions that others make, have made about the future of education than I am writing predictions of my own.

    One of my favorites: “Books will soon be obsolete in schools,” Thomas Edison said in 1913. Any day now. Any day now.

    Here are a couple of more recent predictions:

    “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.” – that’s Sebastian Thrun, best known perhaps for his work at Google on the self-driving car and as a co-founder of the MOOC (massive open online course) startup Udacity. The quotation is from 2012.

    And from 2013, by Harvard Business School professor, author of the book The Innovator’s Dilemma, and popularizer of the phrase “disruptive innovation,” Clayton Christensen: “In fifteen years from now, half of US universities may be in bankruptcy. In the end I’m excited to see that happen. So pray for Harvard Business School if you wouldn’t mind.”

    Pray for Harvard Business School. No. I don’t think so.

    Both of these predictions are fantasy. Nightmarish, yes. But fantasy. Fantasy about a future of education. It’s a powerful story, but not a prediction made based on data or modeling or quantitative research into the growing (or shrinking) higher education sector. Indeed, according to the latest statistics from the Department of Education – now granted, this is from the 2012–2013 academic year – there are 4726 degree-granting postsecondary institutions in the United States. A 46% increase since 1980. There are, according to another source (non-governmental and less reliable, I think), over 25,000 universities in the world. This number is increasing year-over-year as well. So to predict that the vast vast majority of these schools (save Harvard, of course) will go away in the next decade or so or that they’ll be bankrupt or replaced by Silicon Valley’s version of online training is simply wishful thinking – dangerous, wishful thinking from two prominent figures who will benefit greatly if this particular fantasy comes true (and not just because they’ll get to claim that they predicted this future).

    Here’s my “take home” point: if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

    Fantasy. Fortune-telling. Or, as capitalism prefers to call it, “market research.”

    “Market research” involves fantastic stories of future markets. These predictions are often accompanied by a press release touting the size that this or that market will soon grow to – how many billions of dollars schools will spend on computers by 2020, how many billions of dollars of virtual reality gear schools will buy by 2025, how many billions of dollars schools will spend on robot tutors by 2030, how many billions of dollars companies will spend on online training by 2035, how big the coding bootcamp market will be by 2040, and so on. The markets, according to the press releases, are always growing. Fantasy.

    In 2011, the analyst firm Gartner predicted that annual tablet shipments would exceed 300 million units by 2015. Half of those, the firm said, would be iPads. IDC estimates that the total number of shipments in 2015 was actually around 207 million units. Apple sold just 50 million iPads. That’s not even the best worst Gartner prediction. In October of 2006, Gartner said that Apple’s “best bet for long-term success is to quit the hardware business and license the Mac to Dell.” Less than three months later, Apple introduced the iPhone. The very next day, Apple shares hit $97.80, an all-time high for the company. By 2012 – yes, thanks to its hardware business – Apple’s stock had risen to the point that the company was worth a record-breaking $624 billion.

    But somehow, folks – including many, many in education and education technology – still pay attention to Gartner. They still pay Gartner a lot of money for consulting and forecasting services.

    People find comfort in these predictions, in these fantasies. Why?

    Gartner is perhaps best known for its “Hype Cycle,” a proprietary graphic presentation that claims to show how emerging technologies will be adopted.

    According to Gartner, technologies go through five stages: first, there is a “technology trigger.” As the new technology emerges, a lot of attention is paid to it in the press. Eventually it reaches the second stage: the “peak of inflated expectations.” So many promises have been made about this technological breakthrough. Then, the third stage: the “trough of disillusionment.” Interest wanes. Experiments fail. Promises are broken. As the technology matures, the hype picks up again, more slowly – this is the “slope of enlightenment.” Eventually the new technology becomes mainstream – the “plateau of productivity.”

    It’s not that hard to identify significant problems with the Hype Cycle, not least of which is that it’s not a cycle. It’s a curve. It’s not a particularly scientific model. It demands that technologies always move forward along it.

    Gartner says its methodology is proprietary – which is code for “hidden from scrutiny.” Gartner says, rather vaguely, that it relies on scenarios and surveys and pattern recognition to place technologies on the line. But most of the time when Gartner uses the word “methodology,” it is trying to signify “science,” and what it really means is “expensive reports you should buy to help you make better business decisions.”

    Can it really help you make better business decisions? It’s just a curve with some technologies plotted along it. The Hype Cycle doesn’t help explain why technologies move from one stage to another. It doesn’t account for technological precursors – new technologies rarely appear out of nowhere – or political or social changes that might prompt or preclude adoption. And in the end it is simply too optimistic, unreasonably so, I’d argue. No matter how dumb or useless a new technology is, according to the Hype Cycle at least, it will eventually become widely adopted. Where would you plot the Segway, for example? (In 2008, ever hopeful, Gartner insisted that “This thing certainly isn’t dead and maybe it will yet blossom.” Maybe it will, Gartner. Maybe it will.)

    And maybe this gets to the heart as to why I’m not a futurist. I don’t share this belief in an increasingly technological future; I don’t believe that more technology means the world gets “more better.” I don’t believe that more technology means that education gets “more better.”

    Every year since 2004, the New Media Consortium, a non-profit organization that advocates for new media and new technologies in education, has issued its own forecasting report, the Horizon Report, naming a handful of technologies that, as the name suggests, it contends are “on the horizon.”

    Unlike Gartner, the New Media Consortium is fairly transparent about how this process works. The organization invites various “experts” to participate in the advisory board that, throughout the course of each year, works on assembling its list of emerging technologies. The process relies on the Delphi method, whittling down a long list of trends and technologies by a process of ranking and voting until six key trends, six emerging technologies remain.

    Disclosure/disclaimer: I am a folklorist by training. The last time I took a class on “methods” was, like, 1998. And admittedly I never learned about the Delphi method – what the New Media Consortium uses for this research project – until I became a scholar of education technology looking into the Horizon Report. As a folklorist, of course, I did catch the reference to the Oracle of Delphi.

    Like so much of computer technology, the roots of the Delphi method are in the military, developed during the Cold War to forecast technological developments that the military might use and that the military might have to respond to. The military wanted better predictive capabilities. But – and here’s the catch – it wanted to identify technology trends without being caught up in theory. It wanted to identify technology trends without developing models. How do you do that? You gather experts. You get those experts to consensus.

    So here is the consensus from the past twelve years of the Horizon Report for higher education. These are the technologies it has identified that are between one and five years from mainstream adoption:

    It’s pretty easy, as with the Gartner Hype Cycle, to look at these predictions and note that they are almost all wrong in some way or another.

    Some are wrong because, say, the timeline is a bit off. The Horizon Report said in 2010 that “open content” was less than a year away from widespread adoption. I think we’re still inching towards that goal – admittedly “open textbooks” have seen a big push at the federal and at some state levels in the last year or so.

    Some of these predictions are just plain wrong. Virtual worlds in 2007, for example.

    And some are wrong because, to borrow a phrase from the theoretical physicist Wolfgang Pauli, they’re “not even wrong.” Take “collaborative learning,” for example, which this year’s K–12 report posits as a mid-term trend. Like, how would you argue against “collaborative learning” as occurring – now or some day – in classrooms? As a prediction about the future, it is not even wrong.

    But wrong or right – that’s not really the problem. Or rather, it’s not the only problem even if it is the easiest critique to make. I’m not terribly concerned about the accuracy of the predictions about the future of education technology that the Horizon Report has made over the last decade. But I do wonder how these stories influence decision-making across campuses.

    What might these predictions – this history of the future – tell us about the wishful thinking surrounding education technology and about the direction that the people the New Media Consortium views as “experts” want the future to take? What can we learn about the future by looking at the history of our imaginings about education’s future? What role does powerful ed-tech storytelling (also known as marketing) play in shaping that future? Because remember: to predict the future is to control it – to attempt to control the story, to attempt to control what comes to pass.

    It’s both convenient and troubling, then, that these forward-looking reports act as though they have no history of their own; they purposefully minimize or erase their own past. Each year – and I think this is what irks me most – the NMC fails to look back at what it had predicted just the year before. It never revisits older predictions. It never mentions that they even exist. Gartner too removes technologies from the Hype Cycle each year with no explanation of what happened, no explanation as to why trends suddenly appear, disappear, and reappear. These reports only look forward, with no history to ground their direction in.

    I understand why these sorts of reports exist, I do. I recognize that they are rhetorically useful to certain people in certain positions making certain claims about “what to do” in the future. You can write in a proposal that, “According to Gartner… blah blah blah.” Or “The Horizon Report indicates that this is one of the most important trends in coming years, and that is why we need to commit significant resources – money and staff – to this initiative.” But then, let’s be honest, these reports aren’t about forecasting a future. They’re about justifying expenditures.

    “The best way to predict the future is to invent it,” computer scientist Alan Kay once famously said. I’d wager that the easiest way is just to make stuff up and issue a press release. I mean, really. You don’t even need the pretense of a methodology. Nobody is going to remember what you predicted. Nobody is going to remember if your prediction was right or wrong. Nobody – certainly not the technology press, which is often painfully unaware of any history, near-term or long ago – is going to call you to task. This is particularly true if you make your prediction vague – like “within our lifetime” – or set your target date just far enough in the future – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Let’s consider: is there something about the field of computer science in particular – and its ideological underpinnings – that makes it more prone to encourage, embrace, espouse these sorts of predictions? Is there something about Americans’ faith in science and technology, about our belief in technological progress as a signal of socio-economic or political progress, that makes us more susceptible to take these predictions at face value? Is there something about our fears and uncertainties – and not just now, days before this Presidential Election where we are obsessed with polls, refreshing Nate Silver’s website obsessively – that makes us prone to seek comfort, reassurance, certainty from those who can claim that they know what the future will hold?

    “Software is eating the world,” investor Marc Andreessen pronounced in a Wall Street Journal op-ed in 2011. “Over the next 10 years,” he wrote, “I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.” Buy stock in technology companies was really the underlying message of Andreessen’s op-ed; this isn’t another tech bubble, he wanted to reassure investors. But many in Silicon Valley have interpreted this pronouncement – “software is eating the world” – as an affirmation and an inevitability. I hear it repeated all the time – “software is eating the world” – as though, once again, repeating things makes them true or makes them profound.

    If we believe that, indeed, “software is eating the world,” that we are living in a moment of extraordinary technological change, that we must – according to Gartner or the Horizon Report – be ever-vigilant about emerging technologies, that these technologies are contributing to uncertainty, to disruption, then it seems likely that we will demand a change in turn to our educational institutions (to lots of institutions, but let’s just focus on education). This is why this sort of forecasting is so important for us to scrutinize – to do so quantitatively and qualitatively, to look at methods and at theory, to ask who’s telling the story and who’s spreading the story, to listen for counter-narratives.

    This technological change, according to some of the most popular stories, is happening faster than ever before. It is creating an unprecedented explosion in the production of information. New information technologies, so we’re told, must therefore change how we learn – change what we need to know, how we know, how we create and share knowledge. Because of the pace of change and the scale of change and the locus of change (that is, “Silicon Valley” not “The Ivory Tower”) – again, so we’re told – our institutions, our public institutions can no longer keep up. These institutions will soon be outmoded, irrelevant. Again – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    These forecasting reports, these predictions about the future make themselves necessary through this powerful refrain, insisting that technological change is creating so much uncertainty that decision-makers need to be ever vigilant, ever attentive to new products.

    As Neil Postman and others have cautioned us, technologies tend to become mythic – unassailable, God-given, natural, irrefutable, absolute. So it is predicted. So it is written. Techno-scripture, to which we hand over a certain level of control – to the technologies themselves, sure, but just as importantly to the industries and the ideologies behind them. Take, for example, the founding editor of the technology trade magazine Wired, Kevin Kelly. His 2010 book was called What Technology Wants, as though technology is a living being with desires and drives; the title of his 2016 book, The Inevitable. We humans, in this framework, have no choice. The future – a certain flavor of technological future – is pre-ordained. Inevitable.

    I’ll repeat: I am not a futurist. I don’t make predictions. But I can look at the past and at the present in order to dissect stories about the future.

    So is the pace of technological change accelerating? Is society adopting technologies faster than it’s ever done before? Perhaps it feels like it. It certainly makes for a good headline, a good stump speech, a good keynote, a good marketing claim, a good myth. But the claim starts to fall apart under scrutiny.

    This graph comes from an article in the online publication Vox that includes a couple of those darling made-to-go-viral videos of young children using “old” technologies like rotary phones and portable cassette players – highly clickable, highly sharable stuff. The visual argument in the graph: the number of years it takes for one quarter of the US population to adopt a new technology has been shrinking with each new innovation.

    But the data is flawed. Some of the dates given for these inventions are questionable at best, if not outright inaccurate. If nothing else, it’s not so easy to pinpoint the exact moment, the exact year when a new technology came into being. There often are competing claims as to who invented a technology and when, for example, and there are early prototypes that may or may not “count.” James Clerk Maxwell did publish A Treatise on Electricity and Magnetism in 1873. Alexander Graham Bell made his famous telephone call to his assistant in 1876. Guglielmo Marconi did file his patent for radio in 1897. John Logie Baird demonstrated a working television system in 1926. The MITS Altair 8800, an early personal computer that came as a kit you had to assemble, was released in 1975. But Martin Cooper, a Motorola exec, made the first mobile telephone call in 1973, not 1983. And the Internet? The first ARPANET link was established between UCLA and the Stanford Research Institute in 1969. The Internet was not invented in 1991.

    So we can reorganize the bar graph. But it’s still got problems.

    The Internet did become more privatized, more commercialized around that date – 1991 – and thanks to companies like AOL, a version of it became more accessible to more people. But if you’re looking at when technologies became accessible to people, you can’t use 1873 as your date for electricity, you can’t use 1876 as your year for the telephone, and you can’t use 1926 as your year for the television. It took years for the infrastructure of electricity and telephony to be built, for access to become widespread; and subsequent technologies, let’s remember, have simply piggy-backed on these existing networks. Our Internet service providers today are likely telephone and TV companies; our houses are already wired for new WiFi-enabled products and predictions.

    Economic historians who are interested in these sorts of comparisons of technologies and their effects typically set the threshold at 50% – that is, how long does it take after a technology is commercialized (not simply “invented”) for half the population to adopt it. This way, you’re not only looking at the economic behaviors of the wealthy, the early-adopters, the city-dwellers, and so on (but to be clear, you are still looking at a particular demographic – the privileged half).

    And that changes the graph again:

    How many years do you think it’ll be before half of US households have a smart watch? A drone? A 3D printer? Virtual reality goggles? A self-driving car? Will they? Will it be fewer than nine years? I mean, it would have to be if, indeed, “technology” is speeding up and we are adopting new technologies faster than ever before.

    Some of us might adopt technology products quickly, to be sure. Some of us might eagerly buy every new Apple gadget that’s released. But we can’t claim that the pace of technological change is speeding up just because we personally go out and buy a new iPhone every time Apple tells us the old model is obsolete. Removing the headphone jack from the latest iPhone does not mean “technology changing faster than ever,” nor does showing how headphones have changed since the 1970s. None of this is really a reflection of the pace of change; it’s a reflection of our disposable income and an ideology of obsolescence.

    Some economic historians like Robert J. Gordon actually contend that we’re not in a period of great technological innovation at all; instead, we find ourselves in a period of technological stagnation. The changes brought about by the development of information technologies in the last 40 years or so pale in comparison, Gordon argues (and this is from his recent book The Rise and Fall of American Growth: The US Standard of Living Since the Civil War), to those “great inventions” that powered massive economic growth and tremendous social change in the period from 1870 to 1970 – namely electricity, sanitation, chemicals and pharmaceuticals, the internal combustion engine, and mass communication. But that doesn’t jibe with “software is eating the world,” does it?

    Let’s return briefly to those Horizon Report predictions again. They certainly reflect this belief that technology must be speeding up. Every year, there’s something new. There has to be. That’s the purpose of the report. The horizon is always “out there,” off in the distance.

    But if you squint, you can see each year’s report also reflects a decided lack of technological change. Every year, something is repeated – perhaps rephrased. And look at the predictions about mobile computing:

    • 2006 – the phones in their pockets
    • 2007 – the phones in their pockets
    • 2008 – oh crap, we don’t have enough bandwidth for the phones in their pockets
    • 2009 – the phones in their pockets
    • 2010 – the phones in their pockets
    • 2011 – the phones in their pockets
    • 2012 – the phones too big for their pockets
    • 2013 – the apps on the phones too big for their pockets
    • 2015 – the phones in their pockets
    • 2016 – the phones in their pockets

    This hardly makes the case for technological speeding up, for technology changing faster than it’s ever changed before. But that’s the story that people tell nevertheless. Why?

    I pay attention to this story, as someone who studies education and education technology, because I think these sorts of predictions, these assessments about the present and the future, frequently serve to define, disrupt, and destabilize our institutions. This is particularly pertinent to our schools, which are already caught between a boundedness to the past – replicating scholarship and cultural capital, for example – and the demand that they bend to the future – preparing students for civic, economic, and social relations yet to be determined.

    But I also pay attention to these sorts of stories because there’s that part of me that is horrified at the stuff – predictions – that people pass off as true or as inevitable.

    “65% of today’s students will be employed in jobs that don’t exist yet.” I hear this statistic cited all the time. And it’s important, rhetorically, that it’s a statistic – that gives the appearance of being scientific. Why 65%? Why not 72% or 53%? How could we even know such a thing? Some people cite this as a figure from the Department of Labor. It is not. I can’t find its origin – but it must be true: a futurist said it in a keynote, and the video was posted to the Internet.

    The statistic is particularly amusing when quoted alongside one of the many predictions we’ve been inundated with lately about the coming automation of work. In 2014, The Economist asserted that “nearly half of American jobs could be automated in a decade or two.” “Before the end of this century,” Wired Magazine’s Kevin Kelly announced earlier this year, “70 percent of today’s occupations will be replaced by automation.”

    Therefore the task for schools – and I hope you can start to see where these different predictions start to converge – is to prepare students for a highly technological future, a future that has been almost entirely severed from the systems and processes and practices and institutions of the past. And if schools cannot conform to this particular future, then “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Now, I don’t believe that there’s anything inevitable about the future. I don’t believe that Moore’s Law – that the number of transistors on an integrated circuit doubles every two years and therefore computers are always exponentially smaller and faster – is actually a law. I don’t believe that robots will take, let alone need take, all our jobs. I don’t believe that YouTube has rendered school irrevocably out-of-date. I don’t believe that technologies are changing so quickly that we should hand over our institutions to entrepreneurs, privatize our public sphere for techno-plutocrats.

    I don’t believe that we should cheer Elon Musk’s plans to abandon this planet and colonize Mars – he’s predicted he’ll do so by 2026. I believe we stay and we fight. I believe we need to recognize this as an ego-driven escapist evangelism.

    I believe we need to recognize that predicting the future is a form of evangelism as well. Sure, it gets couched in terms of science, and it is underwritten by global capitalism. But it’s a story – a story that then takes on mythic proportions, insisting that it is unassailable, unverifiable, but true.

    The best way to invent the future is to issue a press release. The best way to resist this future is to recognize that, once you poke at the methodology and the ideology that underpins it, a press release is all that it is.

    A special thanks to Tressie McMillan Cottom and David Golumbia for organizing this talk. And to Mike Caulfield for always helping me hash out these ideas.
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.

    Back to the essay

  • Zachary Loeb – What Technology Do We Really Need? – A Critique of the 2016 Personal Democracy Forum

    Zachary Loeb – What Technology Do We Really Need? – A Critique of the 2016 Personal Democracy Forum

    by Zachary Loeb

    ~

    Technological optimism is a dish best served from a stage. Particularly if it’s a bright stage in front of a receptive and comfortably seated audience, especially if the person standing before the assembled group is delivering carefully rehearsed comments paired with compelling visuals, and most importantly if the stage is home to a revolving set of speakers who take turns outdoing each other in inspirational aplomb. At such an event, even occasional moments of mild pessimism – or a rogue speaker who uses their fifteen minutes to frown more than smile – serve to only heighten the overall buoyant tenor of the gathering. From TED talks to the launching of the latest gizmo by a major company, the person on a stage singing the praises of technology has become a familiar cultural motif. And it is a trope that was alive and drawing from that well at the 2016 Personal Democracy Forum, the theme of which was “The Tech We Need.”

    Over the course of two days, some three dozen speakers and a similar number of panelists gathered to opine, before a rapt and appreciative audience, on the ways in which technology is changing democracy. The commentary largely aligned with the sanguine spirit animating the founding manifesto of the Personal Democracy Forum (PDF) – which frames the Internet as a potent force set to dramatically remake and revitalize democratic society. As the manifesto boldly decrees, “the realization of ‘Personal Democracy,’ where everyone is a full participant, is coming” – and it is coming thanks to the Internet. The two days of PDF 2016 consisted of a steady flow of intelligent, highly renowned, well-meaning speakers expounding on the conference’s theme to an audience largely made up of bright, caring individuals committed to answering that call. To attend an event like PDF and not feel moved, uplifted, or inspired by the speakers would be a testament to an empathic failing. How can one not be moved? But when one’s eyes are glistening and one’s heart is pounding, it is worth being wary of the ideology in which one is being baptized.

    To critique an event like the Personal Democracy Forum – particularly after having actually attended it – is something of a challenge. After all, the event is truly filled with genuine people delivering (mostly) inspiring talks. There is something contagious about optimism, especially when it presents itself as measured optimism. And besides, who wants to be the jerk grousing and grumbling after an activist has just earned a standing ovation? Who wants to cross their arms and scoff that the criticism being offered is precisely the type that serves to shore up the system being criticized? Pessimists don’t often find themselves invited to the after party. Thus, insofar as the following comments – and those that have already been made – may seem prickly and pessimistic, they are not meant as an attack upon any particular speaker or attendee. Many of those speakers truly were inspiring (and that is meant sincerely), many speakers really did deliver important comments (that is also meant sincerely), and the goal here is not to question the intentions of PDF’s founders or organizers. Yet prominent events like PDF are integral to shaping the societal discussions surrounding technology – and therefore it is essential to be willing to go beyond the inspirational moments and ask: what is really being said here?

    For events like PDF do serve to advance an ideology, whether they like it or not. And it is worth considering what that ideology means, even if it forces one to wipe the smile from one’s lips. And when it comes to PDF much of its ideology can be discovered simply by dissecting the theme for the 2016 conference: “The Tech We Need.”

    “The Tech”

    What do you (yes, you) think of when you hear the word technology? After all, it is a term that encompasses a great deal, which is one of the reasons why Leo Marx (1997) was compelled to describe technology as a “hazardous concept.” Eyeglasses are technology, but so too is Google Glass. A hammer is technology, and so too is a smart phone. In other words, when somebody says “technology is X” or “technology does Q” or “technology will result in R” it is worth pondering whether technology really is, does, or results in those things, or if what is being discussed is really a particular type of technology in a particular context. Granted, technology remains a useful term – it is certainly a convenient shorthand (one which very many people [including me] are guilty of occasionally deploying) – but in throwing the term technology about so casually it is easy to obfuscate as much as one clarifies. At PDF it seemed as though a sentence was not complete unless it included a noun, a verb, and the word technology – or “tech.” Yet what was meant by “tech” at PDF almost always meant the Internet or a device linked to the Internet – and qualifying this by saying “almost” is perhaps overly generous.

    Thus the Internet (as such), web browsers, smart phones, VR, social networks, server farms, encryption, apps, and websites all wound up being pleasantly melted together into “technology.” When “technology” encompasses so much, a funny thing begins to happen – people speak effusively about “technology” and only name specific elements when they want to single something out for criticism. When technology is so all-encompassing, who can possibly criticize technology? And what would it mean to criticize technology when it isn’t clear what is actually meant by the term? Yes, yes, Facebook may be worthy of mockery and smart phones can be used for surveillance, but insofar as the discussion is not about the Internet but “technology,” on what grounds can one say: “this stuff is rubbish”? For even if it is clear that the term “technology” is being used in a way that focuses on the Internet, if one starts to seriously go after technology then one will inevitably be confronted with the question “but aren’t hammers also technology?” In short, when a group talks about “the tech” but by “the tech” only means the Internet and the variety of devices tethered to it, what happens is that the Internet appears as being synonymous with technology. It isn’t just a branch or an example of technology, it is technology! Or to put this in sharper relief: at a conference about “the tech we need” held in the US in 2016, how can one avoid talking about the technology that is needed in the form of water pipes that don’t poison people? The answer: by making it so that the term “technology” does not apply to such things.

    The problem is that when “technology” is used to only mean one set of things it muddles the boundaries of what those things are, and what exists outside of them. And while it does this it allows people to confidently place trust in a big category, “technology,” whereas they would probably have been more circumspect if they were just being asked to place trust in smart phones. After all, “the Internet will save us” doesn’t have quite the same seductive sway as “technology will save us” – even if the belief is usually put more eloquently than that. When somebody says “technology will save us” people can think of things like solar panels and vaccines – even if the only technology actually being discussed is the Internet. Here, though, it is also vital to approach the question of “the tech” with some historically grounded modesty in mind. For the belief that technology is changing the world and fundamentally altering democracy is nothing new. The history of technology (as an academic field) is filled with texts describing how a new tool was perceived as changing everything – from the compass to the telegraph to the phonograph to the locomotive to the [insert whatever piece of technology you (the reader) can think of]. And such inventions were often accompanied by an often earnest belief that they would change everything for the better! Claims that the Internet will save us invoke déjà vu for those familiar with the history of technology. Carolyn Marvin’s masterful study When Old Technologies Were New (1988) examines the way in which early electrical communications methods were seen at the time of their introduction, and near the book’s end she writes:

    Predictions that strife would cease in a world of plenty created by electrical technology were clichés breathed by the influential with conviction. For impatient experts, centuries of war and struggle testified to the failure of political efforts to solve human problems. The cycle of resentment that fueled political history could perhaps be halted only in a world of electrical abundance, where greed could not impede distributive justice. (206)

    Switch out the words “electrical technology” for “Internet technology” and the above sentences could apply to the present (and the PDF forum) without further alterations. After all, PDF was certainly a gathering of “the influential” and of “impatient experts.”

    And whenever “tech” and democracy are invoked in the same sentence it is worth pondering whether the tech is itself democratic, or whether it is simply being claimed that the tech can be used for democratic purposes. Lewis Mumford wrote at length about the difference between what he termed “democratic” and “authoritarian” technics – in his estimation “democratic” systems were small scale and manageable by individuals, whereas “authoritarian” technics represented massive systems of interlocking elements where no individual could truly assert control. While Mumford did not live to write about the Internet, his work makes it very clear that he did not consider computer technologies to belong to the “democratic” lineage. Thus, to follow from Mumford, the Internet appears as a wonderful example of an “authoritarian” technic (it is massive, environmentally destructive, turns users into cogs, runs on surveillance, cannot be controlled locally, etc…) – what PDF argues for is that this authoritarian technology can be used democratically. There is an interesting argument there, and it is one with some merit. Yet such a discussion cannot even occur in the confusing morass that one finds oneself in when “the tech” just means the Internet.

    Indeed, by meaning “the Internet” but saying “the tech” groups like PDF (consciously or not) pull a bait and switch whereby a genuine consideration of what “the tech we need” simply becomes a consideration of “the Internet we need.”

    “We”

    Attendees to the PDF conference received a conference booklet upon registration; it featured introductory remarks, a code of conduct, advertisements from sponsors, and a schedule. It also featured a fantastically jarring joke created through the wonders of, perhaps accidental, juxtaposition; however, to appreciate the joke one needed to open the booklet so as to be able to see the front and back cover simultaneously. Here is what that looked like:

    Personal Democracy Forum (2016)

    Get it?

    Hilarious.

    The cover says “The Tech We Need” emblazoned in blue over the faces of the conference speakers, and the back is an advertisement for Microsoft stating: “the future is what we make it.” One almost hopes that the layout was intentional. For, who the heck is the “we” being discussed? Is it the same “we”? Are you included in that “we”? And this is a question that can be asked of each of those covers independently of the other: when PDF says “we” who is included and who is excluded? When Microsoft says “we” who is included and who is excluded? Of course, this gets muddled even more when you consider that Microsoft was the “presenting sponsor” for PDF and that many of the speakers at PDF have funding ties to Microsoft. The reason this is so darkly humorous is that there is certainly an argument to be made that “the tech we need” has no place for mega-corporations like Microsoft, while at the same time the booklet assures that “the future is what we [Microsoft] make it.” In short: the future is what corporations like Microsoft will make it…which might be very different from the kind of tech we need.

    In considering the “we” of PDF it is worth restating that this is a gathering of well-meaning individuals who largely seem to want to approach the idea of “we” with as much inclusivity as possible. Yet defining a “we” is always fraught, speaking for a “we” is always dangerous, and insofar as one can think of PDF with any kind of “we” (or “us”) in mind the only version of the group that really emerges is one that leans heavily towards describing the group actually present at the event. And while one can certainly speak about the level (or lack) of diversity at the PDF event – the “we” who came together at PDF is not particularly representative of the world. This was also brought into interesting relief in some other amusing ways: throughout the event one heard numerous variations of the comment “we all have smart phones” – but this did not even really capture the “we” of PDF. While walking down the stairs to a session one day I clearly saw a man (wearing a conference attendee badge) fiddling with a flip-phone – I suppose he wasn’t included in the “we” of “we all have smart phones.” But I digress.

    One encountered further issues with the “we” when it came to the political content of the forum. While the booklet states, and the hosts repeated over and over, that the event was “non-partisan,” such a descriptor is pretty laughable. Those taking to the stage were a procession of people who had cut their teeth working for MoveOn, and the activists represented continually self-identified as hailing from the progressive end of the spectrum. The token conservative speaker who stepped onto the stage even made a self-deprecating joke in which she recognized that she was one of only a handful (if that) of Republicans present. So, again, who is missing from this “we”? One can be a committed leftist and genuinely believe that a figure like Donald Trump is a xenophobic demagogue – and still recognize that some of his supporters might have offered a very interesting perspective to the PDF conversation. After all, the Internet (“the tech”) has certainly been used by movements on the right as well – and used quite effectively at that. But this part of a national “we” was conspicuously absent from the forum, even if they are not nearly so absent from Twitter, Facebook, or the population of people owning smart phones. Again, it is in no way, shape, or form an endorsement of anything that Trump has said to point out that when a forum is held to discuss the Internet and democracy, it is worth having the people you disagree with present.

    Another question of the “we” that is worth wrestling with revolves around the way in which events like PDF involve those who offer critical viewpoints. If, as is being argued here, PDF’s basic ideology is that the Internet (“the tech”) is improving people’s lives and will continue to do so (leading towards “personal democracy”) – it is important to note that PDF welcomed several speakers who offered accounts of some of the shortcomings of the Internet. Figures including Sherry Turkle, Kentaro Toyama, Safiya Noble, Kate Crawford, danah boyd, and Douglas Rushkoff all took the stage to deliver some critical points of view – and yet in incorporating such voices into the “we” what occurs is that these critiques function less as genuine retorts and more as safety valves that just blow off a bit of steam. Having Sherry Turkle (not to pick on her) vocally doubt the empathetic potential of the Internet just allows the next speaker (and countless conference attendees) to say “well, I certainly don’t agree with Sherry Turkle.” Nevertheless, one of the best ways to inoculate yourself against the charge of unthinking optimism is to periodically turn the microphone over to a critic. But perhaps the most important things that such critics say are the ways in which they wind up qualifying their comments – thus Turkle says “I’m not anti-technology,” Toyama disparages Facebook only to immediately add “I love Facebook,” and fears regarding the threat posed by AI get laughed off as the paranoia of today’s “apex predators” (rich white men) being concerned that they will lose their spot at the top of the food chain. 
The environmental costs of the cloud are raised, the biased nature of algorithms is exposed – but these points are couched against a backdrop that says to the assembled technologists “do better” not “the Internet is a corporately controlled surveillance mall, and it’s overrated.” The heresies that are permitted are those that point out the rough edges that need to be rounded so that the pill can be swallowed. To return to the previous paragraph, this is not to say that PDF needs to invite John Zerzan or Chellis Glendinning to speak…but one thing that would certainly expose the weaknesses of the PDF “we” is to solicit viewpoints that genuinely come from outside of that “we.” Granted, PDF is more TED talk than FRED talk.

    And of course, and most importantly, one must think of the “we” that goes totally unheard. Yes, comments were made about the environmental cost of the cloud and passing phrases recognized mining – but PDF’s “we” seems to mainly refer to those who use the Internet and Internet-connected devices. Miners, those assembling high-tech devices, e-waste recyclers, and the other victims of those processes are only a hazy phantom presence. They are mentioned in passing, but never fully included in the “we.” PDF’s “the tech we need” is for a “we” that loves the Internet and just wants it to be even better and perhaps a bit nicer, while Microsoft’s “we” in “the future is what we make it” is a “we” that is committed to staying profitable. But amidst such statements there is an even larger group saying: “we are not being included.” That unheard “we” is the same “we” from the classic IWW song “we have fed you all for a thousand years” (Green et al 2016). And, as the second line of that song rings out, “and you hail us still unfed.”

    “Need”

    When one looks out upon the world it is almost impossible not to be struck by how much is needed. People need homes, people need not just to be tolerated but accepted, people need food, people need peace, people need stability, people need the ability to love without being subject to oppression, people need to be free from bigotry and xenophobia, people need…this list could continue with a litany of despair until we all don sackcloth. But do people need VR headsets? Do people need Facebook or Twitter? Do those in possession of still-functioning high-tech devices need to trade them in every eighteen months? Of course, it is important to note that technology does have an important role in meeting people’s needs – after all, “shelter” refers to all sorts of technology. Yet when PDF talks about “the tech we need,” the “need” is shaded by what is meant by “the tech,” and as was previously discussed that really means “the Internet.” Therefore it is fair to ask: do people really “need” an iPhone with a slightly larger screen? Do people really need Uber? Do people really need to be able to download five million songs in thirty seconds? While human history is a tale of horror, it requires a funny kind of simplistic hubris to think that World War II could have been prevented if only everybody had been connected on Facebook (to be fair, nobody at PDF was making this argument). Are today’s “needs” (and they are great) really a result of a lack of technology? It seems that we already have much of the tech that is required to meet today’s needs, and we don’t even require new ways to distribute it. Or, to put it clearly at the risk of being grotesque: people in your city are not currently going hungry because they lack the proper app.

    The question of “need” flows from both the notion of “the tech” and “we” – and as was previously mentioned, it would be easy to put forth a compelling argument that “the tech we need” involves water pipes that don’t poison people with lead, but such an argument is not made when “the tech” means the Internet and when the “we” has already reached the top of Maslow’s hierarchy of needs. If one takes a more expansive view of “the tech” and “we,” then the range of what is needed changes accordingly. This issue – the way “tech,” “we,” and “need” intersect – is hardly a new concern. It is what prompted Ivan Illich (1973) to write, in Tools for Conviviality, that:

    People need new tools to work with rather than tools that ‘work’ for them. They need technology to make the most of the energy and imagination each has, rather than more well-programmed energy slaves. (10)

    Granted, it is certainly fair to retort “but who is the ‘we’ referred to by Illich” or “why can’t the Internet be the type of tool that Illich is writing about” – but here Illich’s response would be in line with the earlier referral to Mumford. Namely: accusations of technological determinism aside, maybe it’s fair to say that some technologies are oversold, and maybe the occasional emphasis on the way that the Internet helps activists serves as a patina that distracts from what is ultimately an environmentally destructive surveillance system. Is the person tethered to their smart phone being served by that device – or are they serving it? Or, to allow Illich to reply with his own words:

    As the power of machines increases, the role of persons more and more decreases to that of mere consumers. (11)

    Mindfulness apps, cameras on phones that can be used to film oppression, new ways of downloading music, programs for raising money online, platforms for connecting people on a political campaign – the user is empowered as a citizen but this empowerment tends to involve needing the proper apps. And therefore that citizen needs the proper device to run that app, and a good wi-fi connection, and… the list goes on. Under the ideology captured in the PDF’s “the tech we need” to participate in democracy becomes bound up with “to consume the latest in Internet innovation.” Every need can be met, provided that it is the type of need, which the Internet can meet. Thus the old canard “to the person with a hammer every problem looks like a nail” finds its modern equivalent in “to the person with a smart phone and a good wi-fi connection, every problem looks like one that can be solved by using the Internet.” But as for needs? Freedom from xenophobia and oppression are real needs – undoubtedly – but the Internet has done a great deal to disseminate xenophobia and prop up oppressive regimes. Continuing to double down on the Internet seems like doing the same thing “we” have been doing and expecting different results because finally there’s an “app for that!”

    It is, again, quite clear that those assembled at PDF came together with well-meaning attitudes, but as Simone Weil (2010) put it:

    Intentions, by themselves, are not of any great importance, save when their aim is directly evil, for to do evil the necessary means are always within easy reach. But good intentions only count when accompanied by the corresponding means for putting them into effect. (180)

    The ideology present at PDF emphasizes that the Internet is precisely “the means” for the realization of its attendees’ good intentions. And those who took to the stage spoke rousingly of using Facebook, Twitter, smart phones, and new apps for all manner of positive effects – but hanging in the background (sometimes more clearly than at other times) is the fact that these systems also track their users’ every move and can be used just as easily by those with very different ideas as to what “positive effects” look like. The issue of “need” is therefore ultimately a matter not simply of need but of “ends” – but in framing things in terms of “the tech we need” what is missed is the more difficult question of what “ends” we seek. Instead “the tech we need” subtly shifts the discussion towards one of “means.” But, as Jacques Ellul recognized, the emphasis on means – especially technological ones – can just serve to confuse the discussion of ends. As he wrote:

    It must always be stressed that our civilization is one of means…the means determine the ends, by assigning us ends that can be attained and eliminating those considered unrealistic because our means do not correspond to them. At the same time, the means corrupt the ends. We live at the opposite end of the formula that ‘the ends justify the means.’ We should understand that our enormous present means shape the ends we pursue. (Ellul 2004, 238)

    The Internet and the raft of devices and platforms associated with it are a set of “enormous present means” – and in celebrating these “means” the ends begin to vanish. It ceases to be a situation where the Internet is the mean to a particular end, and instead the Internet becomes the means by which one continues to use the Internet so as to correct the current problems with the Internet so that the Internet can finally achieve the… it is a snake eating its own tail.

    And its own tale.

    Conclusion: The New York Ideology

    In 1995, Richard Barbrook and Andy Cameron penned an influential article that described what they called “The Californian Ideology” which they characterized as

    promiscuously combin[ing] the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies. This amalgamation of opposites has been achieved through a profound faith in the emancipatory potential of the new information technologies. In the digital utopia, everybody will be both hip and rich. (Barbrook and Cameron 2001, 364)

    As the placing of a state’s name in the title of the ideology suggests, Barbrook and Cameron were setting out to describe the viewpoint underlying the firms that were (at that time) nascent in Silicon Valley. They sought to describe the mixture of hip futurism and libertarian politics that worked wonderfully in the boardroom, even if there was now somebody in the boardroom wearing a Hawaiian print shirt – or perhaps jeans and a hoodie. As companies like Google and Facebook have grown, the “Californian Ideology” has been disseminated widely, and though such companies periodically issued proclamations about not being evil and claimed that connecting the world was their goal, they maintained their utopian confidence in the “independence of cyberspace” while directing a disdainful gaze towards the “dinosaurs” of representative democracy that would dare to question their zeal. And though it is a more recent player in the game, one is hard-pressed to find a better example than Uber of the fact that this ideology is alive and well.

    The Personal Democracy Forum is not advancing the Californian Ideology. And though the event may have featured a speaker who suggested that the assembled “we” think of the “founding fathers” as start-up founders, the forum continually returned to the question of democracy. While the Personal Democracy Forum shares the “faith in the emancipatory potential of the new information technologies” with Silicon Valley startups, it seems less “free-wheeling” and more skeptical of “entrepreneurial zeal.” In other words, whereas Barbrook and Cameron spoke of “The Californian Ideology,” what PDF makes clear is that there is also a “New York Ideology,” whose ideological hallmark is an embrace of the positive potential of new information technologies tempered by the belief that such potential can best be reached by taming the excesses of unregulated capitalism. Where the Californian Ideology says “libertarian” the New York Ideology says “liberation.” Where the Californian Ideology celebrates capital, the New York Ideology celebrates the power found in a high-tech enhanced capitol. The New York Ideology balances the excessive optimism of the Californian Ideology by acknowledging the existence of criticism, and proceeds to neutralize this criticism by making it part and parcel of the celebration of the Internet’s potential. The New York Ideology seeks to correct the hubris of the Californian Ideology by pointing out that it is precisely this hubris that turns many away from the faith in the “emancipatory potential.” If the Californian Ideology is broadcast from the stage at the newest product unveiling or celebratory conference, then the New York Ideology is disseminated from conferences like PDF and the occasional skeptical TED talk. The New York Ideology may be preferable to the Californian Ideology in a thousand ways – but ultimately it is the ideology that manifests itself in the “we” one encounters in the slogan “the tech we need.”

    Or, to put it simply, whereas the Californian Ideology is “wealth meaning,” the New York Ideology is “well-meaning.”

    Of course, it is odd and unfair to speak of either ideology as “Californian” or “New York.” California is filled with Californians who do not share in that ideology, and New York is filled with New Yorkers who do not share in that ideology either. Yet to dub what one encounters at PDF “The New York Ideology” is to indicate the way in which current discussions around the Internet are being framed not solely by “The Californian Ideology” but also by a parallel position wherein faith in Internet-enabled solutions puts aside its libertarian sneer to adopt a democratic smile. One could just as easily call the New York Ideology the “Tech On Stage Ideology” or the “Civic Tech Ideology” – perhaps it would be better to refer to the Californian Ideology as the SV Ideology (Silicon Valley) and the New York Ideology as the CV Ideology (civic tech). But if the Californian Ideology refers to the tech campus in Silicon Valley, then the New York Ideology refers to the foundation based in New York – one that may very well be getting much of its funding from the corporations that call Silicon Valley home. While Uber sticks with the Californian Ideology, companies like Facebook have begun transitioning to the New York Ideology so that they can have their panoptic technology and their playgrounds too. Meanwhile, new tech companies emerging in New York (like Kickstarter and Etsy) make positive proclamations about ethics and democracy while making it seem that ethics and democracy are just more consumption choices one picks from the list of downloadable apps.

    The Personal Democracy Forum is a fascinating event. It is filled with intelligent individuals who speak of democracy with unimpeachable sincerity, and activists who really have been able to use the Internet to advance their causes. But despite all of this, the ideological emphasis on “the tech we need” remains based upon a quizzical notion of “need,” a problematic concept of “we,” and a reductive definition of “tech.” For statements like “the tech we need” are not value neutral – and even if the surface ethics are moving and inspirational, sometimes a problematic ideology is most easily disseminated when it takes care to dispense with ideologues. And though the New York Ideology is much more subtle than the Californian Ideology – and makes space for some critical voices – it remains a vehicle for disseminating an optimistic faith that a technologically enhanced Moses shall lead us into the high-tech promised land.

    The 2016 Personal Democracy Forum put forth an inspirational and moving vision of “the tech we need.”

    But when it comes to promises of technological salvation, isn’t it about time that “we” stopped getting our hopes up?

    Coda

    I confess, I am hardly free of my own ideological biases. And I recognize that everything written here may simply be dismissed by those who find it hypocritical that I composed such remarks on a computer and then posted them online. But I would say that the more we find ourselves using technology, the more careful we must be that we do not allow ourselves to be used by that technology.

    And thus, I shall simply conclude by once more citing a dead, but prescient, pessimist:

    I have no illusions that my arguments will convince anyone. (Ellul 1994, 248)

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently working towards a PhD in the History and Sociology of Science department at the University of Pennsylvania. His research areas include media refusal and resistance to technology, ideologies that develop in response to technological change, and the ways in which technology factors into ethical philosophy – particularly in regard to the way in which Jewish philosophers have written about ethics and technology. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck, where an earlier version of this post first appeared, and is a frequent contributor to The b2 Review Digital Studies section.

    _____

    Works Cited

    • Barbrook, Richard and Andy Cameron. 2001. “The Californian Ideology.” In Peter Ludlow, ed., Crypto Anarchy, Cyberstates and Pirate Utopias. Cambridge: MIT Press. 363-387.
    • Ellul, Jacques. 2004. The Political Illusion. Eugene, OR: Wipf and Stock.
    • Ellul, Jacques. 1994. A Critique of the New Commonplaces. Eugene, OR: Wipf and Stock.
    • Green, Archie, David Roediger, Franklin Rosemont, and Salvatore Salerno. 2016. The Big Red Songbook: 250+ IWW Songs! Oakland, CA: PM Press.
    • Illich, Ivan. 1973. Tools for Conviviality. New York: Harper and Row.
    • Marvin, Carolyn. 1988. When Old Technologies Were New: Thinking About Electric Communication in the Late Nineteenth Century. New York: Oxford University Press.
    • Marx, Leo. 1997. “‘Technology’: The Emergence of a Hazardous Concept.” Social Research 64:3 (Fall). 965-988.
    • Mumford, Lewis. 1964. “Authoritarian and Democratic Technics.” Technology and Culture 5:1 (Winter). 1-8.
    • Weil, Simone. 2010. The Need for Roots. London: Routledge.
  • The Social Construction of Acceleration

    The Social Construction of Acceleration

    a review of Judy Wajcman, Pressed for Time: The Acceleration of Life in Digital Capitalism (Chicago, 2014)
    by Zachary Loeb

    ~

    Patience seems anachronistic in an age of high speed downloads, same day deliveries, and on-demand assistants who can be summoned by tapping a button. Though some waiting may still occur, the amount of time spent in anticipation seems to be constantly diminishing, and every day a new bevy of upgrades and devices promise that tomorrow things will be even faster. Such speed is comforting for those who feel that they do not have a moment to waste. Patience becomes a luxury for which we do not have time, even as the technologies that claimed they would free us wind up weighing us down.

    Yet it is far too simplistic to heap the blame for this situation on technology, as such. True, contemporary technologies may be prominent characters in the drama in which we are embroiled, but as Judy Wajcman argues in her book Pressed for Time, we should not approach technology as though it exists separately from the social, economic, and political factors that shape contemporary society. Indeed, to understand technology today it is necessary to recognize that “temporal demands are not inherent to technology. They are built into our devices by all-too-human schemes and desires” (3). In Wajcman’s view, technology is not the true culprit, nor is it an out-of-control menace. It is instead a convenient distraction from the real forces that make it seem as though there is never enough time.

    Wajcman sets a course that refuses to uncritically celebrate technology, whilst simultaneously disavowing the damning of modern machines. She prefers to draw upon “a social shaping approach to technology” (4) which emphasizes that the shape technology takes in a society is influenced by many factors. If current technologies leave us feeling exhausted, overwhelmed, and unsatisfied it is to our society we must look for causes and solutions – not to the machine.

    The vast array of Internet-connected devices gives rise to a sense that everything is happening faster, that things are accelerating, and that things are changing more rapidly than in previous epochs. This is the kind of seemingly uncontroversial belief that Wajcman seeks to counter. While there is a present predilection for speed, the ideas of speed and acceleration remain murky, which may not be purely accidental when one considers “the extent to which the agenda for discussing the future of technology is set by the promoters of new technological products” (14). Rapid technological and societal shifts may herald the emergence of an “acceleration society” wherein speed increases even as individuals experience a decrease in available time. Though some would describe today’s world (at least in affluent nations) as a synecdoche of the “acceleration society,” it would be a mistake to believe this to be a wholly new invention.

    Nevertheless the instantaneous potential of information technologies may seem to signal a break with the past – as the sort of “timeless time” which “emerged in financial markets…is spreading to every realm” (19). Some may revel in this speed even as others put out somber calls for a slow-down, but either approach risks being reductionist. Wajcman pushes back against the technological determinism lurking in the thoughts of those who revel and those who rebel, noting “that all technologies are inherently social in that they are designed, produced, used and governed by people” (27).

    Both today and yesterday “we live our lives surrounded by things, but we tend to think about only some of them as being technologies” (29). The impacts of given technologies depend upon the ways in which they are actually used, and Wajcman emphasizes that people often have a great deal of freedom in altering “the meanings and deployment of technologies” (33).

    Over time certain technologies recede into the background, but the history of technology is a litany of devices that made profound impacts in determining experiences of time and speed. After all, the clock is itself a piece of technology, and thus we assess our very lack of time by looking to a device designed to measure its passage. The measurement of time was a technique used to standardize – and often exploit – labor, and the ability to carefully keep track of time gave rise to an ideology in which time came to be interchangeable with money. As a result speed came to be associated with profit even as slowness became associated with sloth. The speed of change became tied up in notions of improvement and progress, and thus “the speed of change becomes a self-evident good” (44). The speed promised by inventions is therefore seen as part of the march of progress, though a certain irony emerges as widespread speed leads to new forms of slowness – the mass diffusion of cars leading to traffic jams. And what was fast yesterday is often deemed slow today. As Wajcman shows, the experience of time compression, tied to “our valorization of a busy lifestyle, as well as our profound ambivalence toward it” (58), has roots that go far back.

    Time takes on an odd quality – to have it is a luxury, even as constant busyness becomes a sign of status. A certain dissonance emerges wherein individuals feel that they have less time even as studies show that people are not necessarily working more hours. For Wajcman, much of the explanation is that time pressure is as much about “real increases in the combined work commitments of family members as it is about changes in the working time of individuals,” with such “time poverty” being experienced particularly acutely “among working mothers, who juggle work, family, and leisure” (66). To understand time pressure it is essential to consider the degree to which people are free to use their time as they see fit.

    Societal pressures on the time of men and women differ, and though the hours spent doing paid labor may not have shifted dramatically, the hours parents (particularly mothers) spend performing unpaid labor remains high. Furthermore, “despite dramatic improvements in domestic technology, the amount of time spent on household tasks has not actually shown any corresponding dramatic decline” (68). Though household responsibilities can be shared equitably between partners, much of the onus still falls on women. As a busy event-filled life becomes a marker of status for adults so too may they attempt to bestow such busyness on the whole family, but busy parents needing to chaperone and supervise busy children only creates a further crunch on time. As Wajcman notes “perhaps we should be giving as much attention to the intensification of parenting as to the intensification of work” (82).

    Yet the story of domestic, unpaid and unrecognized, labor is a particularly strong example of a space wherein the promises of time-saving technological fixes have fallen short. Instead, “devices allegedly designed to save labor time fail to do so, and in some cases actually increase the time needed for the task” (111). The variety of technologies marketed for the household are often advertised as time savers, yet altering household work is not the same as eliminating it – even as certain tasks continually demand a significant investment of real time.

    Many of the technologies that have become mainstays of modern households – such as the microwave – were not originally marketed as such, and thus the household represents an important example of the way in which technologies “are both socially constructed and society shaping” (122). Of further significance is the way in which changing labor relations have also led to shifts in the sphere of domestic work, wherein those who can afford it are able to buy themselves time by purchasing food from restaurants or by employing others for tasks such as child care and cleaning. Though the image of “the home of the future,” courtesy of the Internet of Things, may promise an automated abode, Wajcman highlights that those making and selling such technologies replicate society’s dominant blind spot for the true tasks of domestic labor. Indeed, the Internet of Things tends to “celebrate technology and its transformative power at the expense of home as a lived practice” (130). Thus, domestic technologies present an important example of the way in which those designing and marketing technologies instill their own biases into the devices they build.

    Beyond the household, information communications technologies (ICTs) allow people to carry their office in their pocket as e-mails and messages ping them long after the official work day has ended. However, the idea “of the technologically tethered worker with no control over their own time…fails to convey the complex entanglement of contemporary work practices, working time, and the materiality of technical artifacts” (88). Thus, the problem is not that an individual can receive e-mail when they are off the clock, the problem is the employer’s expectation that this worker should be responding to work related e-mails while off the clock – the issue is not technological, it is societal. Furthermore, Wajcman argues, communications technologies permit workers to better judge whether or not something is particularly time sensitive. Though technology has often been used by employers to control employees, approaching communications technologies from an STS position “casts doubt on the determinist view that ICTs, per se, are driving the intensification of work” (107). Indeed some workers may turn to such devices to help manage this intensification.

    Technologies offer many more potentialities than those that are presented in advertisements. Though the ubiquity of communications devices may “mean that more and more of our social relationships are machine-mediated” (138), the focus should be as much on the word “social” as on the word “machine.” Much has been written about the way that individuals use modern technologies and the ways in which they can give rise to families wherein parents and children alike are permanently staring at a screen, but Wajcman argues that these technologies should “be regarded as another node in the flows of affect that create and bind intimacy” (150). It is not that these devices are truly stealing people’s time, but that they are changing the ways in which people spend the time they have – allowing harried individuals to create new forms of being together, which “needs to be understood as adding a dimension to temporal experience” (158), blurring the boundaries between work and leisure.

    The notion that the pace of life has been accelerated by technological change is a belief that often goes unchallenged; however, Wajcman emphasizes that “major shifts in the nature of work, the composition of families, ideas about parenting, and patterns of consumption have all contributed to our sense that the world is moving faster than hitherto” (164). The experience of acceleration can be intoxicating, and the belief in a culture of improvement wrought by technological change may be a rare glimmer of positivity amidst gloomy news reports. However, “rapid technological change can actually be conservative, maintaining or solidifying existing social arrangements” (180). At moments when so much emphasis is placed upon the speed of technologically sired change, the first step may not be to slow down but to insist that people consider the ways in which these machines have been socially constructed and how they have shaped society – and if we fear that we are speeding towards a catastrophe, then it becomes necessary to consider how they can be socially constructed to avoid such a collision.

    * * *

    It is common, amongst current books assessing the societal impacts of technology, for authors to present themselves as critical while simultaneously holding to an unshakable faith in technology. This often leaves such texts in an odd position: they want to advance a radical critique but their argument remains loyal to a conservative ideology. With Pressed for Time, Judy Wajcman has demonstrated how to successfully achieve the balance between technological optimism and pessimism. It is a great feat, and one that Pressed for Time executes skillfully. When Wajcman writes, towards the end of the book, that she wants “to embrace the emancipatory potential of technoscience to create new meanings and new worlds while at the same time being its chief critic” (164) she is not writing of a goal but is affirming what she has achieved with Pressed for Time (a similar success can be attributed to Wajcman’s earlier books TechnoFeminism [Polity, 2004] and the essential Feminism Confronts Technology [Penn State, 1991]).

    By holding to the framework of the social shaping of technology, Pressed for Time provides an investigation of time and speed that is grounded in a nuanced understanding of technology. It would have been easy for Wajcman to focus strictly on contemporary ICTs, but what her argument makes clear is that to do so would have been to ignore the factors that make contemporary technology understandable. A great success of Pressed for Time is the way in which Wajcman shows that the current sensation of being pressed for time is not a modern invention. Instead, the emphasis on speed as a hallmark of progress and improvement is a belief that has been at work for decades. Wajcman avoids the stumbling block of technological determinism and carefully points out that falling for such beliefs leads to critiques being directed incorrectly. Written in a thoroughly engaging style, Pressed for Time is an academic book that can serve as an excellent introduction to the terminology and style of STS scholarship.

    Throughout Pressed for Time, Wajcman repeatedly notes the ways in which the meanings of technologies transcend what a device may have been narrowly intended to do. For Wajcman people’s agency is paramount, as people have the ability to construct meaning for technology even as such devices wind up shaping society. Yet an area in which one could push back against Wajcman’s views would be to ask if communications technologies have shaped society to such an extent that it is becoming increasingly difficult to construct new meanings for them. Perhaps the “slow movement,” which Wajcman describes as unrealistic since “we cannot in fact choose between fast and slow, technology and nature” (176), is best perceived as a manifestation of the sense that much of technology’s “emancipatory potential” has gone awry – that some technologies offer little in the way of liberating potential. After all, the constantly connected individual may always feel rushed – but they may also feel as though they are under constant surveillance, that their every online move is carefully tracked, and that, through the rise of wearable technology and the Internet of Things, soon all of their actions will be just as easily monitored. Wajcman makes an excellent and important point by noting that humans have always lived surrounded by technologies – but the technologies that surrounded an individual in 1952 were not sending every bit of minutiae to large corporations (and governments). Hanging in the background of the discussion of speed are also the questions of planned obsolescence and the mountains of toxic technological trash that wind up flowing from affluent nations to developing ones. The technological speed experienced in one country is the “slow violence” experienced in another. To make these critiques, though, is in no way to seriously diminish Wajcman’s argument, especially as many of these concerns simply speak to the economic and political forces that have shaped today’s technology.

    Pressed for Time is a Rosetta stone for decoding life in high speed, high tech societies. Wajcman deftly demonstrates that the problems facing technologically-addled individuals today are not as new as they appear, and that the solutions on offer are similarly not as wildly inventive as they may seem. Through analyzing studies and history, Wajcman shows the impacts of technologies, while making clear why it is still imperative to approach technology with a consideration of class and gender in mind. With Pressed for Time, Wajcman champions the position that the social shaping of technology framework still provides a robust way of understanding technology. As Wajcman makes clear the way technologies “are interpreted and used depends on the tapestry of social relations woven by age, gender, race, class, and other axes of inequality” (183).

    It is an extremely timely argument.
    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck and is a frequent contributor to The b2 Review Digital Studies section.


  • Towards a Bright Mountain: Laudato Si' as Critique of Technology

    Towards a Bright Mountain: Laudato Si' as Critique of Technology

    by Zachary Loeb

    ~

    “We hate the people who make us form the connections we do not want to form.” – Simone Weil

    1. Repairing Our Common Home

    When confronted with the unsettling reality of the world it is easy to feel overwhelmed and insignificant. This feeling of powerlessness may give rise to a temptation to retreat – or to simply shrug – and though people may suspect that they bear some responsibility for the state of affairs in which they are embroiled the scale of the problems makes individuals doubtful that they can make a difference. In this context, the refrain “well, it could always be worse” becomes a sort of inured coping strategy, though this dark prophecy has a tendency to prove itself true week after week and year after year. Just saying that things could be worse than they presently are does nothing to prevent things from deteriorating further. It can be rather liberating to decide that one is powerless, to conclude that one’s actions do not truly matter, to imagine that one will be long dead by the time the bill comes due – for taking such positions enables one to avoid doing something difficult: changing.

    A change is coming. Indeed, the change is already here. The question is whether people are willing to consciously change to meet this challenge or if they will only change when they truly have no other option.

    The matter of change is at the core of Pope Francis’s recent encyclical Laudato Si’ (“Praise be to You”). Much of the discussion around Laudato Si’ has characterized the document as being narrowly focused on climate change and the environment. Though Laudato Si’ has much to say about the environment, and the threat climate change poses, it is rather reductive to cast Laudato Si’ as “the Pope’s encyclical about the environment.” Granted, that many are describing the encyclical in such terms is understandable, as framing it in that manner makes it appear quaint – and may lead many to conclude that they do not need to spend the time reading through the encyclical’s 245 sections (roughly 200 pages). True, Pope Francis is interested in climate change, but in the encyclical he proves far more interested in the shifts in the social, economic, and political climate that have allowed climate change to advance. The importance of Laudato Si’ is precisely that it is less about climate change than it is about the need for humanity to change, as Pope Francis writes:

    “we cannot adequately combat environmental degradation unless we attend to causes related to human and social degradation.” (Francis, no. 48)

    And though the encyclical is filled with numerous pithy aphorisms it is a text that is worth engaging in its entirety.

    Lest there be any doubt, Laudato Si’ is a difficult text to read. Not because it is written in archaic prose, or because it assumes the reader is learned in theology, but because it is discomforting. Laudato Si’ does not tell the reader that they are responsible for the world, instead it reminds them that they have always been responsible for the world, and then points to some of the reasons why this obligation may have been forgotten. The encyclical calls on those with their heads in the clouds (or head in “the cloud”) to see they are trampling the poor and the planet underfoot. Pope Francis has the audacity to suggest, despite what the magazine covers and advertisements tell us, that there is no easy solution, and that if we are honest with ourselves we are not fulfilled by consumerism. What Laudato Si’ represents is an unabashed ethical assault on high-tech/high-consumption life in affluent nations. Yet it is not an angry diatribe. Insofar as the encyclical represents a hammer it is not as a blunt instrument with which one bludgeons foes into submission, but is instead a useful tool one might take up to pull out the rusted old nails in order to build again, as Pope Francis writes:

    “Humanity still has the ability to work together in building our common home.” (Francis, no. 13)

    Laudato Si’ is a work of intense, even radical, social criticism in the fine raiment of a papal encyclical. The text contains an impassioned critique of technology, an ethically rooted castigation of capitalism, a defense of the environment that emphasizes that humans are part of that same environment, and a demand that people accept responsibility. There is much in Laudato Si’ that those well versed in activism, organizing, environmentalism, critical theory, the critique of technology, radical political economy (and so forth) will find familiar – and it is a document that those bearing an interest in the aforementioned areas would do well to consider. While the encyclical (it was written by the Pope, after all) contains numerous references to Jesus, God, the Church, and the saints – it is clear that Pope Francis intends the document for a wide (not exclusively Catholic, or even Christian) readership. Indeed, those versed in other religious traditions will likely find much in the encyclical that echoes their own beliefs – and the same can likely be said of those interested in ethics with or without the presence of God. While many sections of Laudato Si’ speak to the religious obligation of believers, Pope Francis makes a point of being inclusive to those of different faiths (and no faith) – an inclusion which speaks to his recognition that the problems facing humanity can only be solved by all of humanity. After all:

    “we need only take a frank look at the facts to see that our common home is falling into serious disrepair.” (Francis, no. 61)

    The term “common home” refers to the planet and all those – regardless of their faith – who dwell there.

    Nevertheless, there are several sections in Laudato Si’ that will serve to remind the reader that Pope Francis is the male head of a patriarchal organization. Pope Francis stands firm in his commitment to the poor, and makes numerous comments about the rights of indigenous communities – but he has little to say about women. While women certainly number amongst the poor and indigenous, Laudato Si’ does not devote attention to the ways in which the theologies and ideologies of dominance that have wreaked havoc on the planet have also oppressed women. It is perhaps unsurprising that the only woman Laudato Si’ focuses on at any length is Mary, and that throughout the encyclical Pope Francis continually feminizes nature whilst referring to God with terms such as “Father.” The importance of equality is a theme revisited numerous times in Laudato Si’, and though Pope Francis addresses his readers as “sisters and brothers” it is worth wondering whether or not this entails true equality between all people – regardless of gender. It is vital to recognize this shortcoming of Laudato Si’ – as it is a flaw that undermines much of the ethical heft of the argument.

    In the encyclical Pope Francis laments the lack of concern being shown to those – who are largely poor – already struggling against the rising tide of climate change, noting:

    “Our lack of response to these tragedies involving our brothers and sisters points to the loss of that sense of responsibility to our fellow men and women upon which all civil society is founded.” (Francis, no. 25)

    Yet it is worth pushing on this “sense of responsibility to our fellow men and women” – and doing so involves a recognition that too often throughout history (and still today) “civil society” has been founded on an emphasis on “fellow men” and not necessarily upon women. In considering responsibilities towards other people Simone Weil wrote:

    “The object of any obligation, in the realm of human affairs, is always the human being as such. There exists an obligation towards every human being for the sole reason that he or she is a human being, without any other condition requiring to be fulfilled, and even without any recognition of such obligation on the part of the individual concerned.” (Weil, 5 – The Need for Roots)

    To recognize that the obligation is due to “the human being as such” – which seems to be something Pope Francis is claiming – necessitates acknowledging that “the human being” is still often defined as male. And this is a bias that can easily be replicated, even in encyclicals that tout the importance of equality.

    There are aspects of Laudato Si’ that will give readers cause to furrow their brows; however, it would be unfortunate if the shortcomings of the encyclical led people to dismiss it completely. After all, Laudato Si’ is not a document that one reads, it is a text with which one wrestles. And, as befits a piece written by a former nightclub bouncer, Laudato Si’ proves to be a challenging and scrappy combatant. Granted, the easiest way to emerge victorious from a bout is to refuse to engage in it in the first place – which is the tactic that many seem to be taking towards Laudato Si’. Yet it should be noted that those whose responses are variations of “the Pope should stick to religion” are largely revealing that they have not seriously engaged with the encyclical. Laudato Si’ does not claim to be a scientific document, but instead recognizes – in understated terms – that:

    “A very solid scientific consensus indicates that we are presently witnessing a disturbing warming of the climate system.” (Francis, no. 23)

    And that,

    “Climate change is a global problem with grave implications: environmental, social, economic, political and for the distribution of goods. It represents one of the principal challenges facing humanity in our day. Its worst impact will probably be felt by developing countries in the coming decades.” (Francis, no. 25)

    However, when those who make a habit of paying no heed to scientists themselves make derisive comments that the Pope is not a scientist, they are primarily delivering a television-news-bite-ready quip which ignores that the climate Pope Francis is mainly concerned with is today’s social, economic, and political climate.

    As has been previously noted, Laudato Si’ is as much a work of stinging social criticism as it is a theological document. It is a text which benefits from the particular analysis of people – be they workers, theologians, activists, scholars, and the list could go on – with knowledge in the particular fields the encyclical touches upon. And yet, one of the most striking aspects of the encyclical – that which poses a particular challenge to the status quo – is the way in which the document engages with technology.

    For, it may well be that Laudato Si’ will change the tone of current discussions around technology and its role in our lives.

    At least one might hope that it will do so.

    Image source: Photo of Pope Francis, Christoph Wagener via Wikipedia, with further modifications by the author of this piece.

    2. Meet the New Gods, Not the Same as the Old God

    Perhaps being a person of faith makes it easier to recognize the faith of others. Or, put another way, perhaps belief in God makes one attuned to the appearance of new gods. While some studies have shown that in recent years the number of individuals who do not adhere to a particular religious doctrine has risen, Laudato Si’ suggests – though not specifically in these terms – that people may have simply turned to new religions. In the book To Have or To Be?, Erich Fromm uses the term “religion” not to:

    “refer to a system that has necessarily to do with a concept of God or with idols or even to a system perceived as religion, but to any group-shared system of thought and action that offers the individual a frame of orientation and an object of devotion.” (Fromm, 135 – italics in original)

    Though the author of Laudato Si’, obviously, subscribes to a belief system that has a heck-of-a-lot to do “with a concept of God” – the main position of the encyclical is staked out in opposition to the rise of a “group-shared system of thought” which has come to offer many people both “a frame of orientation and an object of devotion.” Pope Francis warns his readers against giving fealty and adoration to false gods – gods which are as appealing to atheists as they are to old-time-believers. And while Laudato Si’ is not a document that seeks (not significantly, at least) to draw people into the Catholic church, it is a document that warns people against the religion of technology. After all, we cannot return to the Garden of Eden by biting into an Apple product.

    It is worth recognizing that there are many reasons why the religion of technology so easily wins converts. The world is a mess and the news reports are filled with a steady flow of horrors – the dangers of environmental degradation seem to grow starker by the day, as scientists issue increasingly dire predictions that we may have already passed the point at which we needed to act. Yet one of the few areas that continually operates as a site of unbounded optimism is the stream of missives fired off by the technology sector and its boosters. Wearable technology, self-driving cars, the Internet of Things, delivery drones, artificial intelligence, virtual reality – technology provides a vision of the future that is not fixated on rising sea levels and extinction. Indeed, against the backdrop of extinction some even predict that through the power of techno-science humans may not be far off from being able to bring back species that had previously gone extinct.

    Technology has become a site of millions of minor miracles that have drawn legions of adherents to the technological god and its sainted corporations – and while technology has been a force present with humans for nearly as long as there have been humans, technology today seems increasingly to be presented in a way that encourages people to bask in its uncanny glow. Contemporary technology – especially of the Internet-connected variety – promises individuals that they will never be alone, that they will never be bored, that they will never get lost, and that they will never have a question for which they cannot execute a web search and find an answer. If older religions spoke of a god who was always watching, and always with the believer, then the smart phone replicates and reifies these beliefs – for it is always watching, and it is always with the believer. To return to Fromm’s description of religion it should be fairly apparent that technology today provides people with “a frame of orientation and an object of devotion.” It is thus not simply that technology comes to be presented as a solution to present problems, but that technology comes to be presented as a form of salvation from all problems. Why pray if “there’s an app for that”?

    In Laudato Si’, Pope Francis warns against this new religion by observing:

    “Life gradually becomes a surrender to situations conditioned by technology, itself viewed as the principle key to the meaning of existence.” (Francis, no. 110)

    Granted, the question should be asked: what is “the meaning of existence” supplied by contemporary technology? The various denominations of the religion of technology are skilled at offering appealing answers to this question filled with carefully tested slogans about making the world “more open and connected.” What the religion of technology continually offers is not so much a way of being in the world as a way of escaping from the world. Without mincing words, the world described in Laudato Si’ is rather distressing: it is a world of vast economic inequality, rising sea levels, misery, existential uncertainty, mountains of filth discarded by affluent nations (including e-waste), and the prospects are grim. By comparison the religion of technology provides a shiny vision of the future, with the promise of escape from earthly concerns through virtual reality, delivery on demand, and the truly transcendent dream of becoming one with machines. The religion of technology is not concerned with the next life, or with the lives of future generations; it is about constructing a new Eden in the now, for those who can afford the right toys. Even if constructing this heaven consigns much of the world’s population to hell. People may not be bending their necks in prayer, but they’re certainly bending their necks to glance at their smart phones. As David Noble wrote:

    “A thousand years in the making, the religion of technology has become the common enchantment, not only of the designers of technology but also of those caught up in, and undone by, their godly designs. The expectation of ultimate salvation through technology, whatever the immediate human and social costs, has become the unspoken orthodoxy, reinforced by a market-induced enthusiasm for novelty and sanctioned by millenarian yearnings for new beginnings. This popular faith, subliminally indulged and intensified by corporate, government, and media pitchmen, inspires an awed deference to the practitioners and their promises of deliverance while diverting attention from more urgent concerns.” (Noble, 207)

    Against this religious embrace of technology, and the elevation of its evangels, Laudato Si’ puts forth a reminder that one can, and should, appreciate the tools which have been invented – but one should not worship them. To return to Erich Fromm:

    “The question is not one of religion or not? but of which kind of religion? – whether it is one that furthers human development, the unfolding of specifically human powers, or one that paralyzes human growth…our religious character may be considered an aspect of our character structure, for we are what we are devoted to, and what we are devoted to is what motivates our conduct. Often, however, individuals are not even aware of the real objects of their personal devotion and mistake their ‘official’ beliefs for their real, though secret religion.” (Fromm, 135-136)

    It is evident that Pope Francis considers the worship of technology to be a significant barrier to further “human development” as it “paralyzes human growth.” Technology is not the only false religion against which the encyclical warns – the cult of self-worship, unbridled capitalism, the glorification of violence, and the revival tent of consumerism are all considered false faiths. They draw adherents in by proffering salvation and prescribing a simple course of action – but instead of allowing their faithful true transcendence they warp their followers into sycophants.

    Yet the particularly nefarious aspect of the religion of technology, in line with the quotation from Fromm, is the way in which it is a faith to which many subscribe without their necessarily being aware of it. This is particularly significant in the way that it links to the encyclical’s larger concern with the environment and with the poor. Those in affluent nations who enjoy the pleasures of high-tech lifestyles – the faithful in the religion of technology – are largely spared the serious downsides of high-technology. Sure, individuals may complain of aching necks, sore thumbs, difficulty sleeping, and a creeping sense of dissatisfaction – but such issues do not tell of the true cost of technology. What often goes unseen by those enjoying their smart phones are the exploitative regimes of mineral extraction, the harsh labor conditions where devices are assembled, and the toxic wreckage of e-waste dumps. Furthermore, insofar as high-tech devices (and the cloud) require large amounts of energy it is worth considering the degree to which high-tech lifestyles contribute to the voracious energy consumption that helps drive climate change. Granted, those who suffer from these technological downsides are generally not the people enjoying the technological devices.

    And though Laudato Si’ may have a particular view of salvation – one need not subscribe to that religion to recognize that the religion of technology is not the faith of the solution.

    But the faith of the problem.

    3. Laudato Si’ as Critique of Technology

    Relatively early in the encyclical, Pope Francis decries how, against the background of “media and the digital world”:

    “the great sages of the past run the risk of going unheard amid the noise and distractions of an information overload.” (Francis, no. 47)

    Reading through Laudato Si’ it becomes fairly apparent who Pope Francis considers many of these “great sages” to be. For the most part Pope Francis cites the encyclicals of his predecessors, declarations from Bishops’ conferences, the Bible, and theologians who are safely ensconced in the Church’s wheelhouse. While such citations certainly help to establish that the ideas being put forth in Laudato Si’ have been circulating in the Catholic Church for some time – Pope Francis’s invocation of “great sages of the past…going unheard” raises a larger question. How much of the encyclical is truly new and how much is a reiteration of older ideas that have gone “unheard”? In fairness, the social critique being advanced by Laudato Si’ may strike many people as novel – particularly in terms of its ethically combative willingness to take on technology – but it may be that the significant thing about Laudato Si’ is not that the message is new, but that the messenger is new. Without wanting to decry or denigrate Laudato Si’ it is worth noting that much of the argument being presented in the document could previously be found in works by thinkers associated with the critique of technology, notably Lewis Mumford and Jacques Ellul. Indeed, the following statement, from Lewis Mumford’s Art and Technics, could have appeared in Laudato Si’ without seeming out of place:

    “We overvalue the technical instrument: the machine has become our main source of magic, and it has given us a false sense of possessing godlike powers. An age that has devaluated all its symbols has turned the machine itself into a universal symbol: a god to be worshiped.” (Mumford, 138 – Art and Technics)

    The critique of technology does not represent a cohesive school of thought – rather it is a tendency within several fields (history and philosophy of technology, STS, media ecology, critical theory) that places particular emphasis on the negative impacts of technology. What many of these thinkers emphasized was the way in which the choice of certain technologies over others winds up having profound impacts upon the shape of a society. Thus, within the critique of technology, it is not a matter of anything so ridiculously reductive as “technology is bad” but of considering what alternative forms technology could take: “democratic technics” (Mumford), “convivial tools” (Illich), “appropriate technology” (Schumacher), “liberatory technology” (Bookchin), and so forth. Yet what is particularly important is the fact that the serious critique of technology was directly tied to a critique of the broader society. And thus, Mumford also wrote extensively about urban planning, architecture and cities – while Ellul wrote as much (perhaps more) about theological issues (Ellul was a devout individual who described himself as a Christian anarchist).

    With the rise of ever more powerful and potentially catastrophic technological systems, many thinkers associated with the critique of technology began issuing dire warnings about the techno-science-wrought danger in which humanity had placed itself. With the appearance of the atomic bomb, humanity had invented a way to potentially bring an end to the whole of the human project. Galled by the way in which technology seemed to be drawing ever more power to itself, Ellul warned of the ascendance of “technique” while Mumford cautioned of the emergence of “the megamachine” with such terms being used to denote not simply technology and machinery but the fusion of techno-science with social, economic and political power – though Pope Francis seems to prefer to use the term “technological paradigm” or “technocratic paradigm” instead of “megamachine.” When Pope Francis writes:

    “The technological paradigm has become so dominant that it would be difficult to do without its resources and even more difficult to utilize them without being dominated by their internal logic.” (Francis, no. 108)

    Or:

    “the new power structures based on the techno-economic paradigm may overwhelm not only our politics but also freedom and justice.” (Francis, no. 53)

    Or:

    “The alliance between the economy and technology ends up sidelining anything unrelated to its immediate interests.” (Francis, no. 54)

    he is making comments squarely in line with Ellul’s observation that:

    “Technical civilization means that our civilization is constructed by technique (makes a part of civilization only that which belongs to technique), for technique (in that everything in this civilization must serve a technical end), and is exclusively technique (in that it excludes whatever is not technique or reduces it to technical forms).” (Ellul, 128 – italics in original)

    A particular sign of the growing dominance of technology, and the techno-utopian thinking that everywhere evangelizes for technology, is the belief that to every problem there is a technological solution. Such wishful thinking about technology as the universal panacea was a tendency highly criticized by thinkers like Mumford and Ellul. Pope Francis chastises the prevalence of this belief at several points, writing:

    “Obstructionist attitudes, even on the part of believers, can range from denial of the problem to indifference, nonchalant resignation or blind confidence in technical solutions.” (Francis, no. 14)

    And the encyclical returns to this, decrying:

    “Technology, which, linked to business interests, is presented as the only way of solving these problems,” (Francis, no. 20)

    There is more than a passing similarity between the above two quotations from Pope Francis’s 2015 encyclical and the following quotation from Lewis Mumford’s book Technics and Civilization (first published in 1934):

    “But the belief that the social dilemmas created by the machine can be solved merely by inventing more machines is today a sign of half-baked thinking which verges close to quackery.” (Mumford, 367)

    At the very least this juxtaposition should help establish that there is nothing new about those in power proclaiming that technology will solve everything, but just the same there is nothing particularly new about forcefully criticizing this unblinking faith in technological solutions. If one wanted to do so it would not be an overly difficult task to comb through Laudato Si’ – particularly “Chapter Three: The Human Roots of the Ecological Crisis” – and find a couple of paragraphs by Mumford, Ellul or another prominent critic of technology in which precisely the same thing is being said. After all, if one were to try to capture the essence of the critique of technology in two sentences, one could do significantly worse than the following lines from Laudato Si’:

    “We have to accept that technological products are not neutral, for they create a framework which ends up conditioning lifestyles and shaping social possibilities along the lines dictated by the interests of certain powerful groups. Decisions which may seem purely instrumental are in reality decisions about the kind of society we want to build.” (Francis, no. 107)

    Granted, the line “technological products are not neutral” may have come as something of a disquieting statement to some readers of Laudato Si’ even if it has long been understood by historians of technology. Nevertheless, it is the emphasis placed on the matter of “the kind of society we want to build” that is of particular importance. For the encyclical does not simply lament the state of the technological world, it advances an alternative vision of technology – one which recognizes the tremendous potential of technological advances but sees how this potential goes unfulfilled. Laudato Si’ is a document which is skeptical of the belief that smart phones have made people happier, and it is a text which shows a clear unwillingness to believe that large tech companies are driven by much other than their own interests. The encyclical bears the mark of a writer who believes in a powerful God and that deity’s prophets, but has little time for would-be all powerful corporations and their lust for profits. One of the themes that ran continuously throughout Lewis Mumford’s work was his belief that the “good life” had been overshadowed by the pursuit of the “goods life” – and a similar theme runs through Laudato Si’ wherein the analysis of climate change, the environment, and what is owed to the poor, is couched in a call to reinvigorate the “good life” while recognizing that the “goods life” is a farce. Despite the power of the “technological paradigm,” Pope Francis remains hopeful regarding the power of people, writing:

    “We have the freedom needed to limit and direct technology; we can put it at the service of another type of progress, one which is healthier, more human, more social, more integral. Liberation from the dominant technocratic paradigm does in fact happen sometimes, for example, when cooperatives of small producers adopt less polluting methods of production, and opt for a non-consumerist model of life, recreation and community. Or when technology is directed primarily to resolving people’s concrete problems, truly helping them live with more dignity and less suffering.” (Francis, no. 112)

    In the above quotation, what Pope Francis is arguing for is the need for, to use Mumford’s terminology, “democratic technics” to replace “authoritarian technics.” Or, to use Ivan Illich’s terms (and Illich was himself a Catholic priest) the emergence of a “convivial society” centered around “convivial tools.” Granted, as is perhaps not particularly surprising for a call to action, Pope Francis tends to be rather optimistic about the prospects individuals have for limiting and directing technology. For, one of the great fears shared amongst numerous critics of technology was the belief that the concentration of power in “technique” or “the megamachine” or the “technological paradigm” gradually eliminated the freedom to limit or direct it. That potential alternatives emerged was clear, but such paths were quickly incorporated back into the “technological paradigm.” As Ellul observed:

    “To be in technical equilibrium, man cannot live by any but the technical reality, and he cannot escape from the social aspect of things which technique designs for him. And the more his needs are accounted for, the more he is integrated into the technical matrix.” (Ellul, 224)

    In other words, “technique” gradually eliminates the alternatives to itself. To live in a society shaped by such forces requires an individual to submit to those forces as well. What Laudato Si’ almost desperately seeks to claim, to the contrary, is that it is not too late, that people still have the ability “to limit and direct technology” provided they tear themselves away from their high-tech hallucinations. And this earnest belief is the hopeful core of the encyclical.

    Ethically impassioned books and articles decrying what a high-consumption lifestyle wreaks upon the planet and which exhort people to think of those who do not share in the thrill of technological decadence are not difficult to come by. And thus, the most radical and striking aspect of Laudato Si’ may be the sections devoted to technology. For what the encyclical does so impressively is that it expressly links environmental destruction and the neglect of the poor with the religious allegiance to high-tech devices. Numerous books and articles appear on a regular basis lamenting the current state of the technological world – and yet too often the authors of such texts seem terrified of being labeled “anti-technology.” Therefore, the authors tie themselves in knots trying to stake out a position that is not evangelizing for technology but at the same time they refuse to become heretics to the religion of technology – and as a result they easily become the permitted voices of dissent who only seem to empower the evangels as they conduct the debate on the terms of technological society. They try to reform the religion of technology instead of recognizing that it is a faith premised upon worshiping a false god. After all, one is permitted to say that Google is getting too big, that the Apple Watch is unnecessary, and that the Internet should be called “the surveillance mall” – but to say:

    “There is a growing awareness that scientific and technological progress cannot be equated with the progress of humanity and history, a growing sense that the way to a better future lies elsewhere.” (Francis, no. 113)

    Well…one rarely hears such arguments today, precisely because the dominant ideology of our day places ample faith in equating “scientific and technological progress” with progress, as such. Granted, that was the type of argument being made by the likes of Mumford and Ellul – though the present predicament makes it woefully evident that too few heeded their warnings. Indeed, a leitmotif that can be detected amongst the works of many critics of technology is a desire to be proved wrong, as Mumford wrote:

    “I would die happy if I knew that on my tombstone could be written these words, ‘This man was an absolute fool. None of the disastrous things that he reluctantly predicted ever came to pass!’ Yes: then I could die happy.” (Mumford, 528 – My Works and Days)

    Yet to read over Mumford’s predictions in the present day is to understand why those words are not carved into his tombstone – for Mumford was not an “absolute fool,” he was acutely prescient. Though, alas, the likes of Mumford and Ellul too easily number amongst the ranks of “the great sages of the past” who, in Pope Francis’s words, “run the risk of going unheard amid the noise and distractions of an information overload.”

    Despite the issues that various individuals will certainly have with Laudato Si’ – ranging from its stance towards women to its religious tonality – the element that is likely to disquiet the largest group is its serious critique of technology. Thus, it is somewhat amusing to consider the number of articles that have been penned about the encyclical which focus on the warnings about climate change but say little about Pope Francis’s comments about the danger of the “technological paradigm.” For the encyclical commits a profound act of heresy against the contemporary religion of technology – it dares to suggest that we have fallen for the PR spin about the devices in our pockets, it asks us to consider if these devices are truly filling an existential void or if they are simply distracting us from having to think about this absence, and the encyclical reminds us that we need not be passive consumers of technology. These arguments about technology are not new, and it is not new to make them in ethically rich or religiously loaded language; however, these are arguments which are verboten in contemporary discourse about technology. Alas, those who make such claims are regularly derided as “Luddites” or “NIMBYs” and banished to the fringes. And yet the historic Luddites were simply workers who felt they had the freedom “to limit and direct technology,” and as anybody who knows about e-waste can attest when people in affluent nations say “Not In My Back Yard” the toxic refuse simply winds up in somebody else’s back yard. Pope Francis writes that today:

    “It has become countercultural to choose a lifestyle whose goals are even partly independent of technology, of its costs and its power to globalize and make us all the same.” (Francis, no. 108)

    And yet, what Laudato Si’ may represent is an important turning point in discussions around technology, and a vital opportunity for a serious critique of technology to reemerge. For what Laudato Si’ does is advocate for a new cultural paradigm based upon harnessing technology as a tool instead of as an absolute. Furthermore, the inclusion of such a serious critique of technology in a widely discussed (and hopefully widely read) encyclical represents a point at which rigorously critiquing technology may be able to become less “countercultural.” Laudato Si’ is a profoundly pro-culture document insofar as it seeks to preserve human culture from being destroyed by the greed that is ruining the planet. It is a rare text that has the audacity to state: “you do not need that, and your desire for it is bad for you and bad for the planet.”

    Laudato Si’ is a piece of fierce social criticism, and like numerous works from the critique of technology, it is a text that recognizes that one cannot truly claim to critique a society without being willing to turn an equally critical gaze towards the way that society creates and uses technology. The critique of technology is not new, but it has been sorely underrepresented in contemporary thinking around technology. It has been cast as the province of outdated doom mongers, but as Pope Francis demonstrates, the critique of technology remains as vital and timely as ever.

    Too often of late, discussions about technology are conducted through rose-colored glasses or, worse, virtual reality headsets – Laudato Si’ dares to actually look at technology.

    And to demand that others do the same.

    4. The Bright Mountain

    The end of the world is easy.

    All it requires of us is that we do nothing, and what can be simpler than doing nothing? Besides, popular culture has made us quite comfortable with the imagery of dystopian states and collapsing cities. And yet the question to ask of every piece of dystopian fiction is “what did the world that paved the way for this terrible one look like?” To which the follow-up question should be: “did it look just like ours?” And to this, yet another follow-up question needs to be asked: “why didn’t people do something?” In a book bearing the uplifting title The Collapse of Western Civilization, Naomi Oreskes and Erik Conway analyze present inaction as if from the future, and write:

    “the people of Western civilization knew what was happening to them but were unable to stop it. Indeed, the most startling aspect of this story is just how much these people knew, and how unable they were to act upon what they knew.” (Oreskes and Conway, 1-2)

    This speaks to the fatalistic belief that despite what we know, things are not going to change, or that if change comes it will already be too late. One of the most interesting texts to emerge in recent years in the context of continually ignored environmental warnings is a slim volume titled Uncivilisation: The Dark Mountain Manifesto. It is the foundational text of a group of writers, artists, activists, and others that dares to take seriously the notion that we are not going to change in time. As the manifesto’s authors write:

    “Secretly, we all think we are doomed: even the politicians think this; even the environmentalists. Some of us deal with it by going shopping. Some deal with it by hoping it is true. Some give up in despair. Some work frantically to try and fend off the coming storm.” (Hine and Kingsnorth, 9)

    But the point is that change is coming – whether we believe it or not, and whether we want it or not. But what is one to do? The desire to retreat from the cacophony of modern society is nothing new and can easily sow the fields in which reactionary ideologies grow. Particularly problematic is that the rejection of the modern world often entails a sleight of hand whereby those in affluent nations are able to shirk their responsibility to the world’s poor even as they walk somberly, flagellating themselves into the foothills. Apocalyptic romanticism, whether it be of the accelerationist or primitivist variety, paints an evocative image of the world of today collapsing so that a new world can emerge – but what Laudato Si’ counters with is a morally impassioned cry to think of the billions of people who will suffer and die. Think of those for whom fleeing to the foothills is not an option. We do not need to take up residence in the woods like latter-day hermetic acolytes of Francis of Assisi; rather, we need to take that spirit and live it wherever we find ourselves.

    True, the easy retort to the claim “secretly, we all think we are doomed” is to reply “I do not think we are doomed, secretly or openly” – but to read climatologists’ predictions and then to watch politicians grouse, whilst mining companies seek to extract even more fossil fuels, is to hear that “secret” voice grow louder. People have always been predicting the end of the world, and here we still are, which leads many to simply shrug off dire concerns. Furthermore, many worry that putting too much emphasis on woebegone premonitions overwhelms people and leaves them unable and unwilling to act. Perhaps this is why Al Gore’s film An Inconvenient Truth concludes not by telling people they must be willing to fundamentally alter their high-tech/high-consumption lifestyles but instead simply tells them to recycle. In Laudato Si’ Pope Francis writes:

    “Doomsday predictions can no longer be met with irony or disdain. We may well be leaving to coming generations debris, desolation and filth.” (Francis, no. 161)

    Those lines, particularly the first of the two, should be the twenty-first-century replacement for “Keep Calm and Carry On.” For what Laudato Si’ makes clear is that now is not the time to “Keep Calm” but to get very busy, and it is a text that knows that if we “Carry On” then we are skipping aimlessly towards the cliff’s edge. And yet one of the elements of the encyclical that needs to be highlighted is that it is a document that does not look hopefully towards a coming apocalypse. In the encyclical, environmental collapse is not seen as evidence that biblical preconditions for Armageddon are being fulfilled. The sorry state of the planet is not the result of God’s plan but is instead the result of humanity’s inability to plan. The problem is not evil, for as Simone Weil wrote:

    “It is not good which evil violates, for good is inviolate: only a degraded good can be violated.” (Weil, Gravity and Grace, 70)

    It is that the good of which people are capable is rarely the good which people achieve. Even as possible tools for building the good life – such as technology – are degraded and mistaken for the good life. And thus the good is wasted, though it has not been destroyed.

    Throughout Laudato Si’, Pope Francis praises the merits of an ascetic life. And though the encyclical features numerous references to Saint Francis of Assisi, the argument is not that we must all abandon our homes to seek out new sanctuary in nature; instead, the need is to learn from the sense of love and wonder with which Saint Francis approached nature. Complete withdrawal is not an option; to do so would be to shirk our responsibility – we live in this world and we bear responsibility for it and for other people. In the encyclical’s estimation, those living in affluent nations cannot seek to quietly slip from the scene, nor can they claim they are doing enough by bringing their own bags to the grocery store. Rather, responsibility entails recognizing that the lifestyles of affluent nations have helped sow misery in many parts of the world – it is unethical for us to try to save our own cities without realizing the part we have played in ruining the cities of others.

    Pope Francis writes – and here an entire section shall be quoted:

    “Many things have to change course, but it is we human beings above all who need to change. We lack an awareness of our common origin, of our mutual belonging, and of a future to be shared with everyone. This basic awareness would enable the development of new conviction, attitudes and forms of life. A great cultural, spiritual and educational challenge stands before us, and it will demand that we set out on the long path of renewal.” (Francis, no. 202)

    Laudato Si’ does not suggest that we can escape from our problems, that we can withdraw, or that we can “keep calm and carry on.” And though the encyclical is not a manifesto, if it were one it could possibly be called “The Bright Mountain Manifesto.” For what Laudato Si’ reminds its readers time and time again is that even though we face great challenges it remains within our power to address them, though we must act soon and decisively if we are to effect a change. We do not need to wander towards a mystery-shrouded mountain in the distance, but to work to make the peaks near us glisten – it is not a matter of retreating from the world but of rebuilding it in a way that provides for all. Nobody needs to go hungry, our cities can be beautiful, our lifestyles can be fulfilling, our tools can be made to serve us as opposed to our being made to serve tools, people can recognize the immense debt they owe to each other – and working together we can make this a better world.

    Doing so will be difficult. It will require significant changes.

    But Laudato Si’ is a document that believes people can still accomplish this.

    In the end, Laudato Si’ is less about having faith in God than it is about having faith in people.

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck, on which an earlier version of this post first appeared. He is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay
    _____

    Works Cited

    Pope Francis. Encyclical Letter Laudato Si’ of the Holy Father Francis on Care For Our Common Home. Vatican Press, 2015. [Note – the numbers in all citations from this document refer to the section number, not the page number]

    Ellul, Jacques. The Technological Society. Vintage Books, 1964.

    Fromm, Erich. To Have or To Be? Harper & Row, Publishers, 1976.

    Hine, Dougald and Kingsnorth, Paul. Uncivilisation: The Dark Mountain Manifesto. The Dark Mountain Project, 2013.

    Mumford, Lewis. My Works and Days: A Personal Chronicle. Harcourt, Brace, Jovanovich, 1979.

    Mumford, Lewis. Art and Technics. Columbia University Press, 2000.

    Mumford, Lewis. Technics and Civilization. University of Chicago Press, 2010.

    Noble, David. The Religion of Technology. Penguin, 1999.

    Oreskes, Naomi and Conway, Erik M. The Collapse of Western Civilization: A View from the Future. Columbia University Press, 2014.

    Weil, Simone. The Need for Roots. Routledge Classics, 2002.

    Weil, Simone. Gravity and Grace. Routledge Classics, 2002. (the quote at the beginning of this piece is found on page 139 of this book)

  • Is the Network a Brain?

    Is the Network a Brain?

    a review of Andrew Pickering, The Cybernetic Brain: Sketches of Another Future (University of Chicago Press, 2011)
    by Jonathan Goodwin
    ~

    Evgeny Morozov’s recent New Yorker article about Project Cybersyn in Allende’s Chile caused some controversy when critics accused Morozov of not fully acknowledging his sources. One of those sources was sociologist of science Andrew Pickering’s The Cybernetic Brain. Morozov is quoted as finding Pickering’s book “awful.” It’s unlikely that Morozov meant “awful” in the sense of “awe-inspiring,” but that was closer to my reaction after reading Pickering’s 500+ page work on the British tradition in cybernetics. This tradition was less militarist and more artistic, among other qualities, in Pickering’s account, than is popularly understood. I found myself greatly intrigued—if not awed—by the alternate future that his subtitle and final chapter announce. Cybernetics is now a largely forgotten dead-end in science. And the British tradition that Pickering describes had relatively little influence within cybernetics itself. So what is important about it now, and what is the nature of this other future that Pickering sketches?

    The major figures of this book, which proceeds with overviews of their careers, views, and accomplishments, are Grey Walter, Ross Ashby, Gregory Bateson, R. D. Laing, Stafford Beer, and Gordon Pask. Stuart Kauffman’s and Stephen Wolfram’s work on complexity theory also makes an appearance.[1] Laing and Bateson’s relevance may not be immediately clear. Pickering’s interest in them derives from their extension of cybernetic ideas to the emerging technologies of the self in the 1960s. Both Bateson and Laing approached schizophrenia as an adaptation to the increasing “double-binds” of Western culture, and both looked to Eastern spiritual traditions and chemical methods of consciousness-alteration as potential treatments. The Bateson and Laing material makes the most direct reference to the connection between the cybernetic tradition and the “Californian Ideology” that animates much Silicon Valley thinking. Stewart Brand was influenced by Bateson’s Steps to an Ecology of Mind (183), for example. Pickering identifies Northern California as the site where cybernetics migrated into the counterculture. As a technology of control, it is arguable that this countercultural migration has become part of the ruling ideology of the present moment. Pickering recognizes this but seems to concede that the inherent topicality would detract from the focus of his work. It is a facet that would be of interest to the readers of this “Digital Studies” section of The b2 Review, however, and I will thus return to it at the end of this review.

    Pickering’s path to Bateson and Laing originates with Grey Walter’s and Ross Ashby’s pursuit of cybernetic models of the brain. Computational models of the brain, though originally informed by cybernetic research, quickly replaced it in Pickering’s account (62). He asks why computational models of the brain quickly gathered so much cultural interest. Rodney Brooks’s robots, with their more embodied approach, Pickering argues, are in the tradition of Walter’s tortoises and outside the symbolic tradition of artificial intelligence. I find it noteworthy that the neurological underpinnings of early cybernetics were so strongly influenced by behaviorism. Computationalist approaches, associated by Pickering with the establishment or “royal” science, here, were intellectually formed by an attack on behaviorism. Pickering even addresses this point obliquely, when he wonders why literary scholars had not noticed that the octopus in Gravity’s Rainbow was apparently named “Grigori” in homage to Gregory Bateson (439n13).[2] I think one reason this hasn’t been noticed is that it’s much more likely that the name was random but for its Slavic form, which is clearly in the same pattern of references to Russian behaviorist psychology that informs Pynchon’s novel. An offshoot of behaviorism inspiring a countercultural movement devoted to freedom and experimentation seems peculiar.

    One of Pickering’s key insights into this alternate tradition of cybernetics is that its science is performative. Rather than being as theory-laden as are the strictly computationalist approaches, cybernetic science often studied complex systems as assemblages whose interactions generated novel insights. Contrast this epistemology to what critics point to as the frequent invocation of the Duhem-Quine thesis by Noam Chomsky.[3] For Pickering, Ross Ashby’s version of cybernetics was a “supremely general and protean science” (147). As it developed, the brain lost its central place and cybernetics became a “freestanding general science” (147). As I mentioned, the chapter on Ashby closes with a consideration of the complexity science of Stuart Kauffman and Stephen Wolfram. That Kauffman and Wolfram largely have worked outside mainstream academic institutions is important for Pickering.[4] Christopher Alexander’s pattern language in architecture is a third example. Pickering mentions that Alexander’s concept was influential in some areas of computer science; the notion of “object-oriented programming” is sometimes considered to have been influenced by Alexander’s ideas.

    I mention this connection because many of the alternate traditions in cybernetics have become mainstream influences in contemporary digital culture. It is difficult to imagine Laing and Bateson’s alternative therapeutic ideas having any resonance in that culture, however. The doctrine that “selves are endlessly complex and endlessly explorable” (211) is sometimes proposed as something the internet facilitates, but the inevitable result of anonymity and pseudonymity in internet discourse is the enframing of hierarchical relations. I realize this point may sound controversial to those with a more benign or optimistic view of digital culture. That this countercultural strand of cybernetic practice has clear parallels with much digital libertarian rhetoric is hard to dispute. Again, Pickering is not concerned in the book with tracing these contemporary parallels. I mention them because of my own interest and this venue’s presumed interest in the subject.

    The progression that begins with some variety of conventional rationalism, extends through a career in cybernetics, and ends in some variety of mysticism is seen with almost all of the figures that Pickering profiles in The Cybernetic Brain. Perhaps the clearest example—and most fascinating in general—is that of Stafford Beer. Philip Mirowski’s review of Pickering’s book refers to Beer as “a slightly wackier Herbert Simon.” Pickering enjoys recounting the adventures of the wizard of Prang, a work that Beer composed after he had moved to a remote Welsh village and renounced many of the world’s pleasures. Beer’s involvement in Project Cybersyn makes him perhaps the best-known of the figures profiled in this book.[5] What perhaps fascinates Pickering more than anything else in Beer’s work is the concept of viability. From early in his career, Beer advocated for upwardly viable management strategies. The firm would not need a brain, in his model: “it would react to changing circumstances; it would grow and evolve like an organism or species, all without any human intervention at all” (225). Mirowski’s review compares Beer to Friedrich Hayek and accuses Pickering of refusing to engage with this seemingly obvious intellectual affinity.[6] Beer’s intuitions in this area led him to experiment with biological and ecological computing; Pickering surmises that Douglas Adams’s superintelligent mice derived from Beer’s murine experiments in this area (241).

    In a review of a recent translation of Stanislaw Lem’s Summa Technologiae, Pickering calls the notion that natural adaptive systems are like brains, and can be utilized for intelligence amplification, the most “amazing idea in the history of cybernetics” (247).[7] Despite its association with the dreaded “synergy” (the original “syn” of Project Cybersyn), Beer’s viable system model never became a management fad (256). Alexander Galloway has recently written here about the “reticular fallacy,” the notion that de-centralized forms of organization are necessarily less repressive than are centralized or hierarchical forms. Beer’s viable system model proposes an emergent and non-hierarchical management system that would increase the general “eudemony” (general well-being, another of Beer’s not-quite original neologisms [272]). Beer’s turn towards Tantric mysticism seems somehow inevitable in Pickering’s narrative of his career. The syntegric icosahedron, one of Beer’s late baroque flourishes, reminded me quite a bit of a Paul Laffoley painting. Syntegration as a concept takes reticularity to a level of mysticism rarely achieved by digital utopians. Pickering concludes the chapter on Beer with a discussion of his influence on Brian Eno’s ambient music.

    Laffoley, "The Orgone Motor"
    Paul Laffoley, “The Orgone Motor” (1981). Image source: paullaffoley.net.

    The discussion of Eno chides him for not reading Gordon Pask’s explicitly aesthetic cybernetics (308). Pask is the final cybernetician of Pickering’s study and perhaps the most eccentric. Pickering describes him as a model for Patrick Troughton’s Dr. Who (475n3), and his synaesthetic work in cybernetics, with projects like the Musicolor, is explicitly theatrical. A theatrical performance that directly incorporates audience feedback into the production, not just at the level of applause or hiss, but in audience interest in a particular character—a kind of choose-your-own-adventure theater—was planned with Joan Littlewood (348-49). Pask’s work in interface design has been identified as an influence on hypertext (464n17). A great deal of the chapter on Pask involves his influence on British countercultural arts and architecture movements in the 1960s. Mirowski’s review briefly notes that even the anti-establishment Gordon Pask was funded by the Office of Naval Research for fifteen years (194). Mirowski also accuses Pickering of ignoring the computer as the emblematic cultural artifact of the cybernetic worldview (195). Pask is the strongest example offered of an alternate future of computation and social organization, but it is difficult to imagine his cybernetic present.

    The final chapter of Pickering’s book is entitled “Sketches of Another Future.” What is called “maker culture” combined with the “internet of things” might lead some prognosticators to imagine an increasingly cybernetic digital future. Cybernetic, that is, not in the sense of increasing what Mirowski refers to as the neoliberal “background noise of modern culture” but as a “challenge to the hegemony of modernity” (393). Before reading Pickering’s book, I would have regarded such a prediction with skepticism. I still do, but Pickering has argued that an alternate—and more optimistic—perspective is worth taking seriously.

    _____

    Jonathan Goodwin is Associate Professor of English at the University of Louisiana, Lafayette. He is working on a book about cultural representations of statistics and probability in the twentieth century.

    Back to the essay

    _____

    [1] Wolfram was born in England, though he has lived in the United States since the 1970s. Pickering taught at the University of Illinois while this book was being written, and he mentions having several interviews with Wolfram, whose company Wolfram Research is based in Champaign, Illinois (457n73). Pickering’s discussion of Wolfram’s A New Kind of Science is largely neutral; for a more skeptical view, see Cosma Shalizi’s review.

    [2] Bateson experimented with octopuses, as Pickering describes. Whether Pynchon knew about this, however, remains doubtful. Pickering’s note may also be somewhat facetious.

    [3] See the interview with George Lakoff in Ideology and Linguistic Theory: Noam Chomsky and the Deep Structure Debates, ed. Geoffrey J. Huck and John A. Goldsmith (New York: Routledge, 1995), p. 115. Lakoff’s account of Chomsky’s philosophical justification for his linguistic theories is tendentious; I mention it here because of the strong contrast, even in caricature, with the performative quality of the cybernetic research Pickering describes.

    [4] Though it is difficult to think of the Santa Fe Institute this way now.

    [5] For a detailed cultural history of Project Cybersyn, see Eden Medina, Cybernetic Revolutionaries: Technology and Politics in Allende’s Chile (MIT Press, 2011). Medina notes that Beer formed the word “algedonic” from two words meaning “pain” and “pleasure,” but the OED notes an example in the same sense from 1894. This citation does not rule out independent coinage, of course. Curiously enough, John Fowles uses the term in The Magus (1966), where it could have easily been derived from Beer.

    [6] Hayek’s name appears neither in the index nor the reference list. It does seem a curious omission in the broader intellectual context of cybernetics.

    [7] Though there is a reference to Lem’s fiction in an endnote (427n25), Summa Technologiae, a visionary exploration of cybernetic philosophy dating from the early 1960s, does not appear in Pickering’s work. A complete English translation only recently appeared, and I know of no evidence that Pickering’s principal figures were influenced by Lem at all. The book, as Pickering’s review acknowledges, is astonishingly prescient and highly recommended for anyone interested in the culture of cybernetics.

  • What Drives Automation?

    What Drives Automation?

    a review of Nicholas Carr, The Glass Cage: Automation and Us (W.W. Norton, 2014)
    by Mike Bulajewski
    ~

    Debates about digital technology are often presented in terms of stark polar opposites: on one side, cyber-utopians who champion the new and the cutting edge, and on the other, cyber-skeptics who hold on to obsolete technology. The framing is one-dimensional in the general sense that it is superficial, but also in a more precise and mathematical sense that it implicitly treats the development of technology as linear. Relative to the present, there are only two possible positions and two possible directions to move; one can be either for or against, ahead or behind.[1]

    Although often invoked as a prelude to transcending the division and offering a balanced assessment, in describing the dispute in these pro or con terms one has already betrayed one’s orientation, tilting the field against the critical voice by assigning it an untenable position. Criticizing a new technology is misconstrued as a simple defense of the old technology or of no technology, which turns legitimate criticism into mere conservative fustiness, a refusal to adapt and a failure to accept change.

    Few critics of technology match these descriptions, and those who do, like the anarcho-primitivists who claim to be horrified by contemporary technology, nonetheless accede to the basic framework set by technological apologists. The two sides disagree only on the preferred direction of travel, making this brand of criticism more pro-technology than it first appears. One should not forget that the high-tech futurism of Silicon Valley is supplemented by the balancing counterweight of countercultural primitivism, with Burning Man expeditions, technology-free Waldorf schools for children of tech workers, spouses who embrace premodern New Age beliefs, romantic agrarianism, and restorative digital detox retreats featuring yoga and meditation. The diametric opposition between pro- and anti-technology is internal to the technology industry, perhaps a symptom of the repression of genuine debate about the merits of its products.

    ***

    Nicholas Carr’s most recent book, The Glass Cage: Automation and Us, is a critique of the use of automation and a warning of its human consequences, but to conclude, as some reviewers have, that he is against automation or against technology as such is to fall prey to this one-dimensional fallacy.[2]

    The book considers the use of automation in areas like medicine, architecture, finance, manufacturing and law, but it begins with an example that’s closer to home for most of us: driving a car. Transportation and wayfinding are minor themes throughout the book, and with Google and large automobile manufacturers promising to put self-driving cars on the street within a decade, the impact of automation in this area may soon be felt in our daily lives like never before. Early in the book, we are introduced to problems that human factors engineers have discovered in airline autopilot systems, problems that may be forewarnings of a future of unchecked automation in transportation.

    Carr discusses automation bias—the tendency for operators to assume the system is correct and external signals that contradict it are wrong—and the closely related problem of automation complacency, which occurs when operators assume the system is infallible and so abandon their supervisory role. These problems have been linked to major air disasters and are behind less-catastrophic events like oblivious drivers blindly following their navigation systems into nearby lakes or down flights of stairs.

    The chapter dedicated to deskilling is certain to raise the ire of skeptical readers because it begins with an account of the negative impact of GPS technology on Inuit hunters who live in the remote northern reaches of Canada. As GPS devices proliferated, hunters lost what a tribal elder describes as “the wisdom and knowledge of the Inuit”: premodern wayfinding methods that rely on natural phenomena like wind, stars, tides and snowdrifts to navigate. Inuit wayfinding skills are truly impressive. The anthropologist Claudio Aporta reports traveling with a hunter across twenty square kilometers of flat featureless land as he located seven fox traps that he had never seen before, set by his uncle twenty-five years prior. These talents have been eroded as Inuit hunters have adopted GPS devices that seem to do the job equally well, but have the unexpected side effect of increasing injuries and deaths as hunters succumb to equipment malfunctions and the twin perils of automation complacency and bias.

    Laboring under the misconceptions of the one-dimensional fallacy, it would be natural to take this as a smoking gun of Carr’s alleged anti-technology perspective and privileging of the premodern, but the closing sentences of the chapter point us away from that conclusion:

    We ignore the ways that software programs and automated systems might be reconfigured so as not to weaken our grasp on the world but to strengthen it. For, as human factors researchers and other experts on automation have found, there are ways to break the glass cage without losing the many benefits computers grant us. (151)

    These words segue into the following chapter, where Carr identifies the dominant design philosophy behind automation technologies that inadvertently produce the problems he described earlier: technology-centered automation. This approach to design is distrustful of humans, perhaps even misanthropic. It views us as weak, inefficient, unreliable and error-prone, and seeks to minimize our involvement in the work to be done. It institutes a division of labor between human and machine that gives the bulk of the work over to the machine, only seeking human input in anomalous situations. This philosophy is behind modern autopilot systems that hand off control to human pilots for only a few minutes in a flight.

    The fundamental argument of the book is that this design philosophy can lead to undesirable consequences. Carr seeks an alternative he calls human-centered automation, an approach that ensures the human operator remains engaged and alert. Autopilot systems designed with this philosophy might return manual control to the pilots at irregular intervals to ensure they remain vigilant and practice their flying skills. They could provide tactile feedback of their operations so that pilots are involved in a visceral way rather than passively monitoring screens. Decision support systems like those used in healthcare could take a secondary role of reviewing and critiquing a decision made by a doctor rather than the other way around.

    The Glass Cage calls for a fundamental shift in how we understand error. Under the current regime, an error is an inefficiency or an inconvenience, to be avoided at all costs. As defined by Carr, a human-centered approach to design treats error differently, viewing it as an opportunity for learning. He illustrates this with a personal experience of repeatedly failing a difficult mission in the video game Red Dead Redemption, and points to the satisfaction of finally winning a difficult game as an example of what is lost when technology is designed to be too easy. He offers video games as a model for the kinds of technologies he would like to see: tools that engage us in difficult challenges, that encourage us to develop expertise and to experience flow states.

    But Carr has an idiosyncratic definition of human-centered design, which becomes apparent when he counterposes his position against that of the prominent design consultant Peter Merholz. Echoing premises almost universally adopted by designers, Merholz calls for simple, frictionless interfaces and devices that don’t require a great deal of skill, memorization or effort to operate. Carr objects that this eliminates learning, skill building and mental engagement—perhaps a valid criticism, but it’s strange to suggest that it reflects a misanthropic technology-centered approach.

    A frequently invoked maxim of human-centered design is that technology should adapt to people, rather than people adapting to technology. In practice, the primary consideration is helping the user achieve his or her goal as efficiently and effectively as possible, removing unnecessary obstacles and delays that stand in the way. Carr argues for the value of challenges, difficulties and demands placed on users to learn and hone skills, all of which fall under the prohibited category of people adapting to technology.

    In his example of playing Red Dead Redemption, Carr prizes the repeated failure and frustration before finally succeeding at the game. From the lens of human-centered design, that kind of experience is seen as a very serious problem that should be eliminated quickly, which is probably why this kind of design is rarely employed at game studios. In fact, it doesn’t really make sense to think of a game player as having a goal, at least not from the traditional human-centered standpoint. The driver of a car has a goal: to get from point A to point B; a Facebook user wants to share pictures with friends; the user of a word processor wants to write a document; and so on. As designers, we want to make these tasks easy, efficient and frictionless. The most obvious way of framing game play is to say that the player’s goal is to complete the game. We would then proceed to remove all obstacles, frustrations, challenges and opportunities for error that stand in the way so that they may accomplish this goal more efficiently, and then there would be nothing left for them to do. We would have ruined the game.

    This is not necessarily the result of a misanthropic preference for technology over humanity, though it may be. It is also the likely outcome of a perfectly sincere and humanistic belief that we shouldn’t inconvenience the user with difficulties that stand in the way of their goal. As human factors researcher David Woods puts it, “The road to technology-centered systems is paved with human-centered intentions,”[3] a phrasing which suggests that these two philosophies aren’t quite as distinct as Carr would have us believe.

    Carr’s vision of human-centered design differs markedly from contemporary design practice, which stresses convenience, simplicity, efficiency for the user and ease of use. In calling for less simplicity and convenience, he is in effect critical of really existing human-centeredness, and that troubles any reading of The Glass Cage that views it as a book about restoring our humanity in a world driven mad by machines.

    It might be better described as a book about restoring one conception of humanity in a world driven mad by another. It is possible to argue that the difference between the two appears in psychoanalytic theory as the difference between drive and desire. The user engages with a technology in order to achieve a goal because they perceive themselves as lacking something. Through the use of this tool, they believe they can regain it and fill in this lack. It follows that designers ought to help the user achieve their goal—to reach their desire—as quickly and efficiently as possible because this will satisfy them and make them happy.

    But the insight of psychoanalysis is that lack is ontological and irreducible: it cannot be filled in any permanent way, because any concrete lack we experience is in fact metonymic for a constitutive lack of being. As a result, as desiring subjects we are caught in an endless loop of seeking out the object of desire, feeling disappointed when we find it because it didn’t fulfill our fantasies, and then finding a new object to chase. The alternative is to shift from desire to drive, turning this failure into a triumph. Slavoj Žižek describes drive as follows: “the very failure to reach its goal, the repetition of this failure, the endless circulation around the object, generates a satisfaction of its own.”[4]

    This satisfaction is perhaps what Carr aims at when he celebrates the frustrations and challenges of video games and of work in general. That video games can’t be made more efficient without ruining them indicates that what players really want is for their goal to be thwarted, evoking the psychoanalytic maxim that summarizes the difference between desire and drive: from the missing/lost object, to loss itself as an object. This point is by no means tangential. Early on, Carr introduces the concept of miswanting, defined as the tendency to desire what we don’t really want and what won’t make us happy—in this case, leisure and ease over work and challenge. Psychoanalysis holds that all human wanting (within the register of desire) is miswanting. Through fantasy, we imagine an illusory fullness or completeness of which actual experience always falls short.[5]

    Carr’s revaluation of challenge, effort and, ultimately, dissatisfaction cannot represent a correction of the error of miswanting—a rediscovery of the true source of pleasure and happiness in work. Instead, it radicalizes the error: we should learn to derive a kind of satisfaction from our failure to enjoy. Or, as Carr says in the final chapter of the farmer in Robert Frost’s poem “Mowing,” who is hard at work and yet far from the demands of productivity: “He’s not seeking some greater truth beyond the work. The work is the truth.”

    ***

    Nicholas Carr has a track record of provoking designers to rethink their assumptions. With The Shallows, along with other authors making related arguments, he influenced software developers to create a new class of tools that cut off the internet, eliminate notifications or block social media web sites to help us concentrate. Starting with OS X Lion in 2011, Apple began offering a full screen mode that hides distracting interface elements and background windows from inactive applications.

    What transformative effects could The Glass Cage have on the way software is designed? The book certainly offers compelling reasons to question whether ease of use should always be paramount. Advocates for simplicity are rarely challenged, but they may now find themselves facing unexpected objections. Software could become more challenging and difficult to use—not in the sense of a recalcitrant WiFi router that emits incomprehensible error codes, but more like a game. Designers might draw inspiration from video games, perhaps looking to classics like the first level of Super Mario Brothers, a masterpiece of level design that teaches the fundamental rules of the game without ever requiring the player to read the manual or step through a tutorial.

    Everywhere that automation now reigns, new possibilities announce themselves. A spell checker might stop to teach spelling rules, or make a game of letting the user take a shot at correcting the mistakes it has detected. What if there were a GPS navigation device that enhanced our sense of spatial awareness rather than eroding it, one that directed our attention to the road rather than letting us tune out? Could we build an app that helps drivers maintain their skills by challenging them to adopt safer and more fuel-efficient driving techniques?
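    To make the spell-checker idea concrete: instead of silently auto-correcting, such a tool would flag a misspelling and invite the user to attempt the fix, confirming a right answer or offering a hint for a wrong one. The following is a purely illustrative sketch of that interaction—nothing in it comes from Carr’s book, and the tiny word list, correction table and function name are invented here for the example.

    ```python
    # Illustrative sketch of a "spell checker as game": the checker does not
    # fix mistakes for the user; it asks the user to try, then gives feedback.
    # DICTIONARY and CORRECTIONS are invented stand-ins for a real lexicon.

    DICTIONARY = {"receive", "separate", "definitely", "the", "words"}
    CORRECTIONS = {
        "recieve": "receive",
        "seperate": "separate",
        "definately": "definitely",
    }

    def check_word(word, attempt=None):
        """Return (is_correct, feedback).

        If the word is misspelled, score the user's attempted correction
        rather than silently substituting the right spelling.
        """
        if word in DICTIONARY:
            return True, "ok"
        expected = CORRECTIONS.get(word)
        if expected is None:
            return False, "not in dictionary"
        if attempt == expected:
            return False, "right: " + expected
        # Wrong (or no) attempt: teach instead of fixing.
        return False, "hint: starts with " + expected[0]
    ```

    The design choice the sketch illustrates is exactly the one Carr’s argument points toward: the friction of attempting the correction oneself is treated as the product, not as an inefficiency to be engineered away.
    
    
    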

    Carr points out that the preference for easy-to-use technologies that reduce users’ engagement is partly a consequence of economic interests and cost-reduction policies that profit from the deskilling and shrinking of the workforce, and these aren’t dislodged simply by pressing for new design philosophies. But to his credit, Carr has written two best-selling books for the general-interest reader on the fairly obscure topic of human-computer interaction. User experience designers working in the technology industry often face an uphill battle in trying to build human-centered products (however that is defined). When these matters attract public attention and debate, it makes their job a little easier.

    _____

    Mike Bulajewski (@mrteacup) is a user experience designer with a Master’s degree from University of Washington’s Human Centered Design and Engineering program. He writes about technology, psychoanalysis, philosophy, design, ideology & Slavoj Žižek at MrTeacup.org. He has previously written about the Spike Jonze film Her for The b2 Review Digital Studies section.

    Back to the essay

    _____

    [1] Differences between individual technologies are ignored and replaced by the monolithic master category of Technology. Jonah Lehrer’s review of Nicholas Carr’s 2010 book The Shallows in the New York Times exemplifies such thinking. Lehrer finds contradictory evidence against Carr’s argument that the internet is weakening our mental faculties in scientific studies that attribute cognitive improvements to playing video games, a non sequitur which gains meaning only by subsuming these two very different technologies under a single general category of Technology. Evgeny Morozov is one of the sharpest critics of this tendency. Here one is reminded of his retort in his article “Ghosts in the Machine” (2013): “That dentistry has been beneficial to civilization tells us very little about the wonders of data-mining.”

    [2] There are a range of possible causes for this constrictive linear geometry: a tendency to see a progressive narrative of history; a consumerist notion of agency which only allows shoppers to either upgrade or stick with what they have; or the oft-cited binary logic of digital technology. One may speculate about the influence of the popular technology marketing book by Geoffrey A. Moore, Crossing the Chasm (2014) whose titular chasm is the gap between the elite group of innovators and early adopters—the avant-garde—and the recalcitrant masses bringing up the rear who must be persuaded to sign on to their vision.

    [3] David D. Woods and David Tinapple, “W3: Watching Human Factors Watch People at Work,” Proceedings of the 43rd Annual Meeting of the Human Factors and Ergonomics Society (1999).

    [4] Slavoj Žižek, The Parallax View (2006), 63.

    [5] The cultural and political implications of this shift are explored at length in Todd McGowan’s two books The End of Dissatisfaction: Jacques Lacan and the Emerging Society of Enjoyment (2003) and Enjoying What We Don’t Have: The Political Project of Psychoanalysis (2013).