boundary 2

Tag: surveillance

  • Sharrona Pearl — In the Shadow of the Valley (Review of Anna Wiener, Uncanny Valley)

    a review of Anna Wiener, Uncanny Valley: A Memoir (Macmillan, 2020)

    by Sharrona Pearl

    ~

    Uncanny Valley, the latest, very well-publicized memoir of Silicon Valley apostasy, is, for sure, a great read.  Anna Wiener writes beautiful words that become sentences that become beautiful paragraphs and beautiful chapters.  The descriptions are finely wrought, and if not quite cinematic then very, very visceral.  While it is a wry and tense and sometimes stressful story, it’s also exactly what it says it is: a memoir.  It’s the story of her experiences.  It captures a zeitgeist – beautifully, and with nuance and verve and life. It highlights contradictions and complications and confusions: hers, but also of Silicon Valley culture itself.  It muses upon them, and worries them, and worries over them.  But it doesn’t analyze them and it certainly doesn’t solve them, even if you get the sense that Wiener would quite like to do so.  That’s okay.  Solving the problems exposed by Silicon Valley tech culture and tech capitalism is quite a big ask.

    Wiener’s memoir tells the story of her accidental immersion into, and gradual (too gradual?) estrangement from, essentially, Big Tech.  A newly minted graduate from a prestigious small liberal arts college (of course), Wiener was living in Brooklyn (of course) while working as an underpaid assistant in a small literary agency (of course). “Privileged and downwardly mobile,” as she puts it, Wiener was just about getting by with some extra help from her parents, embracing being perpetually broke as she party-hopped and engaged in some light drug use while rolling her eyes at all the IKEA furniture.  In as clear a portrait of Brooklyn as anything could be, Wiener’s friends spent 2013 making sourdough bread near artisan chocolate shops while talking on their ironic flip phones.  World-weary at 24, Wiener decides to shake things up and applies for a job at a Manhattan-based ebook startup.  It’s still about books, she rationalizes, so the startup part is almost beside the point.  Or maybe, because it’s still about books, the tech itself can be used for good.  Of course, neither of these things turns out to be true for either this startup or tech itself.  Wiener quickly discovers (and so do her bosses) that she’s just not the right fit.  So she applies for another tech job instead.  This time in the Bay Area.  Why not?  She’d gotten a heady dose of the optimism and opportunity of startup culture, and they offered her a great salary.  It was a good decision, a smart and responsible and exciting decision, even as she was sad to leave the books behind.  But honestly, she’d done that the second she joined the first startup.  And in a way, the entire memoir is Wiener figuring that out.

    Maybe Wiener’s privilege (alongside generational resources and whiteness) is living in a world where you don’t have to worry about Silicon Valley even as it permeates everything.  She and her friends were being willfully ignorant in Brooklyn; it turns out, as Wiener deftly shows us, you can be willfully ignorant from the heart of Silicon Valley too.  Wiener lands a job at one startup and then, at some point, takes a pay cut to work at another whose culture is a better fit.  “Culture” does a lot of work here to elide sexism, harassment, surveillance, and violation of privacy.  To put it another way: bad stuff is going on around Wiener, at the very companies she works for, and she doesn’t really notice or pay attention…so we shouldn’t either.  Even though she narrates these numerous and terrible violations clearly and explicitly, we don’t exactly clock them because they aren’t a surprise.  We already knew.  We don’t care.  Or we already did the caring part and we’ve moved on.

    If 2013 feels both too early and too late for sourdough (weren’t people making bread in the 1950s because they had to?  And in 2020 because of COVID?) that’s a bit like the book itself.  Surely the moment for Silicon Valley Seduction and Cessation was the early 2000s?  And surely our disillusionment with the surveillance of Big Tech and the loss of privacy didn’t happen until after 2016? (Well, if you pay attention to the timeline in the book, that’s when it happened for Wiener too).  I was there for the bubble in the early aughts.  How could anyone not know what to expect?  Which isn’t to say that this memoir isn’t a gripping and illustrative mise-en-scène.  It’s just that in the era of Coded Bias and Virginia Eubanks and Safiya Noble and Meredith Broussard and Ruha Benjamin and Shoshana Zuboff… didn’t we already know that Big Tech was Bad?  When Wiener has her big reveal in learning from her partner Noah that “we worked in a surveillance company,” it’s more like: well, duh.  (Does it count as whistleblowing if it isn’t a secret?)

    But maybe that wasn’t actually the big reveal of the book.  Maybe the point was that Wiener did already know, she just didn’t quite realize how seductive power is, how pervasive and all-encompassing a culture can be, and how little easy distinctions between good and bad do for us in the totalizing world of tech.  She wants to break that all down for us.  The memoir is kind of Tech Tales for Lit Critics, which is distinct from Tech for Dummies™ because maybe the critics are the smart ones in the end.  The story is for “us”: Wiener’s tribe of smart and idealistic and disaffected humanists.  (Truly us, right dear readers?)  She makes it clear that even as she works alongside and with an army of engineers, there is always an us and them.  (Maybe partly because really, she works for the engineers, and no matter what the company says everyone knows what the hierarchy is.)  The “us” are the skeptics and the “them” are the cult believers except that, as her weird affectation of never naming any tech firms (“an online superstore; a ride-hailing app; a home-sharing platform; the social network everyone loves to hate”) suggests, we are all in the cult in some way, even if we (“we”) – in Wiener’s Brooklyn tribe forever no matter where we live – half-heartedly protest. (For context: I’m not on Facebook and I don’t own a cell phone but PLEASE follow me on Twitter @sharronapearl).

    Wiener uses this “NDA language” throughout the memoir.  At first it’s endearing – imagine a world in which we aren’t constantly name-checking Amazon and Airbnb.  Then it’s addictive – when I was grocery shopping I began to think of my local Sprouts as “a West-Coast transplant fresh produce store.”  Finally, it’s annoying – just say Uber, for heaven’s sake!  But maybe there’s a method to it: these labels make the ubiquity of these platforms all the more clear, and force us to confront just how very integrated into our lives they all are.  We are no different from Wiener; we all benefit from surveillance.

    Sometimes the memoir feels a bit like stunt journalism, the tech take on The Year of Living Biblically or Running the Books.  There’s a sense from the outset that Wiener is thinking “I’ll take the job, and if I hate it I can always write about it.”  And indeed she did, and indeed she does, now working as the tech and start-up correspondent for The New Yorker.  (Read her articles: they’re terrific.)  But that’s not at all a bad thing: she tells her story well, with self-awareness and liveliness and a lot of patience in her sometimes ironic and snarky tone.  It’s exactly what we imagine it to be when we see how the sausage is made: a little gross, a lot upsetting, and still really quite interesting.

    If Wiener feels a bit old before her time (she’s in her mid-twenties during her time in tech, and constantly lamenting how much younger all her bosses are) it’s both a function of Silicon Valley culture and its veneration of young male cowboys, and her own affectations.  Is any Brooklyn millennial ever really young?  Only when it’s too late.  As a non-engineer and a woman, Wiener is quite clear that for Silicon Valley, her time has passed.  Here is when she is at her most relatable in some ways: we have all been outsiders, and certainly many of us would be in that setting.  At the same time, at 44 with three kids, I feel a bit like telling this sweet summer child to take her time.  And that much more will happen to her than already has.  Is that condescending?  The tone brings it out in me.  And maybe I’m also a little jealous: I could do with having made a lot of money in my 20s on the road to disillusionment with power and sexism and privilege and surveillance.  It’s better – maybe – than going down that road without making a lot of money and getting to live in San Francisco.  If, in the end, I’m not quite sure what the point of her big questions is, it’s still a hell of a good story.  I’m waiting for the movie version on “the streaming app that produces original content and doesn’t release its data.”

    _____

    Sharrona Pearl (@SharronaPearl) is a historian and theorist of the body and face.  She has written many articles and two monographs: About Faces: Physiognomy in Nineteenth-Century Britain (Harvard University Press, 2010) and Face/On: Face Transplants and the Ethics of the Other (University of Chicago Press, 2017). She is Associate Professor of Medical Ethics at Drexel University.

  • Moira Weigel — Palantir Goes to the Frankfurt School

    Moira Weigel

    This essay has been peer-reviewed by “The New Extremism” special issue editors (Adrienne Massanari and David Golumbia), and the b2o: An Online Journal editorial board.

    Since the election of Donald Trump, a growing body of research has examined the role of digital technologies in new right wing movements (Lewis 2018; Hawley 2017; Neiwert 2017; Nagle 2017). This article will explore a distinct, but related, subject: new right wing tendencies within the tech industry itself. Our point of entry will be an improbable document: a German-language dissertation submitted by an American to the faculty of social sciences at J. W. Goethe University of Frankfurt in 2002. Entitled Aggression in the Life-World, the dissertation aims to describe the role that aggression plays in social integration, or the set of processes that lead individuals in a given society to feel bound to one another. To that end, it offers a “systematic” reinterpretation of Theodor Adorno’s Jargon of Authenticity (1973). It is of interest primarily because of its author: Alexander C. Karp.[1]

    Karp, as some readers may know, did not pursue a career in academia. Instead, he became the CEO of the powerful and secretive data analytics company, Palantir Technologies. His dissertation has inspired speculation for years, but no journalist or scholar has yet analyzed it. Doing so, I will argue that it offers insight into the intellectual formation of an influential network of actors in and around Silicon Valley, a network articulating ideas and developing business practices that challenge longstanding beliefs about how Silicon Valley thinks and works.

    For decades, a view prevailed that the politics of both digital technologies and most digital technologists were liberal, or neoliberal, depending on how critically the author in question saw them. Liberalism and neoliberalism are complex and contested concepts. But broadly speaking, digital networks have been seen as embodying liberal or neoliberal logics insofar as they treated individuals as abstractly equal, rendering social aspects of embodiment like race and gender irrelevant, and allowing users to engage directly in free expression and free market competition (Kolko and Nakamura, 2000; Chun 2005, 2011, 2016). The ascendance of the Bay Area tech industry over competitors in Boston or in Europe was explained as a result of its early adoption of new forms of industrial organization, built on flexible, short-term contracts and a strong emotional identification between workers and their jobs (Hayes 1989; Saxenian 1994).

    Technologists themselves were said to embrace a new set of values that the British media theorists Richard Barbrook and Andy Cameron dubbed the “Californian Ideology.” This “anti-statist gospel of cybernetic libertarianism… promiscuously combine[d] the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies,” they wrote; it answered the challenge posed by the social liberalism of the New Left by “resurrecting economic liberalism” (1996, 42 & 47). Fred Turner attributed this synthesis to the “New Communalists,” members of the counterculture who “turn[ed] away from questions of gender, race, and class, and toward a rhetoric of individual and small group empowerment” (2006, 97). Nonetheless, he reinforced the broad outlines that Barbrook and Cameron had sketched. Turner further showed that midcentury critiques of mass media, and their alleged tendency to produce authoritarian subjects, inspired faith that digital media could offer salutary alternatives—that “democratic surrounds” would sustain democracy by facilitating the self-formation of democratic subjects (2013). 

    Silicon Valley has long supported Democratic Party candidates in national politics and many tech CEOs still subscribe to the “hybrid” values of the Californian Ideology (Brookman et al. 2019). However, in recent years, tensions and contradictions within Silicon Valley liberalism, particularly between commitments to social and economic liberalism, have become more pronounced. In the wake of the 2016 presidential election, several software engineers emerged as prominent figures on the “alt-right,” and newly visible white nationalist media entrepreneurs reported that they were drawing large audiences from within the tech industry.[2] The leaking of information from internal meetings at Google to digital outlets like Breitbart and Vox Popoli suggests that there was at least some truth to their claims (Tiku 2018). Individual engineers from Google, YouTube, and Facebook have received national media attention after publicly criticizing the liberal culture of their (former) workplaces and in some cases filing lawsuits against them.[3] And Republican politicians, including Trump (2019a, 2019b), have cited these figures as evidence of “liberal bias” at tech firms and the need for stronger government regulation (Trump 2019a; Kantrowitz 2019).

    Karp’s Palantir cofounder (and erstwhile roommate) Peter Thiel looms large in an emerging constellation of technologists, investors, and politicians challenging what they describe as hegemonic social liberalism in Silicon Valley. Thiel has been assembling a network of influential “contrarians” since he founded the Stanford Review as an undergraduate in the late 1980s (Granato 2017). In 2016, Thiel became a highly visible supporter of Donald Trump, speaking at the Republican National Convention, donating $1.25 million in the final weeks of Trump’s campaign for president (Streitfeld 2016a), and serving as his “tech liaison” during the transition period (Streitfeld 2016b). (Earlier in the campaign, Thiel had donated $1 million to the Defeat Crooked Hillary Super PAC backed by Robert Mercer, and overseen by Steve Bannon and Kellyanne Conway; see Green 2017, 200.) Since 2016, he has met with prominent figures associated with the alt-right and “neoreaction”[4] and donated at least $250,000 to support Trump’s reelection in 2020 (Federal Election Commission 2018). He has also given to Trump allies including Missouri Senator Josh Hawley, who has repeatedly attacked Google and Facebook and sponsored multiple bills to regulate tech platforms, citing the threat that they pose to conservative speech.[5]

    Thiel’s affinity with Trumpism is not merely personal or cultural; it aligns with Palantir’s business interests. According to a 2019 report by Mijente, since Trump came into office in 2017, Palantir contracts with the United States government have increased by over a billion dollars per year. These include multiyear contracts with the US military (Judson 2019; Hatmaker 2019) and with Immigration and Customs Enforcement (ICE) (MacMillan and Dwoskin 2019); Palantir has also worked with police departments in New York, New Orleans, and Los Angeles (Alden 2017; Winston 2018; Harris 2018).

    Karp and Thiel have both described these controversial contracts using the language of “nation” and “civilization.” Confronted by critical journalistic coverage (Woodman 2017, Winston 2018, Ahmed 2018) and protests (Burr 2017, Wiener 2017), as well as internal actions by concerned employees (MacMillan and Dwoskin, 2019), Thiel and Karp have doubled down, characterizing the company as “patriotic,” in contrast to its competitors. In an interview conducted at Davos in January 2019, Karp said that Silicon Valley companies that refuse to work with the US government are “borderline craven” (2019b). At a speech at the National Conservatism Conference in July 2019, Thiel called Google “seemingly treasonous” for doing business with China, suggested that the company had been infiltrated by Chinese agents, and called for a government investigation (Thiel 2019a). Soon after, he published an op-ed in the New York Times that restated this case (Thiel 2019b).

    However, Karp has cultivated a very different public image from Thiel’s, supporting Hillary Clinton in 2016, saying that he would vote for any Democratic presidential candidate against Trump in 2020 (Chafkin 2019), and—most surprisingly—identifying himself as a Marxist or “neo-Marxist” (Waldman et al. 2018, Mac 2017, Greenberg 2013). He also refers to himself as a “socialist” (Chafkin 2019) and according to at least one journalist, regularly addresses his employees on Marxian thought (Greenberg 2013). On one level, Karp’s dissertation clarifies what he means by this: For a time, he engaged deeply with the work of several neo-Marxist thinkers affiliated with the Institute for Social Research in Frankfurt. On another level, however, Karp’s dissertation invites further perplexity, because right wing movements, including Trump’s, evince special antipathy for precisely that tradition.

    Starting in the early 1990s, right-wing think tanks in both Germany and the United States began promoting conspiratorial narratives about critical theory. The conspiracies allege that, ever since the failure of “economic Marxism” in World War I, “neo-“ or “cultural Marxists” have infiltrated academia, media, and government. From inside, they have carried out a longstanding plan to overthrow Western civilization by criticizing Western culture and imposing “political correctness.” To the extent that it attaches to real historical figures, the story typically begins with Antonio Gramsci and György Lukács, goes through Max Horkheimer, Theodor Adorno, and other wartime émigrés to the United States, particularly those involved in state-sponsored mass media research, and ends abruptly with Herbert Marcuse and his influence on student movements of the 1960s (Moyn 2018; Huyssen 2017; Jay 2011; Berkowitz 2003).

    The term “Cultural Marxism” directly echoes the Nazi theory of “Cultural Bolshevism”; the early proponents of the Cultural Marxism conspiracy theory were more or less overt antisemites and white nationalists (Berkowitz 2003). However, in the 2000s and 2010s, right wing politicians and media personalities helped popularize it well beyond that sphere.[6] Over the same period, it has gained traction in Silicon Valley, too. In recent years, several employees at prominent tech firms have publicly decried the influence of Cultural Marxists, while making complaints about “political correctness” or lack of “viewpoint diversity.”[7]

    Thiel has long expressed similar frustrations.[8] So how is it that this prominent opponent of “cultural Marxism” works with a self-described neo-Marxist CEO? Aggression in the Life-World casts light on the core beliefs that animate their partnership. The idiosyncratic adaptation of Western Marxism that it advances does not in fact place Karp at odds with the nationalist projects that Thiel has advocated, and Palantir helps enact. On the contrary, by attempting to render critical theoretical concepts “systematic,” Karp reinterprets them in a way that legitimates the work he would go on to do. Shortly before Palantir began developing its infrastructure for identification and authentication, Aggression in the Life-World articulated an ideology of these processes.

    Freud Returns to Frankfurt

    Tech industry legend has it that Karp wrote his dissertation under Jürgen Habermas (Silicon Review 2018; Metcalf 2016; Greenberg 2013). In fact, he earned his doctorate from a different part of Goethe University than the one in which Habermas taught: not at the Institute for Social Research but in the Division of Social Sciences. Karp’s primary reader was the social psychologist Karola Brede, who then held a joint appointment at Goethe University’s Sociology Department and at the Sigmund Freud Institute; she and her younger colleague Hans-Joachim Busch are listed as supervisors on the front page. The confusion is significant, and not only because it suggests an exaggeration. It also obscures important differences of emphasis and orientation between Karp’s advisors and Habermas. These differences directly shaped Karp’s graduate work.

    Habermas did engage with psychoanalysis early in his career. In the spring and summer of 1959, he attended every one of a series of lectures organized by the Institute for Social Research to mark the centenary of Freud’s birth (Müller-Doohm 2016, 79; Brede and Mitscherlich-Nielsen 1996, 391). He went on to become close friends, and even occasionally co-teach (Brede and Mitscherlich-Nielsen 1996, 395), with one of the organizers and speakers of this series, Alexander Mitscherlich, who had long campaigned with Frankfurt School founder Max Horkheimer for the funds to establish the Sigmund Freud Institute and became its first director when it opened the following year. In 1968, shortly after Mitscherlich and his wife, Margarete, published their influential book, The Inability to Mourn, Habermas developed his first systematic critical social theory in Knowledge and Human Interests (1972). Nearly one third of that book is devoted to psychoanalysis, which Habermas treats as exemplary of knowledge constituted by the “critical” or “emancipatory interest”—that is, the species interest in engaging in critical reflection in order to overcome domination. However, in the 1970s, Habermas turned away from that book’s focus on philosophical anthropology toward the ideas about linguistic competence that culminated in his Theory of Communicative Action; in 1994, Margarete Mitscherlich recounted that Habermas had “gotten over” psychoanalysis in the process of writing that book (1996, 399). Karp’s interest in the theory of the drives, and in aggression in particular, was not drawn from Habermas but from scholars at the Freud Institute, where it was a major focus of research and public debate for decades.

    Freud himself never definitively decided whether he believed that a death drive existed. The historian Dagmar Herzog has shown that the question of aggression—and particularly the question of whether human beings are innately driven to commit destructive acts—dominated discussions of psychoanalysis in West Germany in the 1960s and 1970s. “In no other national context would the attempt to make sense of aggression become such a core preoccupation,” Herzog writes (2016, 124). After fascism, this subject was highly politicized. For some, the claim that aggression was a primary drive helped to explain the Nazi past: if all humans had an innate drive to commit violence, Nazi crimes could be understood as an extreme example of a general rule. For others, this interpretation risked naturalizing and normalizing Nazi atrocities. “Sex-radicals” inspired by the work of Wilhelm Reich pointed out that Freud had cited the libido as the explanation for most phenomena in life. According to this camp, Nazi aggression had been the result not of human nature but of repressive authoritarian socialization. In his own work, Mitscherlich attempted to elaborate a series of compromises between the conservative position (that hierarchy and aggression were natural) and the radical one (that new norms of anti-authoritarian socialization could eliminate hierarchy entirely; Herzog 2016, 128-131). Klaus Horn, the long-time director of the division of social psychology at the Freud Institute, whose collected writings Karp’s supervisor Hans-Joachim Busch edited, contested the terms of the disagreement. The entire point of sophisticated psychoanalysis, Horn argued, was that culture and biology were mutually constitutive and interacted continuously; to name one or the other as the source of human behavior was nonsensical (Herzog 2016, 135).

    Karp’s primary advisor, Karola Brede, who joined the Sigmund Freud Institute in 1967, began her career in the midst of these debates (Bareuther et al. 1989, 713). In her first book, published in 1972, Brede argued that “psychosomatic” disturbances had to be understood in the context of socialization processes. Not only did neurotic conflicts play a role in somatic illness; such illness constituted “socio-pathological” expressions of an increase in the forms of repression required to integrate individuals into society (Brede 1972). In 1976, Brede published a critique of Konrad Lorenz, whose bestselling work, On Aggression, had triggered much of the initial debate with Alexander Mitscherlich and others at the Institute, in the journal Psyche (“Der Trieb als humanspezifische Kategorie”; see Herzog 2016, 125-7).  Since the 1980s, her monographs have focused on work and workplace sociology, and on the role that psychoanalysis should play in critical social theory. Individual and Work (1986) explored the “psychoanalytic costs involved in developing one’s own labor power.” The Adventures of Adjusting to Everyday Work (1995) drew on empirical studies of German workplaces to demonstrate that psychodynamic processes played a key role in professional life, shaping processes of identity formation, authoritarian behavior, and gendered self-identity in the workplace. In that book, Brede criticizes Habermas for undervaluing psychoanalytic concepts—and unconscious aggression in particular—as social forces. Brede argues that the importance that Habermas assigned to “intention” in Theory of Communicative Action prevented him from recognizing the central role that the unconscious played in constituting identity, action, and subjectivity (1995, 223 & 225). 
    At the same time, she was editing multiple volumes on psychoanalytic theory, including feminist perspectives in psychoanalysis, and in a series of journal articles in the 1990s she developed a focus on antisemitism and Germany’s relationship to its troubled history (Brede 1995, 1997, 2000).

    During his time as a PhD student, Karp seems to have worked very closely with Brede. The sole academic journal article that he published, he co-authored with her in 1997. (An analysis of Daniel Goldhagen’s bestselling 1996 study, Hitler’s Willing Executioners, the article attempted to build on Goldhagen’s thesis by characterizing a specific, “eliminationist” form of antisemitism that Karp and Brede argued could only be understood from the perspective of Freudian psychoanalytic theory; see Brede and Karp 1997, 621-6.) Karp wrote the introduction for a volume of the Proceedings of the Freud Institute, which Brede edited (Brede et al. 1999, 5-7). The chapter that Karp contributed to that volume would appear in his dissertation, three years later, in almost identical form. Karp’s dissertation also closely followed the themes of Brede’s research.

    Aggression in the Life-World

    The full title of Karp’s dissertation captures its patchwork quality: Aggression in the Life-World: Expanding Parsons’ Concept of Aggression Through a Description of the Connection Between Jargon, Aggression, and Culture. “This work began,” the opening sentences recall, “with the observation that many statements have the effect of relieving unconscious drives, not in spite, but because, of the fact that they are blatantly irrational” (Karp 2002, 2). Karp proposes that such statements provide relief by allowing a speaker to have things both ways: to acknowledge the existence of a social order and, indeed, demonstrate specific knowledge of that order while, at the same time, expressing taboo wishes that contravene social norms. As a result, rather than destroy social order, such irrational statements integrate the speaker into society while also providing compensation for the pains of being integrated. To describe these kinds of statements, Karp indicates that he will borrow a concept from the late work of Adorno: “jargon.” However, Karp announces that he will critique Adorno for depending too much on the very phenomenological tradition that his Jargon of Authenticity is meant to criticize. Adorno’s concept is not a concept at all, Karp alleges, but a “reservoir for collecting Adorno-poetry” (Sammelbecken Adornoscher Dichtung) (2002, 58). Karp’s own goal is to clarify jargon into an analytical concept that could then be incorporated into a classical sociological framework. As synecdoche for classical sociology, Karp takes the work of Talcott Parsons.

    The second chapter of Karp’s dissertation, a reading and critique of Parsons, had appeared in the Freud Institute publication, Cases for the Theory of the Drives. In his editor’s introduction to that volume, Karp had stated that the goal of their group had been to integrate psychoanalytic concepts in general and Freud’s theory of the drives in particular into frameworks provided by classical sociology. The volume begins with an essay by Brede on the failure of sociology as a discipline to account for the role that aggression plays in social integration. (Brede 1999, 11-45, credits Georg Simmel with having developed an account of the active role that aggression played in creating social cohesion; more on that below.) Karp reiterates Brede’s complaint, directing it against Parsons, whose account of aggression he calls “incomplete” or “watered down” (2002, 11). In the version that appears in his dissertation, several sections of literature review establish background assumptions and describe what Karp takes to be Parsons’ achievement: integrating the insights of Émile Durkheim and Sigmund Freud. Taking, from Durkheim, a theory of how societies develop systems of norms, and from Freud, how individuals internalize them, Parsons developed an account of culture as the site where the integration of personality and society takes place.

    For Parsons, as Karp reads him, culture itself is best understood as a system constituted through “interactions.” Karp credits Parsons with shifting the paradigm from a subject of consciousness to a subject in communication—translating the Freudian superego into sociological form, so that it appears, not as a moral enforcer, but as a psychic structure communicating cultural norms to the conscious subject. Yet Karp protests that there are, in fact, parts of personality not determined by culture, and not visible to fellow members of a culture so long as an individual does not deviate from established norms of interaction. Parsons’ theory of aggression remains incomplete on at least two counts, then. First, Karp argues, Parsons fails to recognize aggression as a primary drive, treating it only as a secondary result that follows when the pleasure principle finds itself thwarted. Karp, by contrast, adopts the position that a drive toward death or destruction is at least as fundamental as the pleasure principle. Second, because Parsons defines aggression in terms of harms to social norms, he cannot explain how aggression itself can become a social norm, as it did in Nazi Germany. For an explanation of how aggressive impulses come to be integrated into society, Karp turns instead to Adorno.

    In Adorno’s Jargon of Authenticity, Karp found an account of how aggression constitutes itself in language and, through language, mediates social integration (2002, 57). Adorno’s lengthy essay, which he had originally intended to constitute one part of Negative Dialectics, resists easy summary. The essay begins by identifying theological overtones that, Adorno says, emanate from the language used by German existentialists—and by Martin Heidegger in particular. Adorno cites not only “authenticity,” but terms like “existential,” “in the decision,” “commission,” “appeal,” and “encounter” as exemplary (3). While the existentialists claim that such language constitutes a form of resistance to conformity, Adorno argues that it has in fact become highly standardized: “Their unmediated language they receive from a distributor” (14). Making fetishes of these particular terms, the existentialists decontextualize language in several respects. They do so at the level of the sentence—snatching certain favored words out of the dialectical progression of thought as if meaning could exist without it. At the same time, the existentialist presents “words like ‘being’ as if they were the most concrete terms” and could obviate abstraction, the dialectical movement within language. The function of this rhetorical practice is to make reality seem simply present, and give the subject an illusion of self-presence—replacing consciousness of historical conditions with an illusion of immediate self-experience. The “authenticity” generated by jargon therefore depends on forgetting or repressing the historically objective realities of social domination.

    Beyond simply obscuring the realities of domination, Adorno continues, the jargon of authenticity spiritualizes them.  For instance, Martin Heidegger turns the real precarity of people who might at any time lose their jobs and homes into a defining condition of Dasein: “The true need for residence consists in the fact that mortals must first learn to reside” (26). The power of such jargon—which transforms the risk of homelessness into an essential trait of Dasein—comes from the fact that it expresses human need, even as it disavows it. To this extent, jargon has an a- or even anti-political character: it disguises the current and contingent effects of social domination as eternal and unchangeable characteristics of human existence. “The categories of jargon are gladly brought forward, as though they were not abstracted from generated and transitory situations but rather belonged to the essence of man,” Adorno writes. “Man is the ideology of dehumanization” (48). Jargon turns fascist insofar as it leads the person who uses it to perceive historical conditions of domination—including their own domination—as the very source of their identity. “Identification with that which is inevitable remains the only consolation of this philosophy of consolation,” Adorno writes. “Its dignified mannerism is a reactionary response to the secularization of death” (143, 144).

    Karp says at the outset that his goal is to make Adorno’s collection of observations about jargon “systematic.” In order to do so, he approaches the subject from a different perspective than Adorno did: focused on the question of what psychological needs jargon fulfills. For Karp, the achievement of jargon lies in its “double function” (Doppelfunktion). Jargon both acknowledges the objective forces that oppress people and allows people to adapt or accommodate themselves to those same forces by eternalizing them—removing them from the context of the social relations where they originate, and treating them as features of human existence in general. Jargon addresses needs that cannot be satisfied, because they reflect the realities of living in a society characterized by domination, but also cannot be acted upon, because they are taboo. For Karp, insofar as jargon is a kind of speech that designates speakers as belonging to an in-group, it also expresses an unconscious drive toward aggression. In jargon we see the aggression that drives individuals to exclude others from the social world doing its binding work. It is on these grounds that Karp argues that aggression is a constitutive part of jargon—its ever-present, if unacknowledged, obverse.

    Karp grants that Adorno is concerned with social life. The Jargon of Authenticity investigates precisely the social function of ontology, or how it turns “authenticity” into a cultural form, circulated within mass culture. Adorno also alludes to the specifically German inheritance of jargon—the resemblance between Heidegger’s celebration of völkisch rural life and Nazi celebration of the same (1973, 3). Yet, Karp argues, Adorno does not provide an account of how a deception or illusion of authenticity came to be a structure in the life-world. Even as he criticizes phenomenological ontology, Adorno relies on a concept of language that is itself phenomenological. Echoing critiques by Axel Honneth (1991) of Horkheimer and Adorno’s failures to account for the unique domain of “the social,” Karp turns to the same thinkers Karola Brede used in her article on “Social Integration and Aggression”: Sigmund Freud and Georg Simmel.

    In that article, Brede develops a reading that joins Freud and Simmel’s accounts of the role of the figure of “the stranger” in modern societies. In Civilization and its Discontents, Brede argues, Freud described “strangers” in terms that initially appear incompatible with the account Simmel had put forth in his famous 1908 “Excursus on the Stranger.” Simmel described the mechanisms whereby social groups exclude strangers in order to eliminate danger—thereby controlling the “monstrous reservoir of aggressivity” that would otherwise threaten social structure. (The quote is from Parsons.) Freud wrote that, despite the Biblical commandment to love our neighbors, and the ban on killing, we experience a hatred of strangers, because they make us experience what is strange in us, and fear what in them cannot be fit into our cultural models. Brede concludes that it is only by combining Freudian psychodynamics with Simmel’s account of the role of exclusion in social formation that critical social theory could account for the forms of violence that dominated the history of the twentieth century (Brede 1999, 43).

    Karp contrasts Adorno with both Freud and Simmel, and finds Adorno to be more pessimistic than either of these predecessors. Whereas Freud argued that culture successfully repressed both libidinal and destructive drives in the name of moral principles, Karp writes, Adorno regarded culture as fundamentally amoral. Rather than successfully repressing antisocial drives, late capitalist culture sates its members with “false satisfactions.” People look for opportunities to express their needs for self-preservation. However, since they know that their needs cannot be fully satisfied, they simultaneously fall over themselves to destroy the memory of the false fulfillment they have had. Repressed awareness of the false nature of their own satisfaction produces the ambient aggression that people take out on strangers.

    For Simmel, the stranger is part of all modern societies, Karp writes. For Adorno, the stranger extends an invitation to violence. Jargon gains its power from the fact that those who speak, and hear, it really are searching for a lost community. The very presence of the stranger demonstrates that such community cannot be simply given; jargon is powerful precisely in proportion to how much the shared context of life has been destroyed.  It therefore offers a “dishonest answer to an honest longing” for intersubjectivity, gaining strength in proportion to the intensity of the need that has been thwarted (Karp 2002, 85).  Wishes that contradict social norms are brought into the web of social relations (Geflecht der Lebenswelt), in such a way that they do not need to be sanctioned or punished for violating social norms (91). On the contrary, they serve to bind members of social groups to one another.

    Testing Jargon

    As a case study to demonstrate the usefulness of his modified concept of jargon, Karp takes up a notorious episode in post-Wall German intellectual history: a speech that the celebrated novelist Martin Walser gave in October 1998, at St. Paul’s Church in Frankfurt. The occasion was Walser’s acceptance of the 1998 Peace Prize of the German Book Trade. The novelist had traveled a complex political itinerary by the late 1990s. Documents released in 2007 revealed that as a teenager, during the final years of the Second World War, Walser had joined the Nazi Party and fought as a member of the Wehrmacht. But he first became publicly known as a left-wing writer. In the 1950s, Walser attended meetings of the informal but influential German writers’ association Gruppe 47 and received their annual literary prize for his short story, “Templones Ende”; in 1964 he attended the Frankfurt Auschwitz trials, where low-ranking officials were charged and convicted for crimes that they had perpetrated during the Holocaust. In his 1965 essay about that experience, “Our Auschwitz,” Walser insisted on the collective responsibility of Germans for the horrors of the Nazi period; indeed he criticized the emphasis on spectacular cruelty at the trial, and in the media, to the extent that this emphasis allowed the public to maintain an imaginary distance between themselves and the Nazi past (Walser 2015, 217-56). Walser supported Social Democratic Party member Willy Brandt for Chancellor and even joined the German Communist Party during that decade. By the 1980s, however, Walser was widely perceived to have migrated back to the right. And when he gave his speech “Experiences Composing a Sermon” on the sixtieth anniversary of Kristallnacht, he used the occasion to attack the public culture of Holocaust remembrance. Walser described this culture as a “moral cudgel” or “bludgeon” (Moralkeule).

    “Experiences Composing a Sermon” adopts a stream of consciousness, rather than argumentative, style in order to explain why Walser refused to do what he said was expected of him: to speak about the ugliness of German history. Instead, he argued that no further collective memorialization of the Holocaust was necessary. There was no such thing, he said, as collective or shared conscience at all: conscience should be a private matter. Critics and intellectuals he disparaged as “preachers” were “instrumentalizing” and “vulgarizing” memory, when they exhorted the public constantly to reflect on the crimes of the Nazi period. “There is probably such a thing as the banality of good,” Walser quipped, echoing Hannah Arendt (2015, 513). He did not spell out what ends he thought that these “preachers” aimed to instrumentalize German guilt for. He concluded by abruptly calling on the newly elected president Roman Herzog, who was in attendance, to free the former East German spy, Rainer Rupp, from prison. Walser’s speech received a standing ovation—though not, notably, from Ignatz Bubis, then the president of the Central Council of Jews in Germany, who was also in attendance. The next day, in the Frankfurter Allgemeine Zeitung, Bubis called the speech an act of “intellectual arson” (geistige Brandstiftung). The controversy that followed generated a huge amount of debate among German intellectuals and in the German and international media (Cohen 1998). Two months later, the offices of the Frankfurter Allgemeine Zeitung hosted a formal debate between the two men. It lasted for four hours. FAZ published a transcript of their conversation in a special supplement (Walser and Bubis 1999).

    In February and March 1999, Karola Brede delivered two lectures about the controversy at Harvard University, which she subsequently published in Psyche (2000, 203-33). Brede examined both the text of Walser’s original speech and the transcript of his debate with Bubis in order to determine, first, why Walser’s speech had been received so enthusiastically, and second, whether Walser, despite eschewing explicitly antisemitic language, had in fact “taken the side of anti-Semites.” In order to explain why Walser’s speech had attracted so much attention, Brede carried out a close textual analysis. She found that, although Walser had not presented a very cogent argument, he had successfully staged a “relieving rhetoric” (Entlastungsrhetorik) that freed his audience from the sense of awkwardness or self-consciousness that they felt talking about Auschwitz in public and replaced these negative feelings with a positive sense of heightened self-regard. Brede argued that Walser used jargon, in the sense of Adorno’s “jargon of authenticity,” in order to flatter listeners into thinking that they were taking part in a daring intellectual exercise, while in fact activating anti-intellectual feelings. (In a footnote she recommended an “unpublished paper” by Karp, presumably from his dissertation, for further reading; Brede 2000, 215). She concluded that indeed Walser had taken the side of antisemites because, in both his speech and his subsequent debate with Bubis, he constructed a point of identification for listeners (“we Germans”) that systematically excluded German Jews (203). By organizing his speech entirely around “perpetrators” and the “critics” who shamed them, Walser elided the perspective of the Nazis’ victims. Invoking Simmel’s essay on “The Stranger” again, Brede argued that Walser’s behavior during his debate with Bubis offered a model of how unconscious aggression could drive social integration through exclusion.
Regardless of what Walser said he felt, to the extent that his rhetoric excluded Bubis, as a Jew, from his definition of “we Germans,” his conduct had been antisemitic.

    In the final chapter of his dissertation, Karp also offers a reading of Walser’s prize acceptance speech, arguing that Walser made use of jargon in Adorno’s sense. Like Brede, Karp bases his argument on close textual analysis. He catalogs several specific literary strategies that, he says, enabled Walser to appeal to the unconscious or repressed emotions of his listeners without having to convince them. First, Karp tracks how Walser played with pronouns in the opening movement of the speech in order to eliminate distance and create identification between himself and his audience. Walser shifted from describing himself in the third person singular (the “one who had been chosen” for the prize) to the first-person plural (“we Germans”). At the same time, by making vague references to intellectuals who had made public remembrance and guilt compulsory, Walser created the sensation that he and the listeners he had invited to identify with his position (“we”) were only responding to attacks from outside—that “we” were the real victims. (In her article, Brede had quipped that this narrative of victimhood “could have come from a B-movie Western”; Brede 2000, 214). Through this technique, Karp writes, Walser created the impression that if “we” were to strike back against the “Holocaust preachers,” this would only be an act of self-defense.

    Karp stresses that the content of “Experiences Composing a Sermon” was less important than the effect that these rhetorical gestures had of making listeners feel that they belonged to Walser’s side. In the controversy that followed Walser’s acceptance speech, critics often asked which “intellectuals” he had meant to criticize; these critics, Karp says, missed the point. It was not the content of the speech, but its form, that mattered. It was through form that Walser had identified and addressed the psychological needs of his audience. That form did not aim to convince listeners; it did not need to. It simply appealed to (repressed) emotions that they were already experiencing.

    For Adorno, the anti-political or fascist character of jargon was directly tied to the non-dialectical concept of language that jargon advanced. By eliminating abstraction from philosophical language, and detaching selected words from the flow of thought, jargon made absent things seem present. By using such language, existentialism attempted to construct an illusion that the subject could form itself outside of history. By raising historically contingent experiences of domination to defining features of the human, jargon presented them as unchangeable. And by identifying humanity itself with those experiences, it identified the subject with domination.

    Karp does not demonstrate that Walser’s “jargon” performed any of these functions, precisely. Rather, he focuses on the psychodynamics motivating his speech. Karp proposes that the pain (Leiden) that Walser’s speech expressed resembled the “domination” (Zwang) that Adorno recognized in jargon. While Adorno’s jargon made the absent or abstract seem present, through an act of linguistic fetishization, Walser’s jargon embodied the obverse impulse: to wish the discomfort created by the presence of history’s victims away.

    Karp is less concerned with the history of domination, that is, than with Freudian drives. For Adorno, the purpose of carrying out a determinate negation of jargon was to create the conditions of possibility for critical theory to address the real needs to which jargon constituted a false response. For Karp, the interest of the project is more technical: his goal is to uncover forms and patterns of speech that admit aggression into social life and give it a central role in consolidating identity. By combining culturally legitimated expressions with taboo ones, Karp argues, Walser created an environment in which his controversial opinion could be accepted as “obvious” or “self-evident” (selbstverständlich) by his audience. That is, Walser created a linguistic form through which aggression could be integrated into the life-world.

    Unlike Adorno (or Brede), Karp refrains from making any normative assessment of this achievement. His “systematization” of the concept of jargon empties that concept of the critical force that Adorno meant for it to carry. If anything, the tone of the final pages of Aggression in the Life-World is forgiving. Karp concludes by arguing that Walser was not necessarily aware of the meaning of his speech—indeed, that he probably was not. By allowing his audience to express their taboo wishes to be done with Holocaust remembrance, Karp writes, Walser convinced them that, “these taboos should never have existed.” Then he cuts to his bibliography.

    Grand Hotel California Abyss

    The abruptness of the ending of Aggression in the Life-World is difficult to interpret. At one level, Karp’s apparent lack of interest in the ethical and political implications of his case study reflects his stated goals and methods. From the beginning, he has set out to reveal that the social is constituted through acts of unconscious aggression, and that this aggression becomes legible in specific linguistic interactions, rather than to evaluate the effects of aggression itself. Reading Walser, Karp explicitly privileges form over content, treating the former as symptomatic of unstated meanings and effects. Granting the critic authority over the text he is analyzing, such an approach presumes the author under analysis to be ignorant, if not innocent, of what he really has at stake; it treats conscious attitudes and overt arguments as holding, at most, a secondary interest. At another level, the banal explanations for Karp’s tone and brevity may be the most plausible. He was writing in a non-native language; like many graduate students, he may have finished in haste.[9] In any case, his decision to eschew the kinds of judgments made by both his subject, Adorno, and his mentor, Brede is striking—all the more so because Karp is descended from German Jews and “grew up in a Jewish family” (Karp 2019a). This choice reflects a different mode of engagement with critical theory than scholars of either digital media or digitally mediated right-wing movements have observed.

    Historians have shown that the Frankfurt School critiques of mass media helped shape the idea that digital media could constitute a more democratic alternative. Fred Turner has argued that the research Adorno conducted on the role of radio and cinema in shaping the authoritarian personality, as well as the proximity of Frankfurt School scholars to the Bauhaus and other practicing artists, generated a set of beliefs about the democratic character of interactivity (Turner 2013). Orit Halpern is more critical of the essentially liberal assumptions of media and technology critique in which she, too, places Adorno (2015, 18-19). However, like Turner, Halpern identifies the emergence of interactivity as a key epistemic shift away from the Frankfurt School paradigm that opposed “attention” and “distraction.”  Cybernetics redefined the problem of “spectatorship” by transforming the spectator from an individual into a site of perceptions and cognitions—an “interface or infrastructure for information processing.” Where radio, cinema, and television had promoted conformity and passivity, cybernetic media promised to facilitate individual choice and free expression (2015, 224-6).

    More recently, critics and scholars attempting to account for the phobic fascination that new right-wing movements show for “cultural marxism” have analyzed it in a variety of ways. The least sophisticated take at face value the claims of “alt-right” figures that they are only reacting to the ludicrous and pernicious excesses of their opponents.[10] More substantial interpretations have described the far right fixation on the Frankfurt School as a “dialectic of counter-Enlightenment” or form of “inverted appropriation.” Martin Jay (2011) and Andreas Huyssen (2017, 2019) both argue that the attraction of critical theory for the right lies in the dynamics of projection and disavowed recognition that it sets in motion. As Huyssen puts it, “wider circles of American white supremacists and their publications… have been drawn to critique and deconstruction because, on those traditions, they project their own destructive and nihilistic tendencies” (2017).

    Aggression in the Life-World does none of these things. Karp’s dissertation does not take up the critiques of mass media or the authoritarian personality that were canonized in the Anglo-American world at all, much less use them to develop democratic alternatives. Nor does it project its own penchant for destruction onto its subjects. In contrast with the “lunatic fringe” (Jay 2011, 30), Karp does not carry out an “inverted appropriation” of critical theory, so much as a partial one.  He adapts Frankfurt School concepts for technical purposes, making them more instrumentally useful to the disciplines of sociology or social psychology by abstracting them from their contexts. In the process, he also abandons the Frankfurt School commitment to emancipation. It is at this level of abstraction that his neo-Marxism—from which Marx and materialism have all but disappeared—can coexist with the nationalism that he and Thiel invoke to defend Palantir.

    I asked at the beginning of this paper what beliefs Karp shares with Peter Thiel and what their common commitments might reveal about the self-consciously “contrarian” or “heterodox” network of actors that they inhabit. One answer that Aggression in the Life-World makes evident is that both men regard the desire to commit violence as a constant, founding fact of human life. Both also believe that this drive expresses itself in social forms like language or group structure, even if speakers or group members remain unaware of their own motivations. These are ideas that Thiel attributes to the work of the eclectic French theorist René Girard, with whom he studied at Stanford, and whose theories of mimetic desire, scapegoating, and herd mentality he has often cited. In 2006 Thiel’s nonprofit foundation established an institute to promote the study of Girard and support the further development of mimetic theory; this organization, Imitatio, remains one of the foundation’s three major projects (Daub 2020, 97-112).

    The text that Karp chose to analyze, as his case study, also shares a set of concerns with Thiel’s writings and statements against campus multiculturalism and political correctness; Walser’s speech became a touchstone of debates about historical memory in Germany, in which the newly imported Americanism politische Korrektheit circulated widely. In his dissertation, Karp does not celebrate Walser’s taboo speech in the same way that Thiel and his associates have sometimes celebrated violations of speech norms.[11] However, he does assert that jargon, and the unconscious aggression that it expresses, plays a role in the formation of all social groups, and refrains from evaluating whether Walser’s jargon was particularly problematic. Of course, the term “jargon” itself became a commonplace during the U.S. culture wars in the 1980s and 1990s, used to accuse academics and university administrators who purported to be speaking for vulnerable populations of in fact deploying obscure terms to aggrandize themselves. Thiel and his co-author David O. Sacks devote a chapter of The Diversity Myth to an account of how the vagueness of the word “multiculturalism” enabled activists and administrators at Stanford to use it in this manner (1995, 23-49). The idea that such terms express ressentiment and a will to power is consistent with the theoretical framework that Karp went on to develop.

    Ironically, by attempting to purge jargon of its subjective or impressionistic content, Karp renders it less materially objective. Rather than locating jargon in specific experiences of modernity, he transforms it into an expression of drives that, because they are timeless, are merely psychological. Karp makes a version of the eternalizing move that Adorno criticizes in Heidegger, in other words. Rather than elevating precarity into the essence of the human, Karp makes aggressive violence the substance of the social. In the process, he empties the concept of jargon of its critical power. When he arrives at the end of Walser’s speech, a speech that Karp characterizes as consolidating community based on unspeakable aggression, he can conclude only that it was effective.

    A still greater irony in retrospect may be how, in Karp’s telling, Adorno’s jargon anticipates the software tools Palantir would develop. By tracing the rhetorical patterns that constitute jargon in literary language, Karp argues that he can reveal otherwise hidden identities and affinities—and the drive to commit violence that lies latent in them. By looking back to Adorno, he points toward a possible critique of big data analytics as a kind of authenticity jargon: a way of generating and eternalizing false forms of selfhood. In data analysis, the role of the analyst is not to demystify and dispel reification. On the contrary, it is precisely to fix identity from its digital traces and to make predictions on the basis of the same. For Adorno, jargon is a form of language that seems to authenticate identity—but only seems to. The identities it makes available to the subject are based on an illusion that jargon sustains by suppressing the self-difference that historicity introduces into language. The illusion it offers is of timeless “human” experience. It covers for domination insofar as it makes the human condition—or rather, human conditions as they are at the time of speaking—appear unchangeable.

    Big data analytics could be said to constitute an authenticity jargon in this sense: although they treat the data set under analysis as having something like an unconscious, they eliminate the temporal gaps and spaces of ambiguity that drive psychoanalytic interpretation. In place of interpretation, data analytics substitutes correlations that it treats simply as given. To a machine learning algorithm that has been trained on data sets that include zip codes and rates of defaulting on mortgage payments, for instance, it does not matter why mortgagees in a given zip code may have been more likely to default in the past. Nor will the algorithm that recommends rejecting a loan application necessarily explain that the zip code was the deciding factor. Like the existentialist’s illusion of immediate experience, these procedures generate an aura of incontestable self-evidence.
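    The mechanism described above can be caricatured in a few lines of code. The following sketch is purely illustrative: it is drawn neither from this essay nor from any actual credit-scoring system, and the zip codes, records, and threshold are all fabricated. It shows a "model" that has learned nothing but historical default rates per zip code, and that returns a recommendation without attaching any reason to it:

```python
from collections import defaultdict

# Fabricated historical records: (zip_code, defaulted) pairs.
historical = [
    ("02139", 0), ("02139", 0), ("02139", 1),
    ("60623", 1), ("60623", 1), ("60623", 0),
]

# "Training" reduces the history to a per-zip default rate,
# a correlation treated simply as given.
counts = defaultdict(lambda: [0, 0])  # zip -> [defaults, total]
for zip_code, defaulted in historical:
    counts[zip_code][0] += defaulted
    counts[zip_code][1] += 1

def recommend(zip_code: str, threshold: float = 0.5) -> str:
    """Recommend approving or rejecting a loan application.

    The decision rests entirely on past defaults in the applicant's
    zip code; the output carries no explanation, and nothing records
    that zip code was the deciding factor.
    """
    defaults, total = counts[zip_code]
    rate = defaults / total if total else 0.0
    return "reject" if rate >= threshold else "approve"

print(recommend("60623"))  # prints "reject": 2 of 3 past loans defaulted
print(recommend("02139"))  # prints "approve": 1 of 3 defaulted
```

    An applicant in the second zip code is refused on the basis of neighbors' past defaults alone; why those defaults occurred never enters the procedure, and the rejection arrives with the aura of self-evidence the paragraph above describes.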

    As in Adorno, here, the loss of particular contexts can serve to conceal, and thus perpetuate, domination. Algorithms take the histories of oppression embedded in training data and project them into the future, via predictions that powerful institutions then act on. If the identities constituted in this way are false, the reifications they generate do real work, and can cause real harm. And yet, to read these figures historically is to recognize that they need not come true. This is not an interpretive path that Karp pursues. But for those of us concerned about the relationship between digital technologies and justice, this repressed insight of his dissertation is the most critical to follow.

    _____

    Moira Weigel is a Junior Fellow at the Harvard Society of Fellows and an editor and cofounder of Logic Magazine. She received her PhD from the combined program in Comparative Literature and Film and Media Studies at Yale University in 2017.

    Back to the essay

    _____

    Notes

    [1] Translations from German are mine unless otherwise noted.

    [2] In 2017, when activists doxxed the founder of the neofascist blog the Right Stuff and the antisemitic podcasts Fash the Nation and The Daily Shoah, who went by the alias Mike Enoch, they revealed that he was in fact a programmer named Michael Peinovich (Marantz 2019, 275-9). Curtis Yarvin, who wrote a widely read blog advocating the end of democracy under the name Mencius Moldbug, also worked as a software engineer (Gray 2017). Several journalists have documented the interest that figures in or adjacent to the tech industry evince in Yarvin’s Neoreaction (NRx) or Dark Enlightenment (Gray 2017; Goldhill 2017). Prominent white nationalist media entrepreneurs also claim to have substantial followings in the tech industry. In 2017, Andrew Anglin told a Mother Jones reporter that Santa Clara County was the highest source of inbound traffic to his website, The Daily Stormer; Chuck Johnson said the same about his (now defunct) website Got News (Harkinson 2017). In response to an interview question about his “average” supporter, the white nationalist Richard Spencer claimed that, “many in the Alt-Right are tech savvy or actually tech professionals” (Hawley 2017, 78).

    [3] James Damore, the engineer who wrote the July 2017 memo, “Google’s Ideological Echo Chamber,” and was subsequently fired, toured the right wing speaking circuit (Tiku 2019, 85-7). Brian Amerige, the Facebook engineer who identified himself to the New York Times in July 2018 as the creator of a conservative group on Facebook’s internal forum, Workplace, and then left the company, did the same (Conger and Frankel 2018). Shortly after, it was reported that Oculus cofounder Palmer Luckey’s departure from the company in 2017 had also been driven by conflicts with management over his support of Donald Trump (Grind and Hagey 2018); Luckey has since publicly claimed to speak on behalf of a silent majority of “tech conservatives” (Luckey 2018). Arne Wilberg, a longtime recruiter of technical employees for Google and YouTube, filed a reverse discrimination suit in 2018, alleging that he had been fired for “opposing illegal hiring practices… systematically discriminating in favor of job applicants who are Hispanic, African American, or female, against Caucasian and Asian men” (Wilberg v. Google 2018). Most recently, in August 2019, The Wall Street Journal reported that the former Google engineer Kevin Cernekee had been fired in 2017 in retaliation for expressing “conservative” viewpoints on internal listservs (Copeland 2019). Former colleagues subsequently published screenshots showing that, among other things, Cernekee had proposed raising money for a bounty for finding the masked protestor who punched Richard Spencer at the Presidential inauguration in 2017 using WeSearchr, the now-defunct fundraising platform run by Holocaust “revisionist” Chuck C. Johnson.
They also shared screenshots showing that Cernekee had defended two neo-Nazi organizations, The Traditionalist Workers Party and Golden State Skinheads, suggesting that they should “rename themselves to something normie-compatible like ‘The Helpful Neighborhood Bald Guys’ or the ‘Open Society Institute’” (Wacker 2019; Tiku 2019, 84). Like Damore, Amerige, and Wilberg, Cernekee received national media coverage.

    [4] For instance, emails that BuzzFeed reporter Joe Bernstein obtained from Breitbart.com stated that Thiel invited Curtis Yarvin to watch the 2016 election results at his home in Hollywood Hills, where he had previously hosted Breitbart tech editor Milo Yiannopoulos; New Yorker writer Andrew Marantz reported running into Thiel at the “DeploraBall” that took place on the eve of Trump’s inauguration (2019, 47-9).

    [5] Thiel supported Hawley’s campaign for Attorney General of Missouri in 2016 (Center for Responsive Politics); in that office, Hawley initiated an antitrust investigation of Google (Dave 2017) and a probe into Facebook exploitation of user data (Allen 2018). Thiel later donated to Hawley’s 2018 Senate campaign (Center for Responsive Politics); in the Senate, Hawley has sponsored multiple bills to regulate tech platforms (US Senate 2019a, 2019b, 2019c, 2019d, 2019e, 2019f, 2019g). These activities earned him praise from Trump at a White House Social Media Summit on the theme of liberal bias at tech companies, where Hawley also spoke (Trump 2019a).

    [6] Pat Buchanan devoted a chapter to the subject, entitled “The Frankfurt School Comes to America,” in his 2001 Death of the West. Breitbart editor Michael Walsh published an entire book about critical theory, in which he described it as “the very essence of Satanism” (Walsh 2016, 50). Andrew Breitbart himself devoted a chapter to it in his memoir (Breitbart 2011, 113). Jordan Peterson more often rails against “postmodernism” or “political correctness.” However, he too regularly refers to “Cultural Marxism”; at the time of writing, an explainer video that he produced for the pro-Trump Epoch Times has tallied nearly 750,000 views on YouTube (Peterson 2017).

    [7] The memo that engineer James Damore circulated to his colleagues at Google presented a version of the Cultural Marxism conspiracy in its endnotes, as fact. “As it became clear that the working class of the liberal democracies wasn’t going to overthrow their ‘capitalist oppressors,’” Damore wrote, “the Marxist intellectuals transitioned from class warfare to gender and race politics” (Conger 2017). The group that Brian Amerige started on Facebook Workplace was called “Resisting Cultural Marxism” (Conger and Frankel 2018).

    [8] The Stanford Review, which Thiel founded late in his sophomore year and edited throughout his junior and senior years at the university, devoted extensive attention to questions of speech on Stanford’s campus, which became a focal point of the US culture wars and drew international media attention when the academic senate voted to (slightly) revise its core curriculum in 1988 (see Hartman 2019, 227-30). In 1995, with fellow Stanford alumnus (and later PayPal Chief Operating Officer) David O. Sacks, Thiel published The Diversity Myth, a critique of the “debilitating” effects of “political correctness” on college campuses that, among other things, compared multicultural campus activists to “the bar scene from Star Wars” (xix). In 2018 he moved to Los Angeles, saying that political correctness in San Francisco had become unbearable (Peltz and Pierson 2018; Solon 2018) and in 2019 Founders Fund, the venture capital firm where he is a partner, announced that they would be sponsoring a conference to promote “thoughtcrime” (Founders Fund 2019).

    [9] Aggression in the Life World is significantly shorter than either of the other two dissertations submitted to the sociology department at Frankfurt that year: Margaret Ann Griesese’s The Brazilian Women’s Movement Against Violence clocked in at 314 pages, and Konstantinos Tsapakidis’s Collective Memory and Cultures of Resistance in Ancient Greek Music at 267; Karp’s is 129.

    [10] Angela Nagle (2017) put forth an extreme version of this argument, contending that the excesses of “social justice warrior” identity politics provoked the formation of the alt-right and that trolls like Milo Yiannopoulos were only replicating tactics of “transgression” that had been pioneered by leftist intellectuals like bell hooks and institutionalized on liberal campuses and in liberal media. Kakutani similarly argued that the Trumpist right was simply taking up tactics that the relativism of “postmodernism” had pioneered in the 1960s (2018, 18).

    [11] In The Diversity Myth, Sacks and Thiel describe one instance of resistance to the Stanford speech code, which was adopted in May 1990 and revoked in March 1995, as heroic. The incident took place on the night of January 19, 1992, when three members of the Alpha Epsilon Pi fraternity, Michael Ehrman, Keith Rabois, and Bret Scher, were walking home from a party through one of Stanford’s residential dormitories. Rabois, then a first-year law student, began shouting slurs at the home of a resident tutor in the dormitory, who had been involved in the expulsion of Ehrman’s brother Ken from residential housing four years earlier, after Ken called the resident tutor assigned to him a “faggot.” “Faggot! Hope you die of AIDS!” Rabois shouted. “Can’t wait until you die, faggot.” He later confirmed and defended these statements in a letter to the Stanford Daily. “Admittedly, the comments made were not very articulate, nor very intellectual nor profound,” he wrote. “The intention was for the speech to be outrageous enough to provoke a thought of ‘Wow, if he can say that, I guess I can say a little more than I thought.’” The speech code, which had not until that point been used to punish any student, was not used to punish Rabois; however, Thiel and Sacks describe the criticism of Rabois from administrators and fellow students that followed as a “witch hunt” (1995, 162-75). Rabois subsequently transferred to Harvard but went on to work with Thiel at PayPal and, later, as a partner at Founders Fund. More recently, the blog post that Founders Fund published to announce the Hereticon conference cited in Footnote 8 described violating taboos on speech as its goal: “Imagine a conference for people banned from other conferences. Imagine a safe space for people who don’t feel safe in safe spaces. Over three nights we’ll feature many of our culture’s most important troublemakers in the fields of knowledge necessary to the progressive improvement of our civilization” (2019).

    _____

    Works Cited

  • Richard Hill — Knots of Statelike Power (Review of Harcourt, Exposed: Desire and Disobedience in the Digital Age)

    a review of Bernard Harcourt, Exposed: Desire and Disobedience in the Digital Age (Harvard, 2015)

    by Richard Hill

    ~

    This is a seminal and important book, which should be studied carefully by anyone interested in the evolution of society in light of the pervasive impact of the Internet. In a nutshell, the book documents how and why the Internet turned from a means to improve our lives into what appears to be a frightening dystopia driven by the collection and exploitation of personal data, data that most of us willingly hand over with little or no care for the consequences. “In our digital frenzy to share snapshots and updates, to text and videochat with friends and lovers … we are exposing ourselves‒rendering ourselves virtually transparent to anyone with rudimentary technological capabilities” (page 13 of the hardcover edition).

    The book meets its goals (25) of tracing the emergence of a new architecture of power relations, documenting its effects on our lives, and exploring how to resist and disobey (though this last only rather succinctly). As the author correctly says (28), metaphors matter, and we need to re-examine them closely, in particular the so-called free flow of data.

    As the author cogently points out, quoting Media Studies scholar Siva Vaidhyanathan, we “assumed digitization would level the commercial playing field in wealthy economies and invite new competition into markets that had always had high barriers to entry.” We “imagined a rapid spread of education and critical thinking once we surmounted the millennium-old problems of information scarcity and maldistribution” (169).

    “But the digital realm does not so much give us access to truth as it constitutes a new way for power to circulate throughout society” (22). “In our digital age, social media companies engage in surveillance, data brokers sell personal information, tech companies govern our expression of political views, and intelligence agencies free-ride off e-commerce. … corporations and governments [are enabled] to identify and cajole, to stimulate our consumption and shape our desires, to manipulate us politically, to watch, surveil, detect, predict, and, for some, punish. In the process, the traditional limits placed on the state and on governing are being eviscerated, as we turn more and more into marketized malleable subjects who, willingly or unwillingly, allow ourselves to be nudged, recommended, tracked, diagnosed, and predicted by a blurred amalgam of governmental and commercial initiative” (187).

    “The collapse of the classic divide between the state and society, between the public and private sphere, is particularly debilitating and disarming. The reason is that the boundaries of the state had always been imagined in order to limit them” (208). “What is emerging in the place of separate spheres [of government and private industry] is a single behemoth of a data market: a colossal market for personal data” (198). “Knots of statelike power: that is what we face. A tentacular amalgam of public and private institutions … Economy, society, and private life melt into a giant data market for everyone to trade, mine, analyze, and target” (215). “This is all the more troubling because the combinations we face today are so powerful” (210).

    As a consequence, “Digital exposure is restructuring the self … The new digital age … is having profound effects on our analogue selves. … it is radically transforming our subjectivity‒even for those, perhaps even more, who believe they have nothing to fear” (232). “Mortification of the self, in our digital world, happens when subjects voluntarily cede their private attachments and their personal privacy, when they give up their protected personal space, cease monitoring their exposure on the Internet, let go of their personal data, and expose their intimate lives” (233).

    As the book points out, quoting Software Freedom Law Center founder Eben Moglen, it is justifiable to ask whether “any form of democratic self-government, anywhere, is consistent with the kind of massive, pervasive, surveillance into which the United States government has led not only its people but the world” (254). “This is a different form of despotism, one that might take hold only in a democracy: one in which people lose the will to resist and surrender with broken spirit” (255).

    The book opens with an unnumbered chapter that masterfully reminds us of the digital society we live in: a world in which both private companies and government intelligence services (also known as spies) read our e-mails and monitor our web browsing. Just think of “the telltale advertisements popping up on the ribbon of our search screen, reminding us of immediately past Google or Bing queries. We’ve received the betraying e-mails in our spam folders” (2). As the book says, quoting journalist Yasha Levine, social media has become “a massive surveillance operation that intercepts and analyses terabytes of data to build and update complex psychological profiles on hundreds of millions of people all over the world‒all of it in real time” (7). “At practically no cost, the government has complete access to people’s digital selves” (10).

    We provide all this data willingly (13), because we have no choice and/or because we “wish to share our lives with loved ones and friends” (14). We crave digital connections and recognition and “Our digital cravings are matched only by the drive and ambition of those who are watching” (14). “Today, the drive to know everything, everywhere, at every moment is breathtaking” (15).

    But “there remain a number of us who continue to resist. And there are many more who are ambivalent about the loss of privacy or anonymity, who are deeply concerned or hesitant. There are some who anxiously warn us about the dangers and encourage us to maintain reserve” (13).

    “And yet, even when we hesitate or are ambivalent, it seems there is simply no other way to get things done in the new digital age” (14), be it airline tickets, hotel reservations, buying goods, booking entertainment. “We make ourselves virtually transparent for everyone to see, and in so doing, we allow ourselves to be shaped in unprecedented ways, intentionally or wittingly … we are transformed and shaped into digital subjects” (14). “It’s not so much a question of choice as a feeling of necessity” (19). “For adolescents and young adults especially, it is practically impossible to have a social life, to have friends, to meet up, to go on dates, unless we are negotiating the various forms of social media and mobile technology” (18).

    Most have become dulled by blind faith in markets (the neoliberal mantra: better to let private companies run things than the government) and by fear of terrorism‒dulled into believing that, if we have nothing to hide, then there is nothing to fear (19). This, even though private companies and governments know far more about us than a totalitarian regime such as that of East Germany “could ever have dreamed” (20).

    “We face today, in advanced liberal democracies, a radical new form of power in a completely altered landscape of political and social possibilities” (17). “Those who govern, advertise, and police are dealing with a primary resource‒personal data‒that is being handed out for free, given away in abundance, for nothing” (18).

    According to the book, “There is no conspiracy here, nothing untoward.” But the author probably did not have access to Shawn M. Powers and Michael Jablonski’s The Real Cyberwar: The Political Economy of Internet Freedom (2015), published around the same time as Harcourt’s book, which shows that the current situation was in fact created, or at least facilitated, by deliberate (and open, not secret) actions of the US government, resulting in what Harcourt calls, quoting journalist James Bamford, “a surveillance-industrial empire” (27).

    The observations and conclusions outlined above are meticulously justified, with numerous references, in the numbered chapters of the book. Chapter 1 explains how analogies of the current surveillance regime to Orwell’s 1984 are imperfect because, unlike in Orwell’s imagined world, today most people desire to provide their personal data and do so voluntarily (35). “That is primarily how surveillance works today in liberal democracies: through the simplest desires, curated and recommended to us” (47).

    Chapter 2 explains how the current regime is not really a surveillance state in the classical sense of the term: it is a surveillance society, because it is based on the collaboration of government, the private sector, and people themselves (65, 78-79). Some believe that government surveillance can prevent or reduce terrorist attacks (55-56), never mind that it might violate constitutional rights (56-57) or be ineffective, or that terrorist attacks in liberal democracies have resulted in far fewer fatalities than, say, traffic accidents or opioid overdoses.

    Chapter 3 explains how the current regime is not actually an instantiation of Jeremy Bentham’s Panopticon, because we are not surveilled in order to be punished‒on the contrary, we expose ourselves in order to obtain something we want (90), and we don’t necessarily realize the extent to which we are being surveilled (91). As the book puts it, Google strives “to help people get what they want” by collecting and processing as much personal data as possible (103).

    Chapter 4 explains how narcissism drives the willing exposure of personal data (111). “We take pleasure in watching [our friends], ‘following’ them, ‘sharing’ their information‒even while we are, unwittingly, sharing our every keyboard stroke” (114). “We love watching others and stalking their digital traces” (117).

    Yet opacity is the rule for corporations‒as the book says, quoting Frank Pasquale (124-125), “Internet companies collect more and more data on their users but fight regulations that would let those same users exercise some control over the resulting digital dossiers.” In this context, it is worth noting the recent proposals to the World Trade Organization (analyzed here, here, and here) that would go in the direction favored by dominant corporations.

    The book explains in summary fashion the importance of big data (137-140). For an additional discussion, with extensive references, see section 1 of my submission to the Working Group on Enhanced Cooperation. As the book correctly notes, “In the nineteenth century, it was the government that generated data … But now we have all become our own publicists. The production of data has become democratized” (140).

    Chapter 5 explains how big data, and its analysis, is fundamentally different from the statistics that were collected, analyzed, and published in the past by governments. The goal of statistics is to understand and possibly predict the behavior of some group of people who share some characteristics (e.g. they live in a particular geographical area, or are of the same age). The goal of big data is to target and predict individuals (158, 161-163).

    Chapter 6 explains how we have come to accept the loss of privacy and control of our personal data (166-167). A change in outlook, largely driven by an exaggerated faith in free enterprise (168 and 176), “has made it easier to commodify privacy, and, gradually, to eviscerate it” (170). “Privacy has become a form of private property” (176).

    The book documents well the changes in the US Supreme Court’s views of privacy, which have moved from defending a human right to balancing privacy with national security and commercial interests (172-175). Curiously, the book mentions neither the watershed Smith v. Maryland case, in which the US Supreme Court held that telephone metadata is not protected by the right to privacy, nor the US Electronic Communications Privacy Act, under which many e-mails are not protected either.

    The book mentions the incestuous ties between the intelligence community, telecommunications companies, multinational companies, and military leadership that have facilitated the implementation of the current surveillance regime (178); these ties are exposed and explained in greater detail in Powers and Jablonski’s The Real Cyberwar. This chapter ends with an excellent explanation of how digital surveillance records are in no way comparable to the old-fashioned paper files that were collected in the past (181).

    Chapter 7 explores the emerging dystopia, engendered by the fact that “The digital economy has torn down the conventional boundaries between governing, commerce, and private life” (187). In a trend that should be frightening, private companies now exercise censorship (191), practice data mining on scales that are hard to imagine (194), control worker performance by means beyond the dreams of any Taylorist (196), and even aspire to “predict consumer preferences better than consumers themselves can” (198).

    The size of the data brokerage market is huge, and data on individuals is increasingly used to make decisions about them, e.g. whether they can obtain a loan (198-208). “Practically none of these scores [calculated from personal data] are revealed to us, and their accuracy is often haphazard” (205). As noted above, we face an interdependent web of private and public interests that collect, analyze, refine, and exploit our personal data‒without any meaningful supervision or regulation.

    Chapter 8 explains how digital interactions are reconfiguring our self-images, our subjectivity. We know, albeit at times only implicitly, that we are being surveilled and this likely affects the behavior of many (218). Being deprived of privacy affects us, much as would being deprived of property (229). We have voluntarily given up much of our privacy, believing either that we have no choice but to accept surveillance, or that the surveillance is in our interests (233). So it is our society as a whole that has created, and nurtures, the surveillance regime that we live in.

    As shown in Chapter 9, that regime is a form of digital incarceration. We are surveilled even more closely than are people obliged by court order to wear electronic tracking devices (237). Perhaps a future smart watch will even administer sedatives (or whatever) when it detects, by analyzing our body functions and comparing with profiles downloaded from the cloud, that we would be better off being sedated (237). Or perhaps such a watch will be hijacked by malware controlled by an intelligence service or by criminals, thus turning a seemingly free choice into involuntary constraints (243, 247).

    Chapter 10 shows in detail how, as already noted, the current surveillance regime is not compatible with democracy. The book cites Tocqueville to remind us that democracy can become despotic and result in a situation where “people lose the will to resist and surrender with broken spirit” (255). The book summarily presents well-known data regarding the low voter turnouts in the United States, a topic covered in full detail in Robert McChesney’s Digital Disconnect: How Capitalism is Turning the Internet Against Democracy (2014), which explains how the Internet is having a negative effect on democracy. Yet “it remains the case that the digital transparency and punishment issues are largely invisible to democratic theory and practice” (216).

    So, what is to be done? Chapter 11 extols the revelations made by Edward Snowden and those published by Julian Assange (WikiLeaks). It mentions various useful self-help tools, such as “I Fight Surveillance” and “Security in a Box” (270-271). While those tools are useful, they are not at present used pervasively and thus don’t really affect the current surveillance regime. We need more emphasis on making the tools available and on convincing more people to use them.

    As the book correctly says, an effective measure would be to carry the privatization model to its logical extreme (274): since personal data is valuable, those who use it should pay us for it. As already noted, the industry that is thriving from the exploitation of our personal data is well aware of this potential threat, and has worked hard to obtain binding international norms, in the World Trade Organization, that would enshrine the “free flow of data”‒where “free” in the sense of freedom of information serves as a Trojan horse for the real objective, “free” in the sense of no cost and no compensation for the true owners of the data, we the people. As the book correctly mentions, civil society organizations have resisted this trend and made proposals that go in the opposite direction (276), including a proposal to enshrine the necessary and proportionate principles in international law.

    Chapter 12 concludes the book by pointing out, albeit very succinctly, that mass resistance is necessary, and that it need not be organized in traditional ways: it can be leaderless, diffuse, and pervasive (281). In this context, I refer to the work of the JustNet Coalition and of the fledgling Internet Social Forum (see also here and here).

    Again, this book is essential reading for anybody who is concerned about the current state of the digital world, and the direction in which it is moving.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2o Review Digital Studies magazine.


  • Zachary Loeb — Who Moderates the Moderators? On the Facebook Files

    by Zachary Loeb

    ~

    Speculative fiction is littered with fantastical tales warning of the dangers that arise when things get, to put it amusingly, too big. A researcher loses control of their experiment! A giant lizard menaces a city! Massive computer networks decide to wipe out humanity! A horrifying blob metastasizes as it incorporates all that it touches into its gelatinous self!

    Such stories generally contain at least a faint hint of the absurd. Nevertheless, silly stories can still contain important lessons, and some of the morals that one can pull from such tales are: that big things keep getting bigger, that big things can be very dangerous, and that sometimes things that get very big very fast wind up doing a fair amount of damage as what appears to be controlled growth is revealed to actually be far from managed. It may not necessarily always be a case of too big, as in size, but speculative fiction features no shortage of tragic characters who accidentally unleash some form of horror upon an unsuspecting populace because things were too big for that sorry individual to control. The mad scientist has a sad corollary in the figure of the earnest scientist who wails “I meant well” while watching their creation slip free from their grasp.

    Granted, if you want to see such a tale of the dangers of things getting too big and the desperate attempts to maintain some sort of control you don’t need to go looking for speculative fiction.

    You can just look at Facebook.

    With its publication of The Facebook Files, The Guardian has pried back the smiling façade of Zuckerberg’s monster to reveal a creature that an overwhelmed staff is desperately trying to contain with less than clear insight into how best to keep things under control. Parsing through a host of presentations and guidelines that are given to Facebook’s mysterious legion of content moderators, The Facebook Files provides insight into how the company determines what is and is not permitted on the website. It’s a tale that is littered with details about the desperate attempt to screen things that are being uploaded at a furious rate, with moderators often only having a matter of seconds in which they can make a decision as to whether or not something is permitted. It is a set of leaks that are definitely worth considering, as they provide an exposé of the guidelines Facebook moderators use when considering whether things truly qualify as revenge porn, child abuse, animal abuse, self-harm, unacceptable violence, and more. At the very least, the Facebook Files are yet another reminder of the continuing validity of Erich Fromm’s wise observation:

    What we use is not ours simply because we use it. (Fromm 2001, 225)

    In considering the Facebook Files it is worthwhile to recognize that the moderators are special figures in this story – they are not really the villains. The people working as actual Facebook moderators are likely not the same people who truly developed these guidelines. In truth, they likely weren’t even consulted. Furthermore, the moderators are almost certainly not the high-profile Facebook executives espousing techno-utopian ideologies in front of packed auditoriums. To put it plainly, Mark Zuckerberg is not checking to see if the thousands of photos being uploaded every second fit within the guidelines. In other words, having a measure of sympathy for the Facebook moderators who spend their days judging a mountain of (often disturbing) content is not the same thing as having any sympathy for Facebook (the company) or for its figureheads. Furthermore, Facebook has already automated a fair amount of the moderating process, and it is more than likely that Facebook would love to be able to ditch all of its human moderators in favor of an algorithm. Given the rate at which it expects them to work it seems that Facebook already thinks of its moderators as being little more than cogs in its vast apparatus.

    That last part helps point to one of the reasons why the Facebook Files are so interesting: they provide a very revealing glimpse of the type of morality that a machine might be programmed to follow. The Facebook Files – indeed the very idea of Facebook moderators – are a massive hammer that smashes to bits the idea that technological systems are somehow neutral, for they put into clear relief the ways in which people are involved in shaping the moral guidelines to which a technological system adheres. The case of what is and is not allowed on Facebook is a story playing out in real time of a company (staffed by real live humans) trying to structure the morality of a technological space. Even once all of this moderating is turned over to an algorithm, these Files will serve as a reminder that the system is acting in accordance with a set of values and views that were programmed into it by people. And this whole tale of Facebook’s attempts to moderate sensitive or disturbing content points to the fact that, as many a trained ethicist will attest, moral matters are often rather complex – which is a challenge for Facebook, as algorithms tend to do better with “yes” and “no” than with matters that devolve into a lot of complex philosophical argumentation.

    Thus, while a blanket “zero nudity” policy might be crude, prudish, and simplistic – it still represents a fairly easy way to separate allowed content from forbidden content. Similarly, a “zero violence” policy runs the risk of hiding the consequences of violence, masking the gruesome realities of war, and covering up a lot of important history – but it makes it easy to say “no videos of killings or self-harm are allowed at all.” Likewise, a strong “absolutely no threats of any sort” policy would make it so that “someone shoot [specific political figure]” and “let’s beat up people with fedoras” would both be banned. By trying to parse these things Facebook has placed its moderators in tricky territory – and the guidelines it provides them with are not necessarily the clearest. Had Facebook maintained a strict “black and white” version of what’s permitted and not permitted it could have avoided the swamp through which it is now trudging with mixed results. Again, it is fair to have some measure of sympathy for the moderators here – they did not set the rules, but they will certainly be blamed, shamed, and likely fired for any failures to adhere to the letter of Facebook’s confusing law.

    Part of the problem that Facebook has to contend with is clearly the matter of free speech. There are certainly some who will cry foul at any attempt by Facebook to moderate content – crying out that such things are censorship. Still others will scoff at the idea of free speech as applied to Facebook, seeing as it is a corporate platform and all speech that takes place on the site therefore already exists in a controlled space. A person may live in a country where they have a government protected right to free speech – but Facebook has no such obligation to its users. There is nothing preventing Facebook from radically changing its policies about what is permissible. If Facebook decided tomorrow that no content related to, for example, cookies was to be permitted, it could make and enforce that decision. And the company could make that decision regarding things much less absurd than cookies – if Facebook wanted to ban any content related to a given protest movement it would be within its rights to do so (which is not to say that would be good, but to say that it would be possible). In short, if you use Facebook you use it in accordance with its rules; the company does not particularly care what you think. And if you run afoul of one of its moderators you may well find your account suspended – you can cry “free speech” but Facebook will retort with “you agreed to our terms of use, Facebook is a private online space.” Here, though, a person may try to fire back at Facebook that in the 21st century, to a large extent, social media platforms like Facebook have become a sort of new public square.

    And, yet again, that is part of the reason why this is all so tricky.

    Facebook clearly wants to be the new “public square” – it wants to be the space where people debate politics, where candidates have forums, and where activists organize. Yet it wants all of these “public” affairs to take place within its own enclosed “private” space. There is no real democratic control of Facebook, the company may try to train its moderators to respect various local norms but the people from those localities don’t get to have a voice in determining what is and isn’t acceptable. Facebook is trying desperately to have it all ways – it wants to be the key space of the public sphere while simultaneously pushing back against any attempts to regulate it or subject it to increased public oversight. As lackluster and problematic as the guidelines revealed by the Facebook Files are, they still demonstrate that Facebook is trying (with mixed results) to regulate itself so that it can avoid being subject to further regulation. Thus, free speech is both a sword and a shield for Facebook – it allows the company to hide from the accusations that the site is filled with misogyny and xenophobia behind the shield of “free speech” even as the site can pull out its massive terms of service agreement (updated frequently) to slash users with the blade that on the social network there is no free speech only Facebook speech. The speech that Facebook is most concerned with is its own, and it will say and do what it needs to say and do, to protect itself from constraints.

    Yet, to bring it back to the points with which this piece began, many of the issues that the Facebook Files reveal have a lot to do with scale. Sorting out the nuance of an image or a video can take longer than the paltry few seconds most moderators are able to allot to each image or video. And it further seems that some of the judgments that Facebook is asking its moderators to make have less to do with morality or policies than with the huge question of how the moderator can possibly know whether something is in accordance with the policies at all. How does a moderator not based in a community really know if something is up to a community’s standard? Facebook is hardly some niche site with a small user base and devoted cadre of moderators committed to keeping the peace – its moderators are overworked members of the cybertariat (a term borrowed from Ursula Huws), and the community they serve is Facebook, not the communities from which its users hail. Furthermore, some of the more permissive policies – such as allowing images of animal abuse – couched under the premise that they may help to alert the authorities, seem like more of an excuse than an admission of responsibility. Facebook has grown quite large, and it continues to grow. What it is experiencing is not so much a case of “growing pains” as it is a case of the pains that are inflicted on a society when something is allowed to grow out of control. Every week it seems that Facebook becomes more and more of a monopoly – but there seems to be little chance that it will be broken up (and it is unclear what that would mean or look like).

    Facebook is the science project of the researcher, one that is always about to grow too big and slip out of control, and the Facebook Files reveal the company’s frantic attempt to keep the beast from throwing off its shackles altogether. And the danger there, from Facebook’s stance, is that – as in all stories where something grows too big and gets out of control – the point at which it loses control is the point at which governments step in to try to restore order. What that would look like in this case is quite unclear, and while the point is not to romanticize regulation, the Facebook Files help raise the question of who is currently doing the regulating and how they are doing it. That Facebook is having such a hard time moderating content on the site is actually a pretty convincing argument that when a site gets too big, the task of carefully moderating things becomes nearly impossible.

    To deny that Facebook has significant power and influence is to deny reality. While it’s true that Facebook can only set the policy for the fiefdoms it controls, it is worth recognizing that many people spend a heck of a lot of time ensconced within those fiefdoms. The Facebook Files are not exactly a shocking revelation showing a company that desperately needs some serious societal oversight – rather what is shocking about them is that they reveal that Facebook has been allowed to become so big and so powerful without any serious societal oversight. The Guardian’s article leading into the Facebook Files quotes Monika Bickert, Facebook’s head of global policy management, as saying that Facebook is:

    “not a traditional technology company. It’s not a traditional media company. We build technology, and we feel responsible for how it’s used.”

    But a question lingers as to whether or not these policies are really reflective of responsibility in any meaningful sense. Facebook may not be a “traditional” company in many respects, but one area in which it remains quite hitched to tradition is in holding to a value system where what matters most is the preservation of the corporate brand. To put it slightly differently, there are few things more “traditional” than the monopolistic vision of total technological control reified in Facebook’s every move. In his classic work on the politics of technology, The Whale and the Reactor, Langdon Winner emphasized the need to seriously consider the type of world that technological systems were helping to construct. As he put it:

    We should try to imagine and seek to build technical regimes compatible with freedom, social justice, and other key political ends…If it is clear that the social contract implicitly created by implementing a particular generic variety of technology is incompatible with the kind of society we deliberately choose—that is, if we are confronted with an inherently political technology of an unfriendly sort—then that kind of device or system ought to be excluded from society altogether. (Winner 1989, 55)

    The Facebook Files reveal the type of world that Facebook is working tirelessly to build. It is a world where Facebook is even larger and even more powerful – a world in which Facebook sets the rules and regulations. In which Facebook says “trust us” and people are expected to obediently go along.

    Yes, Facebook needs content moderators, but it also seems that it is long-past due for there to be people who moderate Facebook. And those people should not be cogs in the Facebook machine.

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently working towards a PhD in the History and Sociology of Science department at the University of Pennsylvania. His research areas include media refusal and resistance to technology, ideologies that develop in response to technological change, and the ways in which technology factors into ethical philosophy – particularly with regard to the ways in which Jewish philosophers have written about ethics and technology. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck, where an earlier version of this post first appeared, and is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay
    _____

    Works Cited

    • Fromm, Erich. 2001. The Fear of Freedom. London: Routledge Classics.
    • Winner, Langdon. 1989. The Whale and the Reactor. Chicago: The University of Chicago Press.
  • Daniel Greene – Digital Dark Matters

    Daniel Greene – Digital Dark Matters

    a review of Simone Browne, Dark Matters: On the Surveillance of Blackness (Duke University Press, 2015)

    by Daniel Greene

    ~

    The Book of Negroes was the first census of black residents of North America. In it, the British military took down the names of some three thousand ex-slaves between April and November of 1783, alongside details of appearance and personality, destination and, if applicable, previous owner. The self-emancipated—some free, some indentured to English or German soldiers—were seeking passage to Canada or Europe, and lobbied the defeated British Loyalists fleeing New York City for their place in the Book. The Book of Negroes thus functioned as “the first government-issued document for state-regulated migration between the United States and Canada that explicitly linked corporeal markers to the right to travel” (67). An index of slave society in turmoil, its data fields were populated with careful gradations of labor power, denoting the value of black life within slave capitalism: “nearly worn out,” “healthy negress,” “stout labourer.”  Much of the data in The Book of Negroes was absorbed from so-called Birch Certificates, issued by a British Brigadier General of that name, which acted as passports certifying the freedom of ex-slaves and their right to travel abroad. The Certificates became evidence submitted by ex-slaves arguing for their inclusion in the Book of Negroes, and became sites of contention for those slave-owners looking to reclaim people they saw as property.

    If, as Simone Browne argues in Dark Matters: On the Surveillance of Blackness, “the Book of Negroes [was] a searchable database for the future tracking of those listed in it” (83), the details of preparing, editing, monitoring, sorting and circulating these data become direct matters of (black) life and death. Ex-slaves would fight for their legibility within the system through their use of Birch Certificates and the like; but they had often arrived in New York in the first place through a series of fights to remain illegible to the “many start-ups in slave-catching” that arose to do the work of distant slavers. Aliases, costumes, forged documents and the like were on the one hand used to remain invisible to the surveillance mechanisms geared towards capture, and on the other hand used to become visible to the surveillance mechanisms—like the Book—that could potentially offer freedom. Those ex-slaves who failed to appear as the right sort of data were effectively “put on a no-sail list” (68), and either held in New York City or re-rendered into property and delivered back to the slave-owner.

    Start-ups, passports, no-sail lists, databases: These may appear anachronistic at first, modern technological thinking out of sync with colonial America. But Browne deploys these labels with care and precision, like much else in this remarkable book. Dark Matters reframes our contemporary thinking about surveillance, and digital media more broadly, through a simple question with challenging answers: What if our mental map of the global surveillance apparatus began not with 9/11 but with the slave ship? Surveillance is considered here not as a specific technological development but as a practice of tracking people and putting them into place. Browne demonstrates how certain people have long been imagined as out of place and how technologies of control and order were developed in order to diagnose, map, and correct these conditions: “Surveillance is nothing new to black folks. It is a fact of antiblackness” (10). That this “fact” is often invisible even in our studies of surveillance and digital media more broadly speaks, perversely, to the power of white supremacy to structure our vision of the world. Browne’s apparent anachronisms make stranger the techniques of surveillance with which we are familiar, revealing the dark matter that has structured their use and development this whole time. Though this dark matter is difficult to visualize, Browne shows us how to trace it through its effects: the ordering of people into place, and the escape from that order through “freedom acts” of obfuscation, sabotage, and trickery.

    This then is a book about new (and very old) methods of research in surveillance studies in particular, and digital studies in general, centered in black studies – particularly the work of critical theorists of race such as Saidiya Hartman and Sylvia Wynter who find in chattel slavery a prototypical modernity. More broadly, it is a book about new ways of engaging with our technocultural present, centered in the black diasporic experience of slavery and its afterlife. Frantz Fanon is a key figure throughout. Browne introduces us to her own approach through an early reflection on the revolutionary philosopher’s dying days in Washington, DC, where he was overcome with paranoia over the very real surveillance to which he suspected he was subjected. Browne’s FOIA requests to the CIA regarding their tracking of Fanon during his time at the National Institutes of Health Clinical Center returned only a newspaper clipping, a book review, and a heavily redacted FBI memo reporting on Fanon’s travels. So she digs further into the archive, finding in Fanon’s lectures at the University of Tunis, delivered in the late 1950s after his expulsion from Algeria by French colonial authorities, a critical exploration of policing and surveillance. Fanon’s psychiatric imagination, granting such visceral connection between white supremacist institutions and lived black experience in The Wretched of the Earth, here addresses the new techniques of “control by quantification” – punch clocks, time sheets, phone taps, and CCTV – in factories and department stores, and the alienation engendered in the surveilled.

    Browne’s recovery of this work grounds a creative extension of Fanon’s thinking into surveillance practices and surveillance studies. From his concept of “epidermalization”—“the imposition of race on the body” (7)—Browne builds a theory of racializing surveillance. Like many other key terms in Dark Matters, this names an empirical phenomenon—the crafting of racial boundaries through tracking and monitoring—and critiques the “absented presence” (13) of race in surveillance studies. Its opposition is found in dark sousveillance, a revision of Steve Mann’s term for watching the watchers that, again, describes both the freedom acts of black folks against a visual field saturated with racism, as well as an epistemology capable of perceiving, studying, and deconstructing apparatuses of racial surveillance.

    Each chapter of Dark Matters presents a different archive of racializing surveillance paired with reflections on black cultural production Browne reads as dark sousveillance. At each turn, Browne encourages us to see in slavery and its afterlife new modes of control, old ways of studying them, and potential paths of resistance. Her most direct critique of surveillance studies comes in Chapter 1’s precise exegesis of the key ideas that emerge from reading Jeremy Bentham’s plans for the Panopticon and Foucault’s study of it—the signal archive and theory of the field—against the plans for the slave ship Brookes. It turns out Bentham travelled on a ship transporting slaves during the trip where he sketched out the Panopticon, a model penitentiary wherein, through the clever use of lights, mirrors, and partitions, prisoners are totally isolated from one another and never sure whether they are being monitored or not. The archetype for modern power as self-discipline is thus nurtured, counter to its own telling, alongside sovereign violence. Browne’s reading of archives from the slave ship, the auction block, and the plantation reveal the careful biopolitics that created “blackness as a saleable commodity in the Western Hemisphere” (42). She asks how “the view from ‘under the hatches’” of Bentham’s Turkish ship, transporting, in his words, “18 young negresses (slaves),” might change our narrative about the emergence of disciplinary power and the modern management of life as a resource. It becomes clear that the power to instill self-governance through surveillance did not subordinate but rather partnered with the brutal spectacle of sovereign power that was intended to educate enslaved people on the limits of their humanity. 
This correction to the Foucauldian narrative is sorely necessary in a field, and a general political conversation about surveillance, that too often focuses on the technical novelty of drones, to give one example, without a connection to a generation learning to fear the skies.

    “Stowage of the British slave ship Brookes under the regulated slave trade act of 1788.” Illustration. 1788. Library of Congress Rare Book and Special Collections Division, Washington, D.C.

    These sorts of theoretical course corrections are among the most valuable lessons in Dark Matters. There is fastidious empirical work here, particularly in Chapter 2’s exploration of the Book of Negroes and colonial New York’s lantern laws requiring all black and indigenous people to bear lights after dark. But this empirical work is not the book’s focus, nor its main promise. That promise comes in prompting new empirical and political questions about how we see surveillance and what it means, and for whom, through an archaeology of black life under surveillance (indeed, Chapter 4, on airport surveillance, is the one I find weakest largely because it abandons this archaeological technique and focuses wholly on the present). Chapter 1’s reading of Charles William Tait’s prescriptions for slave management, for example, is part of a broader turn in the study of the history of capitalism where the roots of modern business practices like data-driven human resource management are traced to the supposedly pre-modern slave economy. Chapter 3’s assertion that slave branding “was a biometric technology…a measure of slavery’s making, marking, and marketing of the black subject as commodity” (91) does similar work, making strange the contemporary security technologies that purport to reveal racial truths which unwilling subjects do not give up. Facial recognition technologies and other biometrics are calibrated based on what Browne calls a “prototypical whiteness…privileged in enrollment, measurement, and recognition processes…reliant upon dark matter for its own meaning” (162). Particularly in the context of border control, these default settings reveal the calculations built into our security technologies regarding who “counts” enough to be recognized, calculations grounded in an unceasing desire for new means with which to draw clear-cut racial boundaries.

    The point here is not that a direct line of technological development can be drawn from brands to facial recognition or from lanterns to ankle bracelets. Rather, if racism, as Ruth Wilson Gilmore argues, is “the state-sanctioned or extralegal production and exploitation of group-differentiated vulnerability to premature death,” then what Browne points to are methods of group differentiation, the means by which the value of black lives is calculated and how those calculations are stored, transmitted, and concretized in institutional life. If Browne’s cultural studies approach neglects a sustained empirical engagement with a particular mode of racializing surveillance – say, the uneven geography produced by the Fugitive Slave Act, mentioned in passing in relation to “start-ups in slave catching” – it is because she has taken on the unenviable task of shifting the focus of whole fields to dark matter previously ignored, opening a series of doors through which readers can glimpse the technologies that make race.

    Here then is a space cleared for surveillance studies, and digital studies more broadly, in an historical moment when so many are loudly proclaiming that Black Lives Matter, when the dark sousveillance of smartphone recordings has made the violence of institutional racism impossible to ignore. Work in digital studies has readily and repeatedly unearthed the capitalist imperatives built into our phones, feeds, and friends lists. Shoshana Zuboff’s recent work on “surveillance capitalism” is perhaps a bellwether here: a rich theorization of the data accumulation imperative that transforms intra-capitalist competition, the nature of the contract, and the paths of everyday life. But her account of the growth of an extractive data economy that leads to a Big Other of behavior modification does not so far have a place for race.

    This is not a call on my part to sprinkle a missing ingredient atop a shoddy analysis in order to check a box. Zuboff is critiqued here precisely because she is among our most thoughtful, careful critics of contemporary capitalism. Rather, Browne’s account of surveillance capitalism – though she does not call it that – shows that race does not need to be introduced to the critical frame from outside. That dark matter has always been present, shaping what is visible even if it goes unseen itself. This manifests in at least two ways in Zuboff’s critique of the Big Other. First, her critique of Google’s accumulation of “data exhaust” is framed primarily as a “pull” of ever more sites and sensors into Google’s maw, passively given up by users. But there is a great deal of “push” here as well. The accumulation of consumable data also occurs through the very human work of solving CAPTCHAs and scanning books. The latter is the subject of an iconic photo that shows the brown hand of a Google Books scanner – a low-wage subcontractor, index finger wrapped in plastic to avoid cuts from a day of page-turning – caught on a scanned page. Second, for Zuboff part of the frightening novelty of Google’s data extraction regime is its “formal indifference” to individual users, as well as to existing legal regimes that might impede the extraction of population-scale data. This, she argues, stands in marked contrast to the midcentury capitalist regimes which embraced a degree of democracy in order to prop up both political legitimacy and effective demand. But this was a democratic compromise limited in time and space. Extractive capitalist regimes of the past and present, including those producing the conflict minerals so necessary for hardware running Google services, have been marked by, at best, formal indifference in the North to conditions in the South.
An analysis of surveillance capitalism’s struggle for hegemony would be greatly enriched by a consideration of how industrial capitalism legitimated itself in the metropole at the expense of the colony. Nor is this racial-economic dynamic and its political legitimation purely a cross-continental concern. US prisons have long extracted value from the incarcerated, racialized as second-class citizens. Today this practice continues, but surveillance technologies like ankle bracelets extend this extraction beyond prison walls, often at parolees’ expense.

    A Google Books scanner’s hand, caught working on W.E.B. Du Bois’ The Souls of Black Folk. Via The Art of Google Books.

    Capitalism has always, as Browne’s notes on plantation surveillance make clear, been racial capitalism. Capital enters the world arrayed in the blood of primitive accumulation, and reproduces itself in part through the violent differentiation of labor powers. While the accumulation imperative has long been accepted as a value shaping media’s design and use, it is unfortunate that race has largely entered the frame of digital studies, and particularly, as Jessie Daniels argues, internet studies, through a study of either racial variables (e.g., “race” inheres to the body of the nonwhite person and causes other social phenomena) or racial identities (e.g., race is represented through minority cultural production, racism is produced through individual prejudice). There are perhaps good institutional reasons for this framing, owing to disciplinary training and the like, beyond the colorblind political ethic of much contemporary liberalism. But it has left us without digital stories of race (although there are certainly exceptions, particularly in the work of writers like Lisa Nakamura and her collaborators), perceived to be a niche concern, on par with our digital stories of capitalism—much less digital stories of racial capitalism.

    Browne provides a path forward for a study of race and technology more attuned to institutions and structures, to the long shadows old violence casts on our daily, digital lives. This slim, rich book is ultimately a reflection on method, on learning new ways to see. “Technology is made of people!” is where so many of our critiques end, discovering, once again, the values we build into machines. This is where Dark Matters begins. And it proceeds through slave ships, databases, branding irons, iris scanners, airports, and fingerprints to map the built project of racism and the work it takes to pass unnoticed in those halls or steal the map and draw something else entirely.

    _____

    Daniel Greene holds a PhD in American Studies from the University of Maryland. He is currently a Postdoctoral Researcher with the Social Media Collective at Microsoft Research, studying the future of work and the future of unemployment. He lives online at dmgreene.net.

    Back to the essay

  • Drones

    Drones

    David Golumbia and David Simpson begin a conversation, inviting comment below or via email to boundary 2:

    What are we talking about when we talk about drones? Is it that they carry weapons (true of only a small fraction of UAVs), that they have remote, mobile surveillance capabilities (true of most UAVs, but also of many devices not currently thought of as drones), or that they have or may someday have forms of operational autonomy (a goal of many forms of robotics research)? Is it the technology itself, the fact that it is currently being deployed largely by the world’s dominant powers, or the way it is being deployed? Is it the use of drones in specific military contexts, or the existence of those military conflicts per se (that is, if we endorsed a particular conflict, would the use of drones in that scenario be acceptable)? Is it that military use of drones leads to civilian casualties, despite the fact that other military tactics almost certainly lead to many more? (The total number of all persons, combatant and non-combatant, killed to date by US drone operations worldwide is estimated at under 4,000; the number of civilian casualties in the Iraq conflict alone exceeds 100,000 even by conservative estimates, and may be 500,000 or more.) That reduction in total casualties forms part of the argument used by some military and international law analysts to suggest that drone use is not merely acceptable but actually required under international law, which mandates that militaries use the least amount of lethal force available to them that will effectively achieve their goals. If we object to drones based on their use in targeted killings, do we accept their use for surveillance? If we object only to their use in targeted killing, does that objection proceed from the fact that drones fly, or do we actually object to all forms of automated or partly-automated lethal force, along the lines of the Stop Killer Robots campaign, whose scope goes well beyond drones and yet does not include non-lethal drones?
How do we define drones so as to capture what is objectionable about them on humanitarian and civil society grounds, given how rapidly the technology is advancing and how difficult it already is to distinguish some drones from other forms of technology, especially for surveillance? What do we do about the proliferating “positive” use cases for drones (journalism, remote information about forest fires and other environmental problems, for example), which are clearly being developed in part so as to sell drone technology in general to the public, but at least in some cases appear to describe vital functions that other technology cannot fulfill?

    David Golumbia

    _____

    What resources can we call upon, invent or reinvent in order to bring effective critical attention to the phenomenon of drone warfare? Can we revivify the functions of witness and testimony to protest or to curtail the spread of robotic lethal violence? What alliances can be pursued with the radical journalism sector (Medea Benjamin, Jeremy Scahill)? Is drone warfare inevitably implicated in a seamlessly continuous surveillance culture wherein all information is or can be weaponized? A predictable development in the command-control-communication-intelligence syndrome articulated some time ago by Donna Haraway? Can we hope to devise any enforceable boundaries between positive and destructive uses of the technology? Does it bring with it a specific aesthetics, whether for those piloting the drones or those on the receiving end? What is the profile of psychological effects (disorders?) among those observing and then killing at a distance? And what are the political obligations of a Congress and a Presidency able to turn to drone technology as arguably the most efficient form yet devised for deploying state terrorism? What are the ethical obligations of a superpower (or indeed a local power) that can now wage war with absolutely no risk to its own combatants?

    David Simpson

  • All Hitherto Existing Social Media

    All Hitherto Existing Social Media

    a review of Christian Fuchs, Social Media: A Critical Introduction (Sage, 2013)

    by Zachary Loeb

    ~
    Legion are the books and articles describing the social media that has come before. Yet the tracts focusing on Friendster, LiveJournal, or MySpace now appear as throwbacks, nostalgically immortalizing the internet that was and is now gone. On the cusp of the next great amoeba-like expansion of the internet (wearable technology and the “internet of things”) it is a challenging task to analyze social media as a concept while recognizing that the platforms being focused upon—regardless of how permanent they seem—may go the way of Friendster by the end of the month. Granted, social media (and the companies whose monikers act as convenient shorthand for it) is an important topic today. Those living in highly digitized societies can hardly avoid the tendrils of social media (even if a person does not use a particular platform it may still be tracking them), but this does not mean that any of us fully understand these platforms, let alone have a critical conception of them. It is into this confused and confusing territory that Christian Fuchs steps with his Social Media: A Critical Introduction.

    It is a book ostensibly targeted at students. Though when it comes to social media—as Fuchs makes clear—everybody has quite a bit to learn.

    By deploying an analysis couched in Marxist and Critical Theory, Fuchs aims not simply to describe social media as it appears today, but to consider its hidden functions and biases, and along the way to describe what social media could become. The goal of Fuchs’s book is to provide readers—the target audience is students, after all—with the critical tools and proper questions with which to approach social media. While Fuchs devotes much of the book to discussing specific platforms (Google, Facebook, Twitter, WikiLeaks, Wikipedia), these case studies are used to establish a larger theoretical framework which can be applied to social media beyond these examples. Affirming the continued usefulness of Marxist and Frankfurt School critiques, Fuchs defines the aim of his text as being “to engage with the different forms of sociality on the internet in the context of society” (6) and emphasizes that the “critical” questions to be asked are those that “are concerned with questions of power” (7).

    Thus a critical analysis of social media demands a careful accounting of the power structures involved not just in specific platforms, but in the larger society as a whole. So though Fuchs regularly returns to the examples of the Arab Spring and the Occupy Movement, he emphasizes that the narratives that dub these “Twitter revolutions” often come from a rather non-critical and generally pro-capitalist perspective, one that fails to adequately embed uses of digital technology in their larger contexts.

    Social media is portrayed as an example, like other media, of “techno-social systems” (37) wherein the online platforms may receive the most attention but where the oft-ignored layer of material technologies is equally important. Social media, in Fuchs’s estimation, developed and expanded with the growth of “Web 2.0” and functions as part of the rebranding effort that revitalized (made safe for investments) the internet after the initial dot-com bubble. As Fuchs puts it, “the talk about novelty was aimed at attracting novel capital investments” (33). What makes social media a topic of such interest—and invested with so much hope and dread—is the degree to which social media users are considered active creators of content instead of simply its consumers (Fuchs follows much recent scholarship and industry marketing in using the term “prosumers” to describe this phenomenon; the term originates from the 1970s business-friendly futurology of Alvin Toffler’s The Third Wave). Social media, in Fuchs’s description, represents a shift in the way that value is generated through labor, and as a result an alteration in the way that large capitalist firms appropriate surplus value from workers. The social media user is not laboring in a factory, but with every tap of the button they are performing work from which value (and profit) is skimmed.

    Without disavowing the hope that social media (and by extension the internet) has liberating potential, Fuchs emphasizes that such hopes often function as a way of hiding profit motives and capitalist ideologies. It is not that social media cannot potentially lead to “participatory democracy” but that “participatory culture” does not necessarily have much to do with democracy. Indeed, as Fuchs humorously notes: “participatory culture is a rather harmless concept mainly created by white boys with toys who love their toys” (58). This “love their toys” sentiment is part of the ideology that undergirds much of the optimism around social media—which allows for complex political occurrences (such as the Arab Spring) to be reduced to events that can be credited to software platforms.

    What Fuchs demonstrates at multiple junctures is the importance of recognizing that the usage of a given communication tool by a social movement does not mean that this tool brought about the movement: intersecting social, political and economic factors are the causes of social movements. In seeking to provide a “critical introduction” to social media, Fuchs rejects arguments that he sees as not suitably critical (including those of Henry Jenkins and Manuel Castells), arguments that at best have been insufficient and at worst have been advertisements masquerading as scholarship.

    Though the time people spend on social media is often portrayed as “fun” or “creative,” Fuchs recasts these tasks as work in order to demonstrate how that time is exploited by the owners of social media platforms. By clicking on links, writing comments, performing web searches, sending tweets, uploading videos, and posting on Facebook, social media users are performing unpaid labor that generates a product (in the form of information about users) that can then be sold to advertisers and data aggregators; this sale generates profits for the platform owner which do not accrue back to the original user. Though social media users are granted “free” access to a service, it is their labor on the platform that gives it any value—Facebook and Twitter would not have a commodity to sell to advertisers if they did not have millions of users working for them for free. As Fuchs describes it, “the outsourcing of work to consumers is a general tendency of contemporary capitalism” (111).

    screen shot of Karl Marx Community Facebook Page

    While miners of raw materials and workers in assembly plants are still brutally exploited—and this unseen exploitation forms a critical part of the economic base of computer technology—the exploitation of social media users is given a gloss of “fun” and “creativity.” Fuchs does not suggest that social media use is fully akin to working in a factory, but that users carry the factory with them at all times (a smart phone, for example) and are creating surplus value as long as they are interacting with social media. Instead of being a post-work utopia, Fuchs emphasizes that “the existence of the internet in its current dominant capitalist form is based on various forms of labour” (121) and the enrichment of internet firms is reliant upon the exploitation of those various forms of labor—central amongst these being the social media user.

    Fuchs considers five specific platforms in detail so as to illustrate not simply the current state of affairs but also to point towards possible alternatives. Fuchs analyzes Google, Facebook, Twitter, WikiLeaks and Wikipedia as case studies of trends to encourage and trends of which to take wary notice. In his analysis of the three corporate platforms (Google, Facebook and Twitter) Fuchs emphasizes the ways in which these social media companies (and the moguls who run them) have become wealthy and powerful by extracting value from the labor of users and by subjecting users to constant surveillance. The corporate platforms give Fuchs the opportunity to consider various social media issues in sharper relief: labor and monopolization in terms of Google, surveillance and privacy issues with Facebook, and the potential for an online public sphere with Twitter. Despite his criticisms, Fuchs does not dismiss the value and utility of what these platforms offer, as is captured in his claim that “Google is at the same time the best and the worst thing that has ever happened on the internet” (147). The corporate platforms’ successes are owed at least partly to their delivering desirable functions to users. The corrective for which Fuchs argues is increased democratic control of these platforms—for the labor to be compensated and for privacy to pertain to individual humans instead of to businesses’ proprietary methods of control. Indeed, one cannot get far with a “participatory culture” unless there is a similarly robust “participatory democracy,” and part of Fuchs’s goal is to show that these are not at all the same.

    WikiLeaks and Wikipedia both serve as real examples that demonstrate the potential of an “alternative” internet for Fuchs. Though these Wiki platforms are not ideal, they contain within themselves the seeds for their own adaptive development (“WikiLeaks is its own alternative”—232), and serve for Fuchs as proof that the internet can move in a direction akin to a “commons.” As Fuchs puts it, “the primary political task for concerned citizens should therefore be to resist the commodification of everything and to strive for democratizing the economy and the internet” (248), a goal he sees as at least partly realized in Wikipedia.

    While the outlines of the internet’s future may seem to have been written already, Fuchs’s book is an argument in favor of the view that the code can still be altered. A different future relies upon confronting the reality of the online world as it currently is and recognizing that the battles waged for control of the internet are proxy battles in the conflict between capitalism and an alternative approach. In the conclusion of the book Fuchs eloquently condenses his view and the argument that follows from it in two simple sentences: “A just society is a classless society. A just internet is a classless internet” (257). It is a sentiment likely to spark an invigorating discussion, be it in a classroom, at a kitchen table, or in a café.

    * * *

    While Social Media: A Critical Introduction is clearly intended as a textbook (each chapter ends with a “recommended readings and exercises” section), it is written in an impassioned and engaging style that will appeal to anyone who would like to see a critical gaze turned towards social media. Fuchs structures his book so that his arguments will remain relevant even if some of the platforms about which he writes vanish. Even the chapters in which Fuchs focuses on a specific platform are filled with larger arguments that transcend that platform. Indeed, one of the primary strengths of Social Media is that Fuchs skillfully uses the familiar examples of social media platforms as a way of introducing the reader to complex theories and thinkers (from Marx to Habermas).

    Whereas Fuchs accuses some other scholars of subtly hiding their ideological agendas, no such argument can be made regarding Fuchs himself. Social Media is a Marxist critique of the major online platforms—not simply because Fuchs deploys Marx (and other Marxist theorists) to construct his arguments, but because of his assumption that the desirable alternative for the internet is part and parcel of a desirable alternative to capitalism. Such a sentiment can be found at several points throughout the book, but is made particularly evident by lines such as these from the book’s conclusion: “There seem to be only two options today: (a) continuance and intensification of the 200-year-old barbarity of capitalism or (b) socialism” (259)—it is a rather stark choice. It is precisely due to Fuchs’s willingness to stake out, and stick to, such political positions that this text is so effective.

    And yet, it is the very allegiance to such positions that also presents something of a problem. While much has been written of late—in the popular press as well as by scholars—regarding issues of privacy and surveillance, Fuchs’s arguments about the need to consider users as exploited workers will likely strike many readers as new, and thus worthwhile in their novelty if nothing else. Granted, to fully go along with Fuchs’s critique requires readers to already be in agreement with, or at least relatively sympathetic to, Fuchs’s political and ethical positions. This is particularly true as Fuchs excels at making an argument about media and technology, but devotes significantly fewer pages to ethical argumentation.

    The lines (quoted earlier) “A just society is a classless society. A just internet is a classless internet” (257) serve as much as a provocation as a conclusion. For those who subscribe to a similar notion of “a just society,” Fuchs’s book will likely function as an important guide to thinking about the internet; however, to those whose vision of “a just society” is fundamentally different from his, Fuchs’s book may be less than convincing. Social Media does not present a complete argument about how one defines a “just society.” Indeed, the danger may be that Fuchs’s statements in praise of a “classless society” may lead some to dismiss his arguments regarding the way in which the internet has replicated a “class society.” Likewise, it is easy to imagine a retort being offered that the new platforms of “the sharing economy” represent the birth of this “classless society” (though it is easy to imagine Fuchs pointing out, as have other critics from the left, that the “sharing economy” is simply more advertising lingo being used to hide the same old capitalist relations). This represents something of a peculiar challenge when it comes to Social Media, as the political commitment of the book is simultaneously what makes it so effective and what threatens the book’s potential political efficacy.

    Thus Social Media presents something of a conundrum: how effective is a critical introduction if its conclusion offers a heads-and-tails choice between the “barbarity of capitalism or…socialism”? Such a choice feels slightly as though Fuchs is begging the question. While it is curious that Fuchs does not draw upon critical theorists’ writings about the culture industry, the main issues with Social Media seem to be reflections of this black-and-white choice. Thus it is something of a missed chance that Fuchs does not draw upon some of the more serious critics of technology (such as Ellul or Mumford)—whose hard-edged skepticism would nevertheless likely not accept Fuchs’s Marxist orientation. Such thinkers might provide a very different perspective on the choice between “capitalism” and “socialism”—arguing that “technique” or “the megamachine” can function quite effectively in either. Though Fuchs draws heavily upon thinkers in the Marxist tradition, another set of insights and critiques might have been gained by bringing in other critics of technology (Hans Jonas, Peter Kropotkin, Albert Borgmann)—especially as some of these thinkers warned that Marxism may overvalue the technological as much as capitalism does. This is not to argue in favor of any of these particular theorists, but to suggest that Fuchs’s claims would have been strengthened by devoting more time to considering the views of those who were critical of technology, of capitalism, and of Marxism. Social Media does an excellent job of confronting the ideological forces on its right flank; it could have benefited from at least acknowledging the critics to its left.

    Two other areas that remain somewhat troubling are Fuchs’s treatment of Wiki platforms and of the materiality of technology. The optimism with which Fuchs approaches WikiLeaks and Wikipedia is understandable given the dourness with which he approaches the corporate platforms, and yet his hopes for them seem somewhat exaggerated. Fuchs claims “Wikipedians are prototypical contemporary communists” (243), partially to suggest that many people are already engaged in commons-based online activities. Yet it is an argument that he simultaneously undermines by admitting (importantly) that Wikipedia’s editor base is hardly representative of all of the platform’s users (it’s back to the “white boys with toys who love their toys”), and some have alleged that putatively structureless models of organization like Wikipedia’s actually encourage oligarchical forms of order. Which is itself to say nothing about the role that editing “bots” play on the platform or the degree to which Wikipedia is reliant upon corporate platforms (like Google) for promotion. Similarly, without ignoring its value, the example of WikiLeaks seems odd at a moment when the organization seems primarily engaged in a rearguard self-defense, whilst the leaks that have generated the most interest of late have been made to journalists at traditional news sources (Edward Snowden’s leaks to Glenn Greenwald, who was writing for The Guardian when the leaks began).

    The further challenge—and this is one that Fuchs is not alone in contending with—is the trouble posed by the materiality of technology. An important aspect of Social Media is that Fuchs considers the often-unseen exploitation and repression upon which the internet relies: miners, laborers who build devices, those who recycle or live among toxic e-waste. Yet these workers seem to disappear from the arguments in the later part of the book, which in turn raises the following question: even if every social media platform were to be transformed into a non-profit commons-based platform that resists surveillance, manipulation, and the exploitation of its users, is such a platform genuinely just if to use it one must rely on devices whose minerals were mined in warzones, assembled in sweatshops, and which will eventually go to an early grave in a toxic dump? What good is a “classless (digital) society” without a “classless world”? Perhaps the question of a “capitalist internet” is itself a distraction from the fact that the “capitalist internet” is what one gets from capitalist technology. Granted, given Fuchs’s larger argument it may be fair to infer that he would portray “capitalist technology” as part of the problem. Yet, if the statement “a just society is a classless society” is to be genuinely meaningful, then this must extend not just to those who use a social media platform but to all of those involved, from the miner to the manufacturer to the programmer to the user to the recycler. To pose the matter as a question: can there be participatory (digital) democracy that relies on serious exploitation of labor and resources?

    Social Media: A Critical Introduction provides exactly what its title promises—a critical introduction. Fuchs has constructed an engaging and interesting text that shows the continuing validity of older theories and skillfully demonstrates the way in which the seeming newness of the internet is simply a new face on an old system. While Fuchs has constructed an argument that resolutely holds its position, it is a stance one does not encounter often enough in debates around social media, and it will provide readers with a range of new questions with which to wrestle.

    It remains unclear in what ways social media will develop in the future, but Christian Fuchs’s book will be an important tool for interpreting these changes—even if what is in store is more “barbarity.”
    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, alternative forms of technology, and libraries as models of resistance. Using the moniker “The Luddbrarian” Loeb writes at the blog librarianshipwreck. He previously reviewed The People’s Platform by Astra Taylor for boundary2.org.