    Zachary Loeb — Flamethrowers and Fire Extinguishers (Review of Jeff Orlowski, dir., The Social Dilemma)

    a review of Jeff Orlowski, dir., The Social Dilemma (Netflix/Exposure Labs/Argent Pictures, 2020)

    by Zachary Loeb

    ~

    The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. But, in fact, there are actors!

    – Joseph Weizenbaum (1976)

    Why did you last look at your smartphone? Did you need to check the time? Was picking it up a conscious decision driven by the need to do something very particular, or were you just bored? Did you turn to your phone because its buzzing and ringing prompted you to pay attention to it? Regardless of the particular reasons, do you sometimes find yourself thinking that you are staring at your phone (or other computerized screens) more often than you truly want? And do you ever feel, even if you dare not speak this suspicion aloud, that your gadgets are manipulating you?

    The good news is that you aren’t just being paranoid: your gadgets were designed in such a way as to keep you constantly engaging with them. The bad news is that you aren’t just being paranoid: your gadgets were designed in such a way as to keep you constantly engaging with them. What’s more, on the bad news front, these devices (and the platforms they run) are constantly sucking up information on you and are now pushing and prodding you down particular paths. Furthermore, alas, more bad news: these gadgets and platforms are not only wreaking havoc on your attention span, they are also undermining the stability of your society. Nevertheless, even though there is ample cause to worry, the new film The Social Dilemma ultimately has good news for you: a collection of former tech insiders is starting to speak out! Sure, many of these individuals are the exact people responsible for building the platforms that are currently causing so much havoc—but they meant well, they’re very sorry, and (did you hear?) they meant well.

    Directed by Jeff Orlowski, and released to Netflix in early September 2020, The Social Dilemma is a docudrama that claims to provide an unsparing portrait of what social media platforms have wrought. While the film is made up of a hodgepodge of elements, at the core of the work is a series of interviews with Silicon Valley alumni who are concerned with the direction in which their former companies are pushing the world. Most notable amongst these, and the film’s central character to the extent it has one, is Tristan Harris (formerly a design ethicist at Google, and one of the cofounders of the Center for Humane Technology), who is not only repeatedly interviewed but is also shown testifying before the Senate and delivering a TED-style address to a room filled with tech luminaries. This cast of remorseful insiders is bolstered by a smattering of academics and non-profit leaders, who provide some additional context and theoretical heft to the insiders’ recollections. And beyond these interviews the film incorporates a fictional quasi-narrative element depicting the members of a family (particularly its three teenage children) as they navigate their Internet-addled world—with this narrative providing the film an opportunity to strikingly dramatize how social media “works.”

    The Social Dilemma makes some important points about the way that social media works, and the insiders interviewed in the film bring a noteworthy perspective. Yet beyond the sad eyes, disturbing animations, and ominous music, The Social Dilemma is a piece of manipulative filmmaking on par with the social media platforms it critiques. While presenting itself as a clear-eyed exposé of Silicon Valley, the film is ultimately a redemption tour for a gaggle of supposedly reformed techies, wrapped in an account that is so desperate to appeal to “both sides” that it is unwilling to speak hard truths.

    The film warns that the social media companies are not your friends, and that is certainly true, but The Social Dilemma is not your friend either.

    The Social Dilemma

    As the film begins the insiders introduce themselves, naming the companies where they had worked, and identifying some of the particular elements (such as the “like” button) with which they were involved. Their introductions are peppered with expressions of concern intermingled with earnest comments about how “Nobody, I deeply believe, ever intended any of these consequences,” and that “There’s no one bad guy.” As the film transitions to Tristan Harris rehearsing for the talk that will feature later in the film, he comments that “there’s a problem happening in the tech industry, and it doesn’t have a name.” After recounting his personal awakening, whilst working at Google, and his attempt to spark a serious debate about these issues with his coworkers, the film finds “a name” for the “problem” Harris had alluded to: “surveillance capitalism.” The thinker who coined that term, Shoshana Zuboff, appears to discuss this concept, which captures the way in which Silicon Valley thrives not off of users’ labor but off of every detail that can be sucked up about those users and then sold off to advertisers.

    After being named, “surveillance capitalism” hovers in the explanatory background as the film considers how social media companies constantly pursue three goals: engagement (to keep you coming back), growth (to get you to bring in more users), and advertising (to get better at putting the right ad in front of your eyes, which is how the platforms make money). The algorithms behind these platforms are constantly being tweaked through A/B testing, with every small improvement being focused around keeping users more engaged. Numerous problems emerge: designed to be addictive, these platforms and devices claw at users’ attention; teenagers (especially young ones) struggle as their sense of self-worth becomes tied to “likes”; misinformation spreads rapidly in an information ecosystem wherein the incendiary gets more attention than the true; and the slow processes of democracy struggle to keep up with the speed of technology. Though the concerns are grave, and the interviewees are clearly concerned, the tone is still one of hopefulness; the problem here is not really social media, but “surveillance capitalism,” and if “surveillance capitalism” can be thwarted then the true potential of social media can be attained. And the people leading that charge against “surveillance capitalism”? Why, none other than the reformed insiders in the film.

    While the bulk of the film consists of interviews and news clips, the film is periodically interrupted by a narrative in which a family with three teenage children is shown. The Mother (Barbara Gehring) and Step-Father (Chris Grundy) are concerned with their children’s social media usage, even as they are glued to their own devices. As for the children: the oldest, Cassandra (Kara Hayward), is presented as skeptical towards social media; the youngest, Isla (Sophia Hammons), is eager for online popularity; and the middle child, Ben (Skyler Gisondo), eventually falls down the rabbit hole of recommended conspiratorial content. As the insiders and academics talk about the various dangers of social media, the film shifts to the narrative to dramatize these moments – thus a discussion of social media’s impact on young teenagers, particularly girls, cuts to Isla being distraught after an insulting comment is added to one of the images she uploads. Cassandra (that name choice can’t be a coincidence) is presented as most in line with the general message of the film; she refers to Jaron Lanier as a “genius” and in another sequence is shown reading Zuboff’s The Age of Surveillance Capitalism. Yet the member of the family the film dwells on the most is almost certainly Ben. For the purposes of dramatizing how an algorithm works, the film repeatedly returns to a creepy depiction of the Advertising, Engagement, and Growth AIs (all played by Vincent Kartheiser) as they scheme to get Ben to stay glued to his phone. Beyond the screens, the world in the narrative is being rocked by a strange protest movement calling itself “The Extreme Center” – whose argument seems to be that both sides can’t be trusted – and Ben eventually gets wrapped up in their message. The family’s narrative concludes with Ben and Cassandra getting arrested at a raucous rally held by “The Extreme Center,” sitting handcuffed on the ground and wondering how it is that this could have happened.

    To the extent that The Social Dilemma builds towards a conclusion, it is the speech that Harris gives (before an audience that includes many of the other interviewees in the film). And in that speech, and the other comments made around it, the point that is emphasized is that Silicon Valley must get away from “surveillance capitalism.” It must embrace “humane technology” that seeks to empower users, not entangle them. Emphasizing that, despite how things have turned out, “I don’t think these guys set out to be evil,” the various insiders double down on their belief in high-tech’s liberatory potential. Contrasting rather unflattering imagery of Mark Zuckerberg testifying (without genuinely calling him out) with images of Steve Jobs in his iconic turtleneck, the film claims “the idea of humane technology, that’s where Silicon Valley got its start.” And before the credits roll, Harris seems to speak for his fellow insiders as he notes “we built these things, and we have a responsibility to change it.” For those who found the film unsettling, and who are confused by exactly what they are meant to do if they are not part of Harris’s “we,” the film offers some straightforward advice. Drawing on their own digital habits, the insiders recommend: turning off notifications, never watching a recommended video, opting for a less-invasive search engine, trying to escape your content bubble, keeping your devices out of your bedroom, and being a critical consumer of information.

    It is a disturbing film, and it is constructed so as to unsettle the viewer, but it still ends on a hopeful note: reform is possible, and the people in this film are leading that charge. The problem is not social media as such, but the ways in which “surveillance capitalism” has thwarted what social media could really be. If, after watching The Social Dilemma, you feel concerned about what “surveillance capitalism” has done to social media (and you feel prepared to make some tweaks in your social media use) but ultimately trust that Silicon Valley insiders are on the case—then the film has succeeded in its mission. After all, the film may be telling you to turn off Facebook notifications, but it doesn’t recommend deleting your account.

    Yet one of the points the film makes is that you should not accept the information that social media presents to you at face value. And in the same spirit, you should not accept the comments made by oh-so-remorseful Silicon Valley insiders at face value either. To be absolutely clear: we should be concerned about the impacts of social media, we need to work to rein in the power of these tech companies, we need to be willing to have the difficult discussion about what kind of society we want to live in…but we should not believe that the people who got us into this mess—who lacked the foresight to see the possible downsides in what they were building—will get us out of this mess. If these insiders genuinely did not see the possible downsides of what they were building, then they are fools who should not be trusted. And if these insiders did see the possible downsides, continued building these things anyways, and are now pretending that they did not see the downsides, then they are liars who definitely should not be trusted.

    It’s true, arsonists know a lot about setting fires, and a reformed arsonist might be able to give you some useful fire safety tips—but they are still arsonists.

    There is much to be said about The Social Dilemma. Indeed, anyone who cares about these issues (unfortunately) needs to engage with The Social Dilemma, if for no other reason than the fact that this film will be widely watched and will thus set much of the ground on which these discussions take place. Therefore, it is important to dissect certain elements of the film. To be clear, there is a lot to explore in The Social Dilemma—a book or journal issue could easily be published in which the docudrama is cut into five-minute segments, with academics and activists each assigned one segment to comment on. While there is not the space here to offer a frame-by-frame analysis of the entire film, there are nevertheless a few key segments in the film which deserve to be considered, especially because these key moments capture many of the film’s larger problems.

    “when bicycles showed up”

    A moment in The Social Dilemma that perfectly, if unintentionally, sums up many of the major flaws with the film occurs when Tristan Harris opines on the history of bicycles. There are several problems in these comments, but taken together these lines provide you with almost everything you need to know about the film. As Harris puts it:

    No one got upset when bicycles showed up. Right? Like, if everyone’s starting to go around on bicycles, no one said, ‘Oh, my God, we’ve just ruined society. [chuckles] Like, bicycles are affecting people. They’re pulling people away from their kids. They’re ruining the fabric of democracy. People can’t tell what’s true.’ Like we never said any of that stuff about a bicycle.

    Here’s the problem: Harris’s comments about bicycles are wrong.

    They are simply historically inaccurate. Some basic research into the history of bicycles that looks at the ways that people reacted when they were introduced would reveal that many people were in fact quite “upset when bicycles showed up.” People absolutely were concerned that bicycles were “affecting people,” and there were certainly some who were anxious about what these new technologies meant for “the fabric of democracy.” Granted, that there were such adverse reactions to the introduction of bicycles should not be seen as particularly surprising, because even a fairly surface-level reading of the history of technology reveals that when new technologies are introduced they tend to be met not only with excitement, but also with dread.

    Yet, what makes Harris’s point so interesting is not just that he is wrong, but that he is so confident while being so wrong. Smiling before the camera, in what is obviously supposed to be a humorous moment, Harris makes a point about bicycles that will surely stick with many viewers—and what he is really revealing is that he needs to take some history classes (or at least do some reading). It is genuinely rather remarkable that this sequence made it into the final cut of the film. This was clearly an expensive production; couldn’t they have hired a graduate student to watch the film and point out “hey, you should really cut this part about bicycles, it’s wrong”? It is hard to put much stock in Harris and friends as emissaries of technological truth when they can’t be bothered to do basic research.

    That Harris speaks so assuredly about something which he is so wrong about gets at one of the central problems with the reformed insiders of The Social Dilemma. Though these are clearly intelligent people (lots of emphasis is placed on the fancy schools they attended), they know considerably less than they would like the viewers to believe. Of course, one of the ways that they get around this is by confidently pretending they know what they’re talking about, which manifests itself by making grandiose claims about things like bicycles that just don’t hold up. The point is not to mock Harris for this mistake (though it really is extraordinary that the segment did not get cut), but to make the following point: if Harris, and his friends, had known a bit more about the history of technology, and perhaps if they had a bit more humility about what they don’t know, perhaps they would not have gotten all of us into this mess.

    A point that is made by many of the former insiders interviewed for the film is that they didn’t know what the impacts would be. Over and over again we hear some variation of “we meant well” or “we really thought we were doing something great.” It is easy to take such comments as expressions of remorse, but it is more important to see such comments as confessions of that dangerous mixture of hubris and historical/social ignorance that is so common in Silicon Valley. Or, to put it slightly differently, these insiders really needed to take some more courses in the humanities. You know how you could have known that technologies often have unforeseen consequences? Study the history of technology. You know how you could have known that new media technologies have jarring political implications? Read some scholarship from media studies. A point that comes up over and over again in such scholarly work, particularly works that focus on the American context, is that optimism and enthusiasm for new technology often keeps people (including inventors) from seeing the fairly obvious risks—and all of these woebegone insiders could have known that…if they had only been willing to do the reading. Alas, as anyone who has spent time in a classroom knows, a time-honored way of covering up for the fact that you haven’t done the reading is just to speak very confidently and hope that your confidence will successfully distract from the fact that you didn’t do the reading.

    It would be an exaggeration to claim “all of these problems could have been prevented if these people had just studied history!” And yet, these insiders (and society at large) would likely be better able to make sense of these various technological problems if more people had an understanding of that history. At the very least, such historical knowledge can provide warnings about how societies often struggle to adjust to new technologies, can teach how technological progress and social progress are not synonymous, can demonstrate how technologies have a nasty habit of biting back, and can make clear the many ways in which the initial liberatory hopes that are attached to a technology tend to fade as it becomes clear that the new technology has largely reinscribed a fairly conservative status quo.

    At the very least, knowing a bit more about the history of technology can keep you from embarrassing yourself by confidently claiming that “we never said any of that stuff about a bicycle.”

    “to destabilize”

    While The Social Dilemma expresses concern over how digital technologies impact a person’s body, the film is even more concerned about the way these technologies impact the body politic. This worry is captured by Harris’s comment that:

    We in the tech industry have created the tools to destabilize and erode the fabric of society.

    That’s quite the damning claim, even if it is one of the claims in the film that probably isn’t all that controversial these days. Though many of the insiders in the film pine nostalgically for those idyllic days from ten years ago when much of the media and the public looked so warmly towards Silicon Valley, this film is being released at a moment when much of that enthusiasm has soured. One of the odd things about The Social Dilemma is that politics are simultaneously all over the film, and yet politics in the film are very slippery. When the film warns of looming authoritarianism, Bolsonaro gets some screen time and Putin gets some ominous screen time—but though Trump looms in the background of the film he’s pretty much unseen and unnamed. And when US politicians do make appearances we get Marco Rubio and Jeff Flake talking about how people have become too polarized, and Jon Tester reacting with discomfort to Harris’s testimony. Of course, in the clip that is shown, Rubio speaks some pleasant platitudes about the virtues of coming together…but what does his voting record look like?

    The treatment of politics in The Social Dilemma comes across most clearly in the narrative segment, wherein much attention is paid to a group that calls itself “The Extreme Center.” Though the ideology of this group is never made quite clear, it seems to be a conspiratorial group that takes as its position that “both sides are corrupt” – rejecting left and right, it therefore places itself in “the extreme center.” It is into this group, and the political rabbit hole of its content, that Ben falls in the narrative – and the raucous rally (that ends in arrests) in the narrative segment is one put on by the “extreme center.” It may appear that “the extreme center” is just a simple storytelling technique, but more than anything else it feels like the creation of this fictional protest movement is really just a way for the film to get around actually having to deal with real-world politics.

    The film includes clips from a number of protests (though it does not bother to explain who these people are and why they are protesting), and there are some moments when various people can be heard specifically criticizing Democrats or Republicans. But even as the film warns of “the rabbit hole” it doesn’t really spend much time on examples. Heck, the first time that the words “surveillance capitalism” get spoken in the film is in a clip of Tucker Carlson. Some points are made about “pizzagate” but the documentary avoids commenting on the rapidly spreading QAnon conspiracy theory. And to the extent that any specific conspiracy receives significant attention it is the “flat earth” conspiracy. Granted, it’s pretty easy to deride the flat earthers, and in focusing on them the film makes a very conscious decision not to focus on white supremacist content and QAnon. Ben falls down the “extreme center” rabbit hole, and it may well be that the filmmakers have him fall down this fictional rabbit hole so that they don’t have to talk about the likelihood that, in the real world, he would fall down a far-right rabbit hole. But The Social Dilemma doesn’t want to make that point; after all, in the political vision it puts forth the problem is that there is too much polarization and extremism on both sides.

    The Social Dilemma clearly wants to avoid taking sides. And in so doing it demonstrates the ways in which Silicon Valley has taken sides. After all, to focus so heavily on polarization and the extremism of “both sides” just serves to create a false equivalency where none exists. But the view that “the Trump administration has mismanaged the pandemic” and the view that “the pandemic is a hoax” – are not equivalent. The view that “climate change is real” and “climate change is a hoax” – are not equivalent. People organizing for racial justice and people organizing because they believe that Democrats are satanic cannibal pedophiles – are not equivalent. The view that “there is too much money in politics” and the view that “the Jews are pulling the strings” – are not equivalent. Of course, to say that these things “are not equivalent” is to make a political judgment, but by refusing to make such a judgment The Social Dilemma presents both sides as being equivalent. There are people online who are organizing for the cause of racial justice, and there are white-supremacists organizing online who are trying to start a race war—those causes may look the same to an algorithm, and they may look the same to the people who created those algorithms, but they are not the same.

    You cannot address the fact that Facebook and YouTube have become hubs of violent xenophobic conspiratorial content unless you are willing to recognize that Facebook and YouTube actively push violent xenophobic conspiratorial content.

    It is certainly true that there are activist movements from the left and the right organizing online at the moment, but when you watch a movie trailer on YouTube the next recommended video isn’t going to be a talk by Angela Davis.

    “it’s the critics”

    Much of the content of The Social Dilemma is unsettling, and the film makes it clear that change is necessary. Nevertheless, the film ends on a positive note. Pivoting away from gloominess, the film shows the rapt audience nodding as Harris speaks of the need for “humane technology,” and this assembled cast of reformed insiders is presented as proof that Silicon Valley is waking up to the need to take responsibility. Near the film’s end, Jaron Lanier hopefully comments that:

    it’s the critics that drive improvement. It’s the critics who are the true optimists.

    Thus, the sense that is conveyed at the film’s close is that despite the various worries that had been expressed—the critics are working on it, and the critics are feeling good.

    But, who are the critics?

    The people interviewed in the film, obviously.

    And that is precisely the problem. “Critic” is something of a challenging term to wrestle with as it doesn’t necessarily take much to be able to call yourself, or someone else, a critic. Thus, the various insiders who are interviewed in the film can all be held up as “critics” and can all claim to be “critics” thanks to the simple fact that they’re willing to say some critical things about Silicon Valley and social media. But what is the real content of the criticisms being made? Some critics are going to be more critical than others, so how critical are these critics? Not very.

    The Social Dilemma is a redemption tour that allows a bunch of remorseful Silicon Valley insiders to rebrand themselves as critics. Based on the information provided in the film it seems fairly obvious that a lot of these individuals are responsible for causing a great deal of suffering and destruction, but the film does not argue that these men (and they are almost entirely men) should be held accountable for their deeds. The insiders have harsh things to say about algorithms, they too have been buffeted about by nonstop nudging, they are also concerned about the rabbit hole, they are outraged at how “surveillance capitalism” has warped technological possibilities—but remember, they meant well, and they are very sorry.

    One of the fascinating things about The Social Dilemma is that in one scene a person will proudly note that they are responsible for creating a certain thing, and then in the next scene they will say that nobody is really to blame for that thing. Certainly not them, they thought they were making something great! The insiders simultaneously want to enjoy the cultural clout and authority that comes from being the one who created the like button, while also wanting to escape any accountability for being the person who created the like button. They are willing to be critical of Silicon Valley, they are willing to be critical of the tools they created, but when it comes to their own culpability they are desperate to hide behind a shield of “I meant well.” The insiders do a good job of saying remorseful words, and the camera catches them looking appropriately pensive, but it’s no surprise that these “critics” should feel optimistic: they’ve made fortunes utterly screwing up society, and they’ve done such a great job of getting away with it that now they’re getting to elevate themselves once again by rebranding themselves as “critics.”

    To be a critic of technology, to be a social critic more broadly, is rarely a particularly enjoyable or a particularly profitable undertaking. Most of the time, if you say anything critical about technology you are mocked as a Luddite, laughed at as a “prophet of doom,” derided as a technophobe, accused of wanting everybody to go live in caves, and banished from the public discourse. That is the history of many of the twentieth century’s notable social critics who raised the alarm about the dangers of computers decades before most of the insiders in The Social Dilemma were born. Indeed, if you’re looking for a thorough retort to The Social Dilemma you cannot really do better than reading Joseph Weizenbaum’s Computer Power and Human Reason—a book which came out in 1976. That a film like The Social Dilemma is being made may be a testament to some shifting attitudes towards certain types of technology, but it was not that long ago that if you dared suggest that Facebook was a problem you were denounced as an enemy of progress.

    There are many phenomenal critics speaking out about technology these days. To name only a few: Safiya Noble has written at length about the ways that the algorithms built by companies like Google and Facebook reinforce racism and sexism; Virginia Eubanks has exposed the ways in which high-tech tools of surveillance and control are first deployed against society’s most vulnerable members; Wendy Hui Kyong Chun has explored how our usage of social media becomes habitual; Jen Schradie has shown the ways in which, despite the hype to the contrary, online activism tends to favor right-wing activists and causes; Sarah Roberts has pulled back the screen on content moderation to show how much of the work supposedly being done by AI is really being done by overworked and under-supported laborers; Ruha Benjamin has made clear the ways in which discriminatory designs get embedded in and reified by technical systems; Christina Dunbar-Hester has investigated the ways in which communities oriented around technology fail to overcome issues of inequality; Sasha Costanza-Chock has highlighted the need for an approach to design that treats challenging structural inequalities as the core objective, not an afterthought; Morgan Ames expounds upon the “charisma” that develops around certain technologies; and Meredith Broussard has brilliantly inveighed against the sort of “technochauvinist” thinking—the belief that technology is the solution to every problem—that is so clearly visible in The Social Dilemma. To be clear, this list of critics is far from all-inclusive. There are numerous other scholars who certainly could have had their names added here, and there are many past critics who deserve to be named for their disturbing prescience.

    But you won’t hear from any of those contemporary critics in The Social Dilemma. Instead, viewers of the documentary are provided with a steady set of mostly male, mostly white, reformed insiders who were unable to predict that the high-tech toys they built might wind up having negative implications.

    It is not only that The Social Dilemma ignores most of the figures who truly deserve to be seen as critics, but that, in doing so, it sets the boundaries for who gets to be a critic and what that criticism can look like. The world of criticism that The Social Dilemma sets up is one wherein a person achieves legitimacy as a critic of technology as a result of having once been a tech insider. Thus what the film does is lay out, and then set about policing the borders of, what can pass for acceptable criticism of technology. This not only limits the cast of critics to a narrow slice of mostly white, mostly male insiders, it also limits what can be put forth as a solution. You can rest assured that the former insiders are not going to advocate for a response that would involve holding the people who build these tools accountable for what they’ve created. It is remarkable that no one in the film really goes after Mark Zuckerberg, but many of these insiders can’t go after Zuckerberg—because any vitriol they direct at him could just as easily be directed at them as well.

    It matters who gets to be deemed a legitimate critic. When news networks are looking to have a critic on, it matters whether they call Tristan Harris or one of the previously mentioned thinkers; when Facebook does something else horrendous, it matters whether a newspaper seeks out someone whose own self-image is bound up in the idea that the company means well or someone who is willing to say that Facebook is itself the problem. When there are dangerous fires blazing everywhere it matters whether the voices that get heard are apologetic arsonists or firefighters.

    Near the film’s end, while the credits play, as Jaron Lanier speaks of Silicon Valley he notes “I don’t hate them. I don’t wanna do any harm to Google or Facebook. I just want to reform them so they don’t destroy the world. You know?” And these comments capture the core ideology of The Social Dilemma, that Google and Facebook can be reformed, and that the people who can reform them are the people who built them.

    But considering all of the tangible harm that Google and Facebook have done, it is far past time to say that it isn’t enough to “reform” them. We need to stop them.

    Conclusion: On “Humane Technology”

    The Social Dilemma is an easy film to criticize. After all, it’s a highly manipulative piece of filmmaking, filled with overly simplified claims, historical inaccuracies, politics that lack conviction, and a cast of remorseful insiders who still believe Silicon Valley’s basic mythology. The film is designed to scare you, but it then works to direct that fear into a few banal personal lifestyle tweaks, while convincing you that Silicon Valley really does mean well. It is important to view The Social Dilemma not as a genuine warning, or as a push for a genuine solution, but as part of a desperate move by Silicon Valley to rehabilitate itself so that any push for reform and regulation can be captured and defanged by “critics” of its own choosing.

    Yet, it is too simple (even if it is accurate) to portray The Social Dilemma as an attempt by Silicon Valley to control both the sale of flamethrowers and fire extinguishers. Because such a focus keeps our attention pinned to Silicon Valley. It is easy to criticize Silicon Valley, and Silicon Valley definitely needs to be criticized—but the bright-eyed faith in high-tech gadgets and platforms that these reformed insiders still cling to is not shared only by them. The people in this film blame “surveillance capitalism” for warping the liberatory potential of Internet-connected technologies, and many people would respond to this by pushing back on Zuboff’s neologism to point out that “surveillance capitalism” is really just “capitalism,” and that therefore the problem is really that capitalism is warping the liberatory potential of Internet-connected technologies. Yes, we certainly need to have a conversation about what to do with Facebook and Google (dismantle them). But at a certain point we also need to recognize that the problem is deeper than Facebook and Google; at a certain point we need to be willing to talk about computers.

    The question that occupied many past critics of technology was this: what kinds of technology do we really need? And they were clear that this was a question far too important to be left to machine-worshippers.

    The Social Dilemma responds to the question of “what kind of technology do we really need?” by saying “humane technology.” After all, the organization The Center for Humane Technology is at the core of the film, and Harris speaks repeatedly of “humane technology.” At the surface level it is hard to imagine anyone saying that they disapprove of the idea of “humane technology,” but what the film means by this (and what the organization means by this) is fairly vacuous. When the Center for Humane Technology launched in 2018, to a decent amount of praise and fanfare, it was clear from the outset that its goal had more to do with rehabilitating Silicon Valley’s image than truly pushing for a significant shift in technological forms. Insofar as “humane technology” means anything, it stands for platforms and devices that are designed to be a little less intrusive, that are designed to try to help you be your best self (whatever that means), that try to inform you instead of misinform you, and that make it so that you can think nice thoughts about the people who designed these products. The purpose of “humane technology” isn’t to stop you from being “the product,” it’s to make sure that you’re a happy product. “Humane technology” isn’t about deleting Facebook, it’s about renewing your faith in Facebook so that you keep clicking on the “like” button. And, of course, “humane technology” doesn’t seem to be particularly concerned with all of the inhumanity that goes into making these gadgets possible (from mining, to conditions in assembly plants, to e-waste). “Humane technology” isn’t about getting Ben or Isla off their phones, it’s about making them feel happy when they click on them instead of anxious. In a world of empowered arsonists, “humane technology” seeks to give everyone a pair of asbestos socks.

    Many past critics also argued that what was needed was to place a new word before technology – they argued for “democratic” technologies, or “holistic” technologies, or “convivial” technologies, or “appropriate” technologies, and this list could go on. Yet at the core of those critiques was not an attempt to salvage the status quo but a recognition that what was necessary in order to obtain a different sort of technology was to have a different sort of society. Or, to put it another way, the matter at hand is not to ask “what kind of computers do we want?” but to ask “what kind of society do we want?” and to then have the bravery to ask how (or if) computers really fit into that world—and if they do fit, how ubiquitous they will be, and who will be responsible for the mining/assembling/disposing that are part of those devices’ lifecycles. Certainly, these are not easy questions to ask, and they are not pleasant questions to mull over, which is why it is so tempting to just trust that the Center for Humane Technology will fix everything, or to just say that the problem is Silicon Valley.

    Thus as the film ends we are left squirming unhappily as Netflix (which has, of course, noted the fact that we watched The Social Dilemma) asks us to give the film a thumbs up or a thumbs down – before it begins auto-playing something else.

    The Social Dilemma is right in at least one regard, we are facing a social dilemma. But as far as the film is concerned, your role in resolving this dilemma is to sit patiently on the couch and stare at the screen until a remorseful tech insider tells you what to do.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2 Review Digital Studies section.


    _____

    Works Cited

    • Weizenbaum, Joseph. 1976. Computer Power and Human Reason: From Judgment to Calculation. New York: WH Freeman & Co.

    Moira Weigel — Palantir Goes to the Frankfurt School

    Moira Weigel

    This essay has been peer-reviewed by “The New Extremism” special issue editors (Adrienne Massanari and David Golumbia), and the b2o: An Online Journal editorial board.

    Since the election of Donald Trump, a growing body of research has examined the role of digital technologies in new right wing movements (Lewis 2018; Hawley 2017; Neiwert 2017; Nagle 2017). This article will explore a distinct, but related, subject: new right wing tendencies within the tech industry itself. Our point of entry will be an improbable document: a German language dissertation submitted by an American to the faculty of social sciences at J. W. Goethe University of Frankfurt in 2002. Entitled Aggression in the Life-World, the dissertation aims to describe the role that aggression plays in social integration, or the set of processes that lead individuals in a given society to feel bound to one another. To that end, it offers a “systematic” reinterpretation of Theodor Adorno’s Jargon of Authenticity (1973). It is of interest primarily because of its author: Alexander C. Karp.[1]

    Karp, as some readers may know, did not pursue a career in academia. Instead, he became the CEO of the powerful and secretive data analytics company Palantir Technologies. His dissertation has inspired speculation for years, but no journalist or scholar has yet analyzed it. In doing so, I will argue that it offers insight into the intellectual formation of an influential network of actors in and around Silicon Valley, a network articulating ideas and developing business practices that challenge longstanding beliefs about how Silicon Valley thinks and works.

    For decades, a view prevailed that the politics of both digital technologies and most digital technologists were liberal, or neoliberal, depending on how critically the author in question saw them. Liberalism and neoliberalism are complex and contested concepts. But broadly speaking, digital networks have been seen as embodying liberal or neoliberal logics insofar as they treated individuals as abstractly equal, rendering social aspects of embodiment like race and gender irrelevant, and allowing users to engage directly in free expression and free market competition (Kolko and Nakamura, 2000; Chun 2005, 2011, 2016). The ascendance of the Bay Area tech industry over competitors in Boston or in Europe was explained as a result of its early adoption of new forms of industrial organization, built on flexible, short-term contracts and a strong emotional identification between workers and their jobs (Hayes 1989; Saxenian 1994).

    Technologists themselves were said to embrace a new set of values that the British media theorists Richard Barbrook and Andy Cameron dubbed the “Californian Ideology.” This “anti-statist gospel of cybernetic libertarianism… promiscuously combine[d] the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies,” they wrote; it answered the challenge posed by the social liberalism of the New Left by “resurrecting economic liberalism” (1996, 42 & 47). Fred Turner attributed this synthesis to the “New Communalists,” members of the counterculture who “turn[ed] away from questions of gender, race, and class, and toward a rhetoric of individual and small group empowerment” (2006, 97). Nonetheless, he reinforced the broad outlines that Barbrook and Cameron had sketched. Turner further showed that midcentury critiques of mass media, and their alleged tendency to produce authoritarian subjects, inspired faith that digital media could offer salutary alternatives—that “democratic surrounds” would sustain democracy by facilitating the self-formation of democratic subjects (2013). 

    Silicon Valley has long supported Democratic Party candidates in national politics and many tech CEOs still subscribe to the “hybrid” values of the Californian Ideology (Brookman et al. 2019). However, in recent years, tensions and contradictions within Silicon Valley liberalism, particularly between commitments to social and economic liberalism, have become more pronounced. In the wake of the 2016 presidential election, several software engineers emerged as prominent figures on the “alt-right,” and newly visible white nationalist media entrepreneurs reported that they were drawing large audiences from within the tech industry.[2] The leaking of information from internal meetings at Google to digital outlets like Breitbart and Vox Popoli suggests that there was at least some truth to their claims (Tiku 2018). Individual engineers from Google, YouTube, and Facebook have received national media attention after publicly criticizing the liberal culture of their (former) workplaces and in some cases filing lawsuits against them.[3] And Republican politicians, including Trump (2019a, 2019b), have cited these figures as evidence of “liberal bias” at tech firms and the need for stronger government regulation (Trump 2019a; Kantrowitz 2019).

    Karp’s Palantir cofounder (and erstwhile roommate) Peter Thiel looms large in an emerging constellation of technologists, investors, and politicians challenging what they describe as hegemonic social liberalism in Silicon Valley. Thiel has been assembling a network of influential “contrarians” since he founded the Stanford Review as an undergraduate in the late 1980s (Granato 2017). In 2016, Thiel became a highly visible supporter of Donald Trump, speaking at the Republican National Convention, donating $1.25 million in the final weeks of Trump’s campaign for president (Streitfeld 2016a), and serving as his “tech liaison” during the transition period (Streitfeld 2016b). (Earlier in the campaign, Thiel had donated $1 million to the Defeat Crooked Hillary Super PAC backed by Robert Mercer, and overseen by Steve Bannon and Kellyanne Conway; see Green 2017, 200.) Since 2016, he has met with prominent figures associated with the alt-right and “neoreaction”[4] and donated at least $250,000 to support Trump’s reelection in 2020 (Federal Election Commission 2018). He has also given to Trump allies including Missouri Senator Josh Hawley, who has repeatedly attacked Google and Facebook and sponsored multiple bills to regulate tech platforms, citing the threat that they pose to conservative speech.[5]

    Thiel’s affinity with Trumpism is not merely personal or cultural; it aligns with Palantir’s business interests. According to a 2019 report by Mijente, since Trump came into office in 2017, Palantir contracts with the United States government have increased by over a billion dollars per year. These include multiyear contracts with the US military (Judson 2019; Hatmaker 2019) and with Immigrations and Customs Enforcement (ICE) (MacMillan and Dwoskin 2019); Palantir has also worked with police departments in New York, New Orleans, and Los Angeles (Alden 2017; Winston 2018; Harris 2018).

    Karp and Thiel have both described these controversial contracts using the language of “nation” and “civilization.” Confronted by critical journalistic coverage (Woodman 2017, Winston 2018, Ahmed 2018) and protests (Burr 2017, Wiener 2017), as well as internal actions by concerned employees (MacMillan and Dwoskin, 2019), Thiel and Karp have doubled down, characterizing the company as “patriotic,” in contrast to its competitors. In an interview conducted at Davos in January 2019, Karp said that Silicon Valley companies that refuse to work with the US government are “borderline craven” (2019b). At a speech at the National Conservatism Conference in July 2019, Thiel called Google “seemingly treasonous” for doing business with China, suggested that the company had been infiltrated by Chinese agents, and called for a government investigation (Thiel 2019a). Soon after, he published an op-ed in the New York Times that restated this case (Thiel 2019b).

    However, Karp has cultivated a very different public image from Thiel’s, supporting Hillary Clinton in 2016, saying that he would vote for any Democratic presidential candidate against Trump in 2020 (Chafkin 2019), and—most surprisingly—identifying himself as a Marxist or “neo-Marxist” (Waldman et al. 2018, Mac 2017, Greenberg 2013). He also refers to himself as a “socialist” (Chafkin 2019) and according to at least one journalist, regularly addresses his employees on Marxian thought (Greenberg 2013). On one level, Karp’s dissertation clarifies what he means by this: For a time, he engaged deeply with the work of several neo-Marxist thinkers affiliated with the Institute for Social Research in Frankfurt. On another level, however, Karp’s dissertation invites further perplexity, because right wing movements, including Trump’s, evince special antipathy for precisely that tradition.

    Starting in the early 1990s, right-wing think tanks in both Germany and the United States began promoting conspiratorial narratives about critical theory. The conspiracies allege that, ever since the failure of “economic Marxism” in World War I, “neo-“ or “cultural Marxists” have infiltrated academia, media, and government. From inside, they have carried out a longstanding plan to overthrow Western civilization by criticizing Western culture and imposing “political correctness.” To the extent that it attaches to real historical figures, the story typically begins with Antonio Gramsci and György Lukács, goes through Max Horkheimer, Theodor Adorno, and other wartime émigrés to the United States, particularly those involved in state-sponsored mass media research, and ends abruptly with Herbert Marcuse and his influence on student movements of the 1960s (Moyn 2018; Huyssen 2017; Jay 2011; Berkowitz 2003).

    The term “Cultural Marxism” directly echoes the Nazi theory of “Cultural Bolshevism”; the early proponents of the Cultural Marxism conspiracy theory were more or less overt antisemites and white nationalists (Berkowitz 2003). However, in the 2000s and 2010s, right wing politicians and media personalities helped popularize it well beyond that sphere.[6] During the same time, it has gained traction in Silicon Valley, too.  In recent years, several employees at prominent tech firms have publicly decried the influence of Cultural Marxists, while making complaints about “political correctness” or lack of “viewpoint diversity.”[7]

    Thiel has long expressed similar frustrations.[8] So how is it that this prominent opponent of “cultural Marxism” works with a self-described neo-Marxist CEO? Aggression in the Life-World casts light on the core beliefs that animate their partnership. The idiosyncratic adaptation of Western Marxism that it advances does not in fact place Karp at odds with the nationalist projects that Thiel has advocated, and Palantir helps enact. On the contrary, by attempting to render critical theoretical concepts “systematic,” Karp reinterprets them in a way that legitimates the work he would go on to do. Shortly before Palantir began developing its infrastructure for identification and authentication, Aggression in the Life-World articulated an ideology of these processes.

    Freud Returns to Frankfurt

    Tech industry legend has it that Karp wrote his dissertation under Jürgen Habermas (Silicon Review 2018; Metcalf 2016; Greenberg 2013). In fact, he earned his doctorate from a different part of Goethe University than the one in which Habermas taught: not at the Institute for Social Research but in the Division of Social Sciences. Karp’s primary reader was the social psychologist Karola Brede, who then held a joint appointment at Goethe University’s Sociology Department and at the Sigmund Freud Institute; she and her younger colleague Hans-Joachim Busch appear listed as supervisors on the front page. The confusion is significant, and not only because it suggests an exaggeration. It also obscures important differences of emphasis and orientation between Karp’s advisors and Habermas. These differences directly shaped Karp’s graduate work.

    Habermas did engage with psychoanalysis early in his career.  In the spring and summer of 1959, he attended every one of a series of lectures organized by the Institute for Social Research to mark the centenary of Freud’s birth (Müller-Doohm 2016, 79; Brede and Mitscherlich-Nielsen 1996, 391). He went on to become close friends and even occasionally co-teach  (Brede and Mitscherlich-Nielsen 1996, 395) with one of the organizers and speakers of this series, Alexander Mitscherlich, who had long campaigned with Frankfurt School founder Max Horkheimer for the funds to establish the Sigmund Freud Institute and became the first director when it opened the following year. In 1968, shortly after Mitscherlich and his wife, Margarete, published their influential book, The Inability to Mourn, Habermas developed his first systemic critical social theory in Knowledge and Human Interests (1972). Nearly one third of that book is devoted to psychoanalysis, which Habermas treats as exemplary of knowledge constituted by the “critical” or “emancipatory interest”—that is, the species interest in engaging in critical reflection in order to overcome domination. However, in the 1970s, Habermas turned away from that book’s focus on philosophical anthropology toward the ideas about linguistic competence that culminated in his Theory of Communicative Action; in 1994, Margarete Mitscherlich recounted that Habermas had “gotten over” psychoanalysis in the process of writing that book (1996, 399). Karp’s interest in the theory of the drives, and in aggression in particular, was not drawn from Habermas but from scholars at the Freud Institute, where it was a major focus of research and public debate for decades.

    Freud himself never definitively decided whether he believed that a death drive existed. The historian Dagmar Herzog has shown that the question of aggression—and particularly the question of whether human beings are innately driven to commit destructive acts—dominated discussions of psychoanalysis in West Germany in the 1960s and 1970s. “In no other national context would the attempt to make sense of aggression become such a core preoccupation,” Herzog writes (2016, 124). After fascism, this subject was highly politicized. For some, the claim that aggression was a primary drive helped to explain the Nazi past: if all humans had an innate drive to commit violence, Nazi crimes could be understood as an extreme example of a general rule. For others, this interpretation risked naturalizing and normalizing Nazi atrocities. “Sex-radicals” inspired by the work of Wilhelm Reich pointed out that Freud had cited the libido as the explanation for most phenomena in life. According to this camp, Nazi aggression had been the result not of human nature but of repressive authoritarian socialization. In his own work, Mitscherlich attempted to elaborate a series of compromises between the conservative position (that hierarchy and aggression were natural) and the radical one (that new norms of anti-authoritarian socialization could eliminate hierarchy entirely; Herzog 2016, 128-131). Klaus Horn, the long-time director of the division of social psychology at the Freud Institute, whose collected writings Karp’s supervisor Hans-Joachim Busch edited, contested the terms of the disagreement. The entire point of sophisticated psychoanalysis, Horn argued, was that culture and biology were mutually constitutive and interacted continuously; to name one or the other as the source of human behavior was nonsensical (Herzog 2016, 135).

    Karp’s primary advisor, Karola Brede, who joined the Sigmund Freud Institute in 1967, began her career in the midst of these debates (Bareuther et al. 1989, 713). In her first book, published in 1972, Brede argued that “psychosomatic” disturbances had to be understood in the context of socialization processes. Not only did neurotic conflicts play a role in somatic illness; such illness constituted “socio-pathological” expressions of an increase in the forms of repression required to integrate individuals into society (Brede 1972). In 1976, Brede published a critique of Konrad Lorenz, whose bestselling work, On Aggression, had triggered much of the initial debate with Alexander Mitscherlich and others at the Institute, in the journal Psyche (“Der Trieb als humanspezifische Kategorie”; see Herzog 2016, 125-7).  Since the 1980s, her monographs have focused on work and workplace sociology, and on the role that psychoanalysis should play in critical social theory. Individual and Work (1986) explored the “psychoanalytic costs involved in developing one’s own labor power.” The Adventures of Adjusting to Everyday Work (1995) drew on empirical studies of German workplaces to demonstrate that psychodynamic processes played a key role in professional life, shaping processes of identity formation, authoritarian behavior, and gendered self-identity in the workplace. In that book, Brede criticizes Habermas for undervaluing psychoanalytic concepts—and unconscious aggression in particular—as social forces. Brede argues that the importance that Habermas assigned to “intention” in Theory of Communicative Action prevented him from recognizing the central role that the unconscious played in constituting identity, action, and subjectivity (1995, 223 & 225). At the same time, she was editing multiple volumes on psychoanalytic theory, including feminist perspectives in psychoanalysis, and in a series of journal articles in the 1990s, developed a focus on antisemitism and Germany’s relationship to its troubled history (Brede 1995, 1997, 2000).

    During his time as a PhD student, Karp seems to have worked very closely with Brede. The sole academic journal article that he published was co-authored with her in 1997. (An analysis of Daniel Goldhagen’s bestselling 1996 study, Hitler’s Willing Executioners, the article attempted to build on Goldhagen’s thesis by characterizing a specific, “eliminationist” form of antisemitism that Karp and Brede argued could only be understood from the perspective of Freudian psychoanalytic theory; see Brede and Karp 1997, 621-6.) Karp wrote the introduction for a volume of the Proceedings of the Freud Institute, which Brede edited (Brede et al. 1999, 5-7). The chapter that Karp contributed to that volume would appear in his dissertation, three years later, in almost identical form. Karp’s dissertation itself also closely followed the themes of Brede’s research.

    Aggression in the Life World

    The full title of Karp’s dissertation captures its patchwork quality: Aggression in the Life-World: Expanding Parsons’ Concept of Aggression Through a Description of the Connection Between Jargon, Aggression, and Culture. “This work began,” the opening sentences recall, “with the observation that many statements have the effect of relieving unconscious drives, not in spite, but because, of the fact that they are blatantly irrational” (Karp 2002, 2). Karp proposes that such statements provide relief by allowing a speaker to have things both ways: to acknowledge the existence of a social order and, indeed, demonstrate specific knowledge of that order while, at the same time, expressing taboo wishes that contravene social norms. As a result, rather than destroy social order, such irrational statements integrate the speaker into society while also providing compensation for the pains of being integrated. To describe these kinds of statements, Karp indicates that he will borrow a concept from the late work of Adorno: “jargon.” However, Karp announces that he will critique Adorno for depending too much on the very phenomenological tradition that his Jargon of Authenticity is meant to criticize. Adorno’s concept is not a concept at all, Karp alleges, but a “reservoir for collecting Adorno-poetry” (Sammelbecken Adornoscher Dichtung) (2002, 58). Karp’s own goal is to clarify jargon into an analytical concept that could then be incorporated into a classical sociological framework. As a synecdoche for classical sociology, Karp takes the work of Talcott Parsons.

    The second chapter of Karp’s dissertation, a reading and critique of Parsons, had appeared in the Freud Institute publication, Cases for the Theory of the Drives. In his editor’s introduction to that volume, Karp had stated that the goal of their group had been to integrate psychoanalytic concepts in general and Freud’s theory of the drives in particular into frameworks provided by classical sociology. The volume begins with an essay by Brede on the failure of sociology as a discipline to account for the role that aggression plays in social integration. (Brede 1999, 11-45, credits Georg Simmel with having developed an account of the active role that aggression played in creating social cohesion; more on that below.) Karp reiterates Brede’s complaint, directing it against Parsons, whose account of aggression he calls “incomplete” or “watered down” (2002, 11). In the version that appears in his dissertation, several sections of literature review establish background assumptions and describe what Karp takes to be Parsons’ achievement: integrating the insights of Émile Durkheim and Sigmund Freud. Taking from Durkheim a theory of how societies develop systems of norms, and from Freud an account of how individuals internalize them, Parsons developed a theory of culture as the site where the integration of personality and society takes place.

    For Parsons, as Karp reads him, culture itself is best understood as a system constituted through “interactions.” Karp credits Parsons with shifting the paradigm from a subject of consciousness to a subject in communication—translating the Freudian superego into sociological form, so that it appears, not as a moral enforcer, but as a psychic structure communicating cultural norms to the conscious subject. Yet, Karp protests that there are, in fact, parts of personality not determined by culture, and not visible to fellow members of a culture so long as an individual does not deviate from established norms of interaction. Parsons’ theory of aggression remains incomplete on at least two counts, then. First, Karp argues, Parsons fails to recognize aggression as a primary drive, treating it only as a secondary result that follows when the pleasure principle finds itself thwarted. Karp, by contrast, adopts the position that a drive toward death or destruction is at least as fundamental as the pleasure principle. Second, because Parsons defines aggression in terms of harms to social norms, he cannot explain how aggression itself can become a social norm, as it did in Nazi Germany. For an explanation of how aggressive impulses come to be integrated into society, Karp turns instead to Adorno.

    In Adorno’s Jargon of Authenticity, Karp found an account of how aggression constitutes itself in language and, through language, mediates social integration (2002, 57). Adorno’s lengthy essay, which he had originally intended to constitute one part of Negative Dialectics, resists easy summary. The essay begins by identifying theological overtones that, Adorno says, emanate from the language used by German existentialists—and by Martin Heidegger in particular. Adorno cites not only “authenticity,” but terms like “existential,” “in the decision,” “commission,” “appeal,” and “encounter,” as exemplary (3). While the existentialists claim that such language constitutes a form of resistance to conformity, Adorno argues that it has in fact become highly standardized: “Their unmediated language they receive from a distributor” (14). Making fetishes of these particular terms, the existentialists decontextualize language in several respects. They do so at the level of the sentence—snatching certain favored words out of the dialectical progression of thought as if meaning could exist without it. At the same time, the existentialist presents “words like ‘being’ as if they were the most concrete terms” and could obviate abstraction, the dialectical movement within language. The function of this rhetorical practice is to make reality seem simply present, and to give the subject an illusion of self-presence—replacing consciousness of historical conditions with an illusion of immediate self-experience. The “authenticity” generated by jargon therefore depends on forgetting or repressing the historically objective realities of social domination.

    Beyond simply obscuring the realities of domination, Adorno continues, the jargon of authenticity spiritualizes them. For instance, Martin Heidegger turns the real precarity of people who might at any time lose their jobs and homes into a defining condition of Dasein: “The true need for residence consists in the fact that mortals must first learn to reside” (26). The power of such jargon—which transforms the risk of homelessness into an essential trait of Dasein—comes from the fact that it expresses human need, even as it disavows it. To this extent, jargon has an a- or even anti-political character: it disguises current and contingent effects of social domination as eternal and unchangeable characteristics of human existence. “The categories of jargon are gladly brought forward, as though they were not abstracted from generated and transitory situations but rather belonged to the essence of man,” Adorno writes. “Man is the ideology of dehumanization” (48). Jargon turns fascist insofar as it leads the person who uses it to perceive historical conditions of domination—including their own domination—as the very source of their identity. “Identification with that which is inevitable remains the only consolation of this philosophy of consolation,” Adorno writes. “Its dignified mannerism is a reactionary response to the secularization of death” (143, 144).

    Karp says at the outset that his goal is to make Adorno’s collection of observations about jargon “systematic.” In order to do so, he approaches the subject from a different perspective than Adorno did: focused on the question of what psychological needs jargon fulfills. For Karp, the achievement of jargon lies in its “double function” (Doppelfunktion). Jargon both acknowledges the objective forces that oppress people and allows people to adapt or accommodate themselves to those same forces by eternalizing them—removing them from the context of the social relations where they originate, and treating them as features of human existence in general. Jargon addresses needs that cannot be satisfied, because they reflect the realities of living in a society characterized by domination, but also cannot be acted upon, because they are taboo. For Karp, insofar as jargon is a kind of speech that designates speakers as belonging to an in-group, it also expresses an unconscious drive toward aggression. In jargon we see the aggression that drives individuals to exclude others from the social world doing its binding work. It is on these grounds that Karp argues that aggression is a constitutive part of jargon—its ever-present, if unacknowledged, obverse.

    Karp grants that Adorno is concerned with social life. The Jargon of Authenticity investigates precisely the social function of ontology, or how it turns “authenticity” into a cultural form, circulated within mass culture. Adorno also alludes to the specifically German inheritance of jargon—the resemblance between Heidegger’s celebration of völkisch rural life and Nazi celebration of the same (1973, 3). Yet, Karp argues, Adorno does not provide an account of how a deception or illusion of authenticity came to be a structure in the life-world. Even as he criticizes phenomenological ontology, Adorno relies on a concept of language that is itself phenomenological. Echoing critiques by Axel Honneth (1991) of Horkheimer and Adorno’s failures to account for the unique domain of “the social,” Karp turns to the same thinkers Karola Brede used in her article on “Social Integration and Aggression”: Sigmund Freud and Georg Simmel.

    In that article, Brede develops a reading that joins Freud and Simmel’s accounts of the role of the figure of “the stranger” in modern societies. In Civilization and its Discontents, Brede argues, Freud described “strangers” in terms that initially appear incompatible with the account Simmel had put forth in his famous 1908 “Excursus on the Stranger.” Simmel described the mechanisms whereby social groups exclude strangers in order to eliminate danger—thereby controlling the “monstrous reservoir of aggressivity” that would otherwise threaten social structure. (The quote is from Parsons.) Freud wrote that, despite the Biblical commandment to love our neighbors, and the ban on killing, we experience a hatred of strangers, because they make us experience what is strange in us, and fear what in them cannot be fit into our cultural models. Brede concludes that it is only by combining Freudian psychodynamics with Simmel’s account of the role of exclusion in social formation that critical social theory could account for the forms of violence that dominated the history of the twentieth century (Brede 1999, 43).

    Karp contrasts Adorno with both Freud and Simmel, and finds Adorno to be more pessimistic than either of these predecessors. Compared to Freud, who argued that culture successfully repressed both libidinal and destructive drives in the name of moral principles, Karp writes that Adorno regarded culture as fundamentally amoral. Rather than successfully repressing antisocial drives, Karp writes, late capitalist culture sates its members with “false satisfactions.” People look for opportunities to express their needs for self-preservation. However, since they know that their needs cannot be fully satisfied, they simultaneously fall over themselves to destroy the memory of the false fulfillment they have had. Repressed awareness of the false nature of their own satisfaction produces the ambient aggression that people take out on strangers.

    For Simmel, the stranger is part of all modern societies, Karp writes. For Adorno, the stranger extends an invitation to violence. Jargon gains its power from the fact that those who speak, and hear, it really are searching for a lost community. The very presence of the stranger demonstrates that such community cannot be simply given; jargon is powerful precisely in proportion to how much the shared context of life has been destroyed. It therefore offers a “dishonest answer to an honest longing” for intersubjectivity, gaining strength in proportion to the intensity of the need that has been thwarted (Karp 2002, 85). Wishes that contradict social norms are brought into the web of social relations (Geflecht der Lebenswelt), in such a way that they do not need to be sanctioned or punished for violating social norms (91). On the contrary, they serve to bind members of social groups to one another.

    Testing Jargon

    As a case study to demonstrate the usefulness of his modified concept of jargon, Karp takes up a notorious episode in post-wall German intellectual history: a speech that the celebrated novelist Martin Walser gave in October 1998, at St. Paul’s Church in Frankfurt. The occasion was Walser’s acceptance of the 1998 Peace Prize of the German Book Trade. The novelist had traveled a complex political itinerary by the late 1990s. Documents released in 2007 would uncover the fact that as a teenager, during the final years of the Second World War, Walser joined the Nazi Party and fought as a member of the Wehrmacht. But he first became publicly known as a left-wing writer. In the 1950s, Walser attended meetings of the informal but influential German writers’ association Gruppe 47 and received its annual literary prize for his short story, “Templones Ende”; in 1964 he attended the Frankfurt Auschwitz trials, where low-ranking officials were charged with and convicted of crimes that they had perpetrated during the Holocaust. In his 1965 essay about that experience, “Our Auschwitz,” Walser insisted on the collective responsibility of Germans for the horrors of the Nazi period; indeed he criticized the emphasis on spectacular cruelty at the trial, and in the media, to the extent that this emphasis allowed the public to maintain an imaginary distance between themselves and the Nazi past (Walser 2015, 217-56). Walser supported Social Democratic Party member Willy Brandt for Chancellor and even joined the German Communist Party during that decade. By the 1980s, however, Walser was widely perceived to have migrated back to the right. And when he gave his speech “Experiences Composing a Sermon” shortly before the sixtieth anniversary of Kristallnacht, he used the occasion to attack the public culture of Holocaust remembrance. Walser described this culture as a “moral cudgel” or “bludgeon” (Moralkeule).

    “Experiences Composing a Sermon” adopts a stream of consciousness, rather than argumentative, style in order to explain why Walser refused to do what he said was expected of him: to speak about the ugliness of German history. Instead, he argued that no further collective memorialization of the Holocaust was necessary. There was no such thing, he said, as collective or shared conscience at all: conscience should be a private matter. Critics and intellectuals, whom he disparaged as “preachers,” were “instrumentalizing” and “vulgarizing” memory when they exhorted the public constantly to reflect on the crimes of the Nazi period. “There is probably such a thing as the banality of good,” Walser quipped, echoing Hannah Arendt (2015, 513). He did not spell out the ends for which he thought these “preachers” aimed to instrumentalize German guilt. He concluded by abruptly calling on the federal president, Roman Herzog, who was in attendance, to free the former East German spy, Rainer Rupp, from prison. Walser’s speech received a standing ovation—though not, notably, from Ignatz Bubis, then the president of the Central Council of Jews in Germany, who was also in attendance. The next day, in the Frankfurter Allgemeine Zeitung, Bubis called the speech an act of “intellectual arson” (geistige Brandstiftung). The controversy that followed generated enormous debate among German intellectuals and in the German and international media (Cohen 1998). Two months later, the offices of the Frankfurter Allgemeine Zeitung hosted a formal debate between the two men. It lasted for four hours. FAZ published a transcript of their conversation in a special supplement (Walser and Bubis 1999).

    In February and March 1999, Karola Brede delivered two lectures about the controversy at Harvard University, which she subsequently published in Psyche (2000, 203-33). Brede examined both the text of Walser’s original speech and the transcript of his debate with Bubis in order to determine, first, why Walser’s speech had been received so enthusiastically, and second, whether Walser, despite eschewing explicitly antisemitic language, had in fact “taken the side of anti-Semites.” In order to explain why Walser’s speech had attracted so much attention, Brede carried out a close textual analysis. She found that, although Walser had not presented a very cogent argument, he had successfully staged a “relieving rhetoric” (Entlastungsrhetorik) that freed his audience from the sense of awkwardness or self-consciousness that they felt talking about Auschwitz in public and replaced these negative feelings with a positive sense of heightened self-regard. Brede argued that Walser used jargon, in the sense of Adorno’s “jargon of authenticity,” in order to flatter listeners into thinking that they were taking part in a daring intellectual exercise, while in fact activating anti-intellectual feelings. (In a footnote she recommended an “unpublished paper” by Karp, presumably from his dissertation, for further reading; Brede 2000, 215.) She concluded that Walser had indeed taken the side of antisemites because, in both his speech and his subsequent debate with Bubis, he constructed a point of identification for listeners (“we Germans”) that systematically excluded German Jews (203). By organizing his speech entirely around “perpetrators” and the “critics” who shamed them, Walser elided the perspective of the Nazis’ victims. Invoking Simmel’s essay on “The Stranger” again, Brede argued that Walser’s behavior during his debate with Bubis offered a model of how unconscious aggression could drive social integration through exclusion. Regardless of what Walser said he felt, to the extent that his rhetoric excluded Bubis, as a Jew, from his definition of “we Germans,” his conduct had been antisemitic.

    In the final chapter of his dissertation, Karp also offers a reading of Walser’s prize acceptance speech, arguing that Walser made use of jargon in Adorno’s sense. Like Brede, Karp bases his argument on close textual analysis. He catalogs several specific literary strategies that, he says, enabled Walser to appeal to the unconscious or repressed emotions of his listeners without having to convince them. First, Karp tracks how Walser played with pronouns in the opening movement of the speech in order to eliminate distance and create identification between himself and his audience. Walser shifted from describing himself in the third person singular (the “one who had been chosen” for the prize) to the first-person plural (“we Germans”). At the same time, by making vague references to intellectuals who had made public remembrance and guilt compulsory, Walser created the sensation that he and the listeners he had invited to identify with his position (“we”) were only responding to attacks from outside—that “we” were the real victims. (In her article, Brede had quipped that this narrative of victimhood “could have come from a B-movie Western”; Brede 2000, 214.) Through this technique, Karp writes, Walser created the impression that if “we” were to strike back against the “Holocaust preachers,” this would only be an act of self-defense.

    Karp stresses that the content of “Experiences Composing a Sermon” was less important than the effect that these rhetorical gestures had of making listeners feel that they belonged to Walser’s side. In the controversy that followed Walser’s acceptance speech, critics often asked which “intellectuals” he had meant to criticize; these critics, Karp says, missed the point. It was not the content of the speech, but its form, that mattered. It was through form that Walser had identified and addressed the psychological needs of his audience. That form did not aim to convince listeners; it did not need to. It simply appealed to (repressed) emotions that they were already experiencing.

    For Adorno, the anti-political or fascist character of jargon was directly tied to the non-dialectical concept of language that jargon advanced. By eliminating abstraction from philosophical language, and detaching selected words from the flow of thought, jargon made absent things seem present. By using such language, existentialism attempted to construct an illusion that the subject could form itself outside of history. By raising historically contingent experiences of domination to defining features of the human, jargon presented them as unchangeable. And by identifying humanity itself with those experiences, it identified the subject with domination.

    Karp does not demonstrate that Walser’s “jargon” performed any of these functions, precisely. Rather, he focuses on the psychodynamics motivating Walser’s speech. Karp proposes that the pain (Leiden) that the speech expressed resembled the “domination” (Zwang) that Adorno recognized in jargon. While Adorno’s jargon made the absent or abstract seem present, through an act of linguistic fetishization, Walser’s jargon embodied the obverse impulse: to wish away the discomfort created by the presence of history’s victims.

    Karp is less concerned with the history of domination, that is, than with Freudian drives. For Adorno, the purpose of carrying out a determinate negation of jargon was to create the conditions of possibility for critical theory to address the real needs to which jargon constituted a false response. For Karp, the interest of the project is more technical: his goal is to uncover forms and patterns of speech that admit aggression into social life and give it a central role in consolidating identity. By combining culturally legitimated expressions with taboo ones, Karp argues, Walser created an environment in which his controversial opinion could be accepted as “obvious” or “self-evident” (selbstverständlich) by his audience. That is, Walser created a linguistic form through which aggression could be integrated into the life-world.

    Unlike Adorno (or Brede), Karp refrains from making any normative assessment of this achievement. His “systematization” of the concept of jargon empties that concept of the critical force that Adorno meant for it to carry. If anything, the tone of the final pages of Aggression in the Life-World is forgiving. Karp concludes by arguing that Walser was not necessarily aware of the meaning of his speech—indeed, that he probably was not. By allowing his audience to express their taboo wishes to be done with Holocaust remembrance, Karp writes, Walser convinced them that “these taboos should never have existed.” Then he cuts to his bibliography.

    Grand Hotel California Abyss

    The abruptness of the ending of Aggression in the Life-World is difficult to interpret. At one level, Karp’s apparent lack of interest in the ethical and political implications of his case study reflects his stated goals and methods. From the beginning, he has set out to reveal that the social is constituted through acts of unconscious aggression, and that this aggression becomes legible in specific linguistic interactions, rather than to evaluate the effects of aggression itself. Reading Walser, Karp explicitly privileges form over content, treating the former as symptomatic of unstated meanings and effects. Granting the critic authority over the text he is analyzing, such an approach presumes the author under analysis to be ignorant, if not innocent, of what he really has at stake; it treats conscious attitudes and overt arguments as holding, at most, a secondary interest. At another level, the banal explanations for Karp’s tone and brevity may be the most plausible. He was writing in a non-native language; like many graduate students, he may have finished in haste.[9] In any case, his decision to eschew the kinds of judgments made by both his subject, Adorno, and his mentor, Brede, is striking—all the more so because Karp is descended from German Jews and “grew up in a Jewish family” (Karp 2019a). This choice reflects a different mode of engagement with critical theory than scholars of either digital media or digitally mediated right-wing movements have observed.

    Historians have shown that the Frankfurt School critiques of mass media helped shape the idea that digital media could constitute a more democratic alternative. Fred Turner has argued that the research Adorno conducted on the role of radio and cinema in shaping the authoritarian personality, as well as the proximity of Frankfurt School scholars to the Bauhaus and other practicing artists, generated a set of beliefs about the democratic character of interactivity (Turner 2013). Orit Halpern is more critical of the essentially liberal assumptions of media and technology critique in which she, too, places Adorno (2015, 18-19). However, like Turner, Halpern identifies the emergence of interactivity as a key epistemic shift away from the Frankfurt School paradigm that opposed “attention” and “distraction.”  Cybernetics redefined the problem of “spectatorship” by transforming the spectator from an individual into a site of perceptions and cognitions—an “interface or infrastructure for information processing.” Where radio, cinema, and television had promoted conformity and passivity, cybernetic media promised to facilitate individual choice and free expression (2015, 224-6).

    More recently, critics and scholars attempting to account for the phobic fascination that new right-wing movements show for “cultural marxism” have analyzed it in a variety of ways. The least sophisticated take at face value the claims of “alt-right” figures that they are only reacting to the ludicrous and pernicious excesses of their opponents.[10] More substantial interpretations have described the far right fixation on the Frankfurt School as a “dialectic of counter-Enlightenment” or form of “inverted appropriation.” Martin Jay (2011) and Andreas Huyssen (2017, 2019) both argue that the attraction of critical theory for the right lies in the dynamics of projection and disavowed recognition that it sets in motion. As Huyssen puts it, “wider circles of American white supremacists and their publications… have been drawn to critique and deconstruction because, on those traditions, they project their own destructive and nihilistic tendencies” (2017).

    Aggression in the Life World does none of these things. Karp’s dissertation does not take up the critiques of mass media or the authoritarian personality that were canonized in the Anglo-American world at all, much less use them to develop democratic alternatives. Nor does it project its own penchant for destruction onto its subjects. In contrast with the “lunatic fringe” (Jay 2011, 30), Karp does not carry out an “inverted appropriation” of critical theory, so much as a partial one. He adapts Frankfurt School concepts for technical purposes, making them more instrumentally useful to the disciplines of sociology or social psychology by abstracting them from their contexts. In the process, he also abandons the Frankfurt School commitment to emancipation. It is at this level of abstraction that his neo-Marxism—from which Marx and materialism have all but disappeared—can coexist with the nationalism that he and Thiel invoke to defend Palantir.

    I asked at the beginning of this paper what beliefs Karp shares with Peter Thiel and what their common commitments might reveal about the self-consciously “contrarian” or “heterodox” network of actors that they inhabit. One answer that Aggression in the Life World makes evident is that both men regard the desire to commit violence as a constant, founding fact of human life. Both also believe that this drive expresses itself in social forms like language or group structure, even if speakers or group members remain unaware of their own motivations. These are ideas that Thiel attributes to the work of the eclectic French theorist René Girard, with whom he studied at Stanford, and whose theories of mimetic desire, scapegoating, and herd mentality he has often cited. In 2006 Thiel’s nonprofit foundation established an institute to promote the study of Girard and support the further development of mimetic theory; this organization, Imitatio, remains one of the foundation’s three major projects (Daub 2020, 97-112).

    The text that Karp chose to analyze, as his case study, also shares a set of concerns with Thiel’s writings and statements against campus multiculturalism and political correctness; Walser’s speech became a touchstone of debates about historical memory in Germany, in which the newly imported Americanism politische Korrektheit circulated widely. In his dissertation, Karp does not celebrate Walser’s taboo speech in the same way that Thiel and his associates have sometimes celebrated violations of speech norms.[11] However, he does assert that jargon, and the unconscious aggression that it expresses, plays a role in the formation of all social groups, and refrains from evaluating whether Walser’s jargon was particularly problematic. Of course, the term “jargon” itself became a commonplace during the U.S. culture wars in the 1980s and 1990s, used to accuse academics and university administrators who purported to be speaking for vulnerable populations of in fact deploying obscure terms to aggrandize themselves. Thiel and his co-author David O. Sacks devote a chapter of The Diversity Myth to an account of how the vagueness of the word “multiculturalism” enabled activists and administrators at Stanford to use it in this manner (1995, 23-49). The idea that such terms express ressentiment and a will to power is consistent with the theoretical framework that Karp went on to develop.

    Ironically, by attempting to purge jargon of its subjective or impressionistic content, Karp renders it less materially objective. Rather than locating jargon in specific experiences of modernity, he transforms it into an expression of drives that, because they are timeless, are merely psychological. Karp makes a version of the eternalizing move that Adorno criticizes in Heidegger, in other words. Rather than elevating precarity into the essence of the human, Karp makes aggressive violence the substance of the social. In the process, he empties the concept of jargon of its critical power. When he arrives at the end of Walser’s speech, a speech that Karp characterizes as consolidating community based on unspeakable aggression, he can conclude only that it was effective.

    A still greater irony in retrospect may be how, in Karp’s telling, Adorno’s jargon anticipates the software tools Palantir would develop. By tracing the rhetorical patterns that constitute jargon in literary language, Karp argues that he can reveal otherwise hidden identities and affinities—and the drive to commit violence that lies latent in them. By looking back to Adorno, he points toward a possible critique of big data analytics as a kind of authenticity jargon: that is, a way of generating and eternalizing false forms of selfhood. In data analysis, the role of the analyst is not to demystify and dispel reification. On the contrary, it is precisely to fix identity from its digital traces and to make predictions on the basis of the same. For Adorno, jargon is a form of language that seems to authenticate identity—but only seems to. The identities it makes available to the subject are based on an illusion that jargon sustains by suppressing the self-difference that historicity introduces into language. The illusion it offers is of timeless “human” experience. It covers for domination insofar as it makes the human condition—or rather, human conditions as they are at the time of speaking—appear unchangeable.

    Big data analytics could be said to constitute an authenticity jargon in this sense: although they treat the data set under analysis as having something like an unconscious, they eliminate the temporal gaps and spaces of ambiguity that drive psychoanalytic interpretation. In place of interpretation, data analytics substitutes correlations that it treats simply as given. To a machine learning algorithm that has been trained on data sets that include zip codes and rates of defaulting on mortgage payments, for instance, it does not matter why mortgagees in a given zip code may have been more likely to default in the past. Nor will the algorithm that recommends rejecting a loan application necessarily explain that the zip code was the deciding factor. Like the existentialist’s illusion of immediate experience, these procedures generate an aura of incontestable self-evidence.
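    To make that example concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn and synthetic data invented purely for illustration; it is not Palantir’s software or any system discussed above) of how such a model can learn the historical correlation between zip code and default and then return a recommendation without ever stating that the zip code did the deciding:

        # Hypothetical illustration only: a toy loan-screening model trained on
        # synthetic data in which defaults are concentrated, by construction, in one
        # (invented) zip code. The fitted model returns a verdict and a probability,
        # but no account of why the zip code became the deciding factor.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.preprocessing import OneHotEncoder

        rng = np.random.default_rng(0)
        zips = rng.choice(["10001", "94105"], size=500)   # invented zip codes
        income = rng.normal(60, 15, size=500)             # income in thousands
        defaulted = ((zips == "10001") & (rng.random(500) < 0.6)).astype(int)

        # One-hot encode the categorical zip code and append income as a feature.
        enc = OneHotEncoder()
        X = np.hstack([enc.fit_transform(zips.reshape(-1, 1)).toarray(),
                       income.reshape(-1, 1)])
        model = LogisticRegression(max_iter=1000).fit(X, defaulted)

        # A new applicant from the historically "risky" zip code, healthy income.
        applicant = np.hstack([enc.transform([["10001"]]).toarray(), [[75.0]]])
        print(model.predict(applicant))        # e.g. [1]: recommend rejection
        print(model.predict_proba(applicant))  # a score, with no reason attached

    Whatever produced the historical pattern, the model simply projects it forward; the prediction arrives with an aura of self-evidence and none of the history.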

    As in Adorno, here, the loss of particular contexts can serve to conceal, and thus perpetuate, domination. Algorithms take the histories of oppression embedded in training data and project them into the future, via predictions that powerful institutions then act on. If the identities constituted in this way are false, the reifications they generate do real work, and can cause real harm. And yet, to read these figures historically is to recognize that they need not come true. This is not an interpretive path that Karp pursues. But for those of us concerned about the relationship between digital technologies and justice, this repressed insight of his dissertation is the most critical to follow.

    _____

    Moira Weigel is a Junior Fellow at the Harvard Society of Fellows and an editor and cofounder of Logic Magazine. She received her PhD from the combined program in Comparative Literature and Film and Media Studies at Yale University in 2017.


    _____

    Notes

    [1] Translations from German are mine unless otherwise noted.

    [2] In 2017, when activists doxxed the founder of the neofascist blog the Right Stuff and the antisemitic podcasts Fash the Nation and The Daily Shoah, who went by the alias Mike Enoch, they revealed that he was in fact a programmer named Michael Peinovich (Marantz 2019, 275-9). Curtis Yarvin, who wrote a widely read blog advocating the end of democracy under the name Mencius Moldbug, also worked as a software engineer (Gray 2017). Several journalists have documented the interest that figures in or adjacent to the tech industry evince in Yarvin’s Neoreaction (NRx) or Dark Enlightenment (Gray 2017; Goldhill 2017). Prominent white nationalist media entrepreneurs also claim to have substantial followings in the tech industry. In 2017, Andrew Anglin told a Mother Jones reporter that Santa Clara County was the highest source of inbound traffic to his website, The Daily Stormer; Chuck Johnson said the same about his (now defunct) website Got News (Harkinson 2017). In response to an interview question about his “average” supporter, the white nationalist Richard Spencer claimed that, “many in the Alt-Right are tech savvy or actually tech professionals” (Hawley 2017, 78).

    [3] James Damore, the engineer who wrote the July 2017 memo, “Google’s Ideological Echo Chamber,” and was subsequently fired, toured the right-wing speaking circuit (Tiku 2019, 85-7). Brian Amerige, the Facebook engineer who identified himself to the New York Times in July 2018 as the creator of a conservative group on Facebook’s internal forum, Workplace, and then left the company, did the same (Conger and Frankel 2018). Shortly after, it was reported that Oculus cofounder Palmer Luckey’s departure from the company in 2017 had also been driven by conflicts with management over his support of Donald Trump (Grind and Hagey 2018); Luckey has since publicly claimed to speak on behalf of a silent majority of “tech conservatives” (Luckey 2018). Arne Wilberg, a long-time recruiter of technical employees for Google and YouTube, filed a reverse discrimination suit in 2018, alleging that he had been fired for “opposing illegal hiring practices… systematically discriminating in favor of job applicants who are Hispanic, African American, or female, against Caucasian and Asian men” (Wilberg v. Google 2018). Most recently, in August 2019, The Wall Street Journal reported that the former Google engineer Kevin Cernekee had been fired in 2017 in retaliation for expressing “conservative” viewpoints on internal listservs (Copeland 2019). Former colleagues subsequently published screenshots showing that, among other things, Cernekee had proposed raising money for a bounty for finding the masked protestor who punched Richard Spencer at the Presidential inauguration in 2017 using WeSearchr, the now-defunct fundraising platform run by Holocaust “revisionist” Chuck C. Johnson. They also shared screenshots showing that Cernekee had defended two neo-Nazi organizations, The Traditionalist Workers Party and Golden State Skinheads, suggesting that they should “rename themselves to something normie-compatible like ‘The Helpful Neighborhood Bald Guys’ or the ‘Open Society Institute’” (Wacker 2019; Tiku 2019, 84). Like Damore, Amerige, and Wilberg, Cernekee received national media coverage.

    [4] For instance, emails that BuzzFeed reporter Joe Bernstein obtained from Breitbart.com stated that Thiel invited Curtis Yarvin to watch the 2016 election results at his home in Hollywood Hills, where he had previously hosted Breitbart tech editor Milo Yiannopoulos; New Yorker writer Andrew Marantz reported running into Thiel at the “DeploraBall” that took place on the eve of Trump’s inauguration (2019, 47-9).

    [5] Thiel supported Hawley’s campaign for Attorney General of Missouri in 2016 (Center for Responsive Politics); in that office, Hawley initiated an antitrust investigation of Google (Dave 2017) and a probe into Facebook’s exploitation of user data (Allen 2018). Thiel later donated to Hawley’s 2018 Senate campaign (Center for Responsive Politics); in the Senate, Hawley has sponsored multiple bills to regulate tech platforms (US Senate 2019a, 2019b, 2019c, 2019d, 2019e, 2019f, 2019g). These activities earned him praise from Trump at a White House Social Media Summit on the theme of liberal bias at tech companies, where Hawley also spoke (Trump 2019a).

    [6] Pat Buchanan devoted a chapter to the subject, entitled “The Frankfurt School Comes to America,” in his 2001 Death of the West. Breitbart editor Michael Walsh published an entire book about critical theory, in which he described it as “the very essence of Satanism” (Walsh 2016, 50). Andrew Breitbart himself devoted a chapter to it in his memoir (Breitbart 2011, 113). Jordan Peterson more often rails against “postmodernism,” or “political correctness.” However, he too regularly refers to “Cultural Marxism”; at the time of writing, an explainer video that he produced for the pro-Trump Epoch Times has tallied nearly 750,000 views on YouTube (Peterson 2017).

    [7] The memo that engineer James Damore circulated to his colleagues at Google presented a version of the Cultural Marxism conspiracy in its endnotes, as fact. “As it became clear that the working class of the liberal democracies wasn’t going to overthrow their ‘capitalist oppressors,’” Damore wrote, “the Marxist intellectuals transitioned from class warfare to gender and race politics” (Conger 2017). The group that Brian Amerige started on Facebook Workplace was called “Resisting Cultural Marxism” (Conger and Frankel 2018).

    [8] The Stanford Review, which Thiel founded late in his sophomore year and edited throughout his junior and senior years at the university, devoted extensive attention to questions of speech on Stanford’s campus, which became a focal point of the US culture wars and drew international media attention when the academic senate voted to (slightly) revise its core curriculum in 1988 (see Hartman 2019, 227-30). In 1995, with fellow Stanford alumnus (and later PayPal Chief Operating Officer) David O. Sacks, Thiel published The Diversity Myth, a critique of the “debilitating” effects of “political correctness” on college campuses that, among other things, compared multicultural campus activists to “the bar scene from Star Wars” (xix). In 2018 he moved to Los Angeles, saying that political correctness in San Francisco had become unbearable (Peltz and Pierson 2018; Solon 2018) and in 2019 Founders Fund, the venture capital firm where he is a partner, announced that they would be sponsoring a conference to promote “thoughtcrime” (Founders Fund 2019).

    [9] Aggression in the Life World is significantly shorter than either of the other two dissertations submitted to the sociology department at Frankfurt that year: Margaret Ann Griesese’s The Brazilian Women’s Movement Against Violence clocked in at 314 pages, and Konstantinos Tsapakidis’s Collective Memory and Cultures of Resistance in Ancient Greek Music at 267; Karp’s is 129.

    [10] Angela Nagle (2017) put forth an extreme version of this argument: that the excesses of “social justice warrior” identity politics provoked the formation of the alt-right, and that trolls like Milo Yiannopoulos were only replicating tactics of “transgression” that had been pioneered by leftist intellectuals like bell hooks and institutionalized on liberal campuses and in liberal media. Kakutani similarly argued that the Trumpist right was simply taking up tactics that the relativism of “postmodernism” had pioneered in the 1960s (2018, 18).

    [11] In The Diversity Myth Sacks and Thiel describe one instance of resistance to the Stanford speech code, which was adopted in May 1990 and revoked in March 1995, as heroic. The incident took place on the night of January 19, 1992, when three members of the Alpha Epsilon Pi fraternity, Michael Ehrman, Keith Rabois, and Bret Scher, were walking home from a party through one of Stanford’s residential dormitories. Rabois, then a first-year law student, began shouting slurs at the home of a resident tutor in the dormitory, who had been involved in the expulsion of Ehrman’s brother Ken from residential housing four years earlier, after Ken called the resident tutor assigned to him a “faggot.” “Faggot! Hope you die of AIDS!” Rabois shouted. “Can’t wait until you die, faggot.” He later confirmed and defended these statements in a letter to the Stanford Daily. “Admittedly, the comments made were not very articulate, nor very intellectual nor profound,” he wrote. “The intention was for the speech to be outrageous enough to provoke a thought of ‘Wow, if he can say that, I guess I can say a little more than I thought.’” The speech code, which had not until that point been used to punish any student, was not used to punish Rabois; however, Thiel and Sacks describe the criticism of Rabois from administrators and fellow students that followed as a “witch hunt” (1995, 162-75). Rabois subsequently transferred to Harvard but went on to work with Thiel at PayPal and, later, as a partner at Founders Fund. More recently, the blog post that Founders Fund published to announce the Hereticon conference cited in Footnote 8 described violating taboos on speech as its goal: “Imagine a conference for people banned from other conferences. Imagine a safe space for people who don’t feel safe in safe spaces. Over three nights we’ll feature many of our culture’s most important troublemakers in the fields of knowledge necessary to the progressive improvement of our civilization” (2019).

    _____

    Works Cited

  • Sareeta Amrute — Sounding the Flat Alarm (Review of Shoshana Zuboff, The Age of Surveillance Capitalism)

    Sareeta Amrute — Sounding the Flat Alarm (Review of Shoshana Zuboff, The Age of Surveillance Capitalism)

    a review of Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019)

    by Sareeta Amrute

    Shoshana Zuboff’s The Age of Surveillance Capitalism begins badly: the author’s house burns down. When her home is struck by lightning, it takes Zuboff a few minutes to realize the enormity of the conflagration happening all around her and to escape. The book, written after the fire goes out, is a warning about the enormity of the changes kindled while we slept. Zuboff describes a world in which autonomy, agency, and privacy–the walls of her house–are under threat from a corporate apparatus that records everything in order to control behavior. That act of monitoring and recording inaugurates a new era in the development of capitalism that Zuboff believes is destructive of both individual liberty and democratic institutions.

    Surveillance Capitalism is the alarm to all of us to get out of the house, lest it burn down all around us. In making this warning, however, Zuboff discounts the long history of surveillance outside the middle-class enclaves of Europe and the United States and assumes that protecting the privacy of individuals in that same location will solve the problem of surveillance for the Rest.

    The house functions as a metaphor throughout the book. It serves, first, as a warning about how difficult it is to recognize a radical remaking of our world as it is happening: this change is akin to a lightning strike. Second, it is an indicator of the kind of world we inhabit: a world that could be enhancing of life but instead treats life as a resource to be extracted. Third, the idea of the house as protection is used to solve the other two problems.

    Zuboff contrasts an early moment of the digitally connected world, an internet of things that ran on a closed circuit within one house, with the current moment, in which the same devices are wired to the companies that make them. For Zuboff, that difference demonstrates the exponential changes that happened in between the early promise of the internet and its current malformation. Surveillance Capitalism argues that from the connective potential of the early Internet has come the current dystopian state of affairs, where human behavior is monitored by companies in order to nudge that behavior toward predetermined ends. In this way, Surveillance Capitalism reverses an earlier moment of connectivity boosterism, exemplified by the title of Thomas Friedman’s popular 2005 book, The World is Flat, which celebrated technologically-produced globalization.[1] The years from the mid to late 2000s witnessed a significant critique of the flat world hypothesis, which could be summed up as an argument for both the vast unevenness of the world, and for the continuous remaking of global tropes into local and varied meanings. Yet here we are again, it seems, in 2020, except instead of celebrating flatness, we are sounding the flat alarm.

    The book’s very dimensions–it is a doorstop, on purpose–act as an inoculation against the thinness and flatness Zuboff diagnoses as predominant features of our world. Zuboff argues that these features are unprecedented, that they mark an extreme deviation from capitalism as it has been. They therefore require both a new name and new analytic tools. The name “surveillance capitalism” describes information-gathering enterprises that are unprecedented in human history, and that information, Zuboff writes, is used to predict “our futures for the sake of others’ gain, not ours” (11). As tech companies increasingly use our data to steer behavior towards products and advertising, our ability to experience a deep interiority where we can exercise autonomous choice shrinks. Importantly for Zuboff, these companies collect not just data willingly given, but the data exhaust that we often unknowingly and unintentionally emit as we move through a world mediated by our devices. Behavioral nudges mark for Zuboff the ultimate endpoint of a capitalism gone awry, a capitalism that drives humans to abandon free will in favor of being governed by corporations that use aggregate data about individual interactions to determine future human action.

    Zuboff’s flat alarm usefully takes the reader through the philosophical underpinnings of behaviorism, following the work of B.F. Skinner, a psychologist working at Harvard in the mid-twentieth century who believed adjusting human behavior was a matter of changing external environments through positive and negative stimuli, or reinforcements. Zuboff argues that behaviorist attitudes toward the world, considered outré in their time, have moved to the heart of Silicon Valley philosophies of disruption, where they meet a particular mode of capital accumulation driven by the logics of venture, neutrality, and macho meritocracies. The result is a kind of ideology of tools and of making humans into tools, which Zuboff terms instrumentarianism, at once driven to produce companies that are profitable for venture capitalists and investors and to treat human beings as sources of data to be turned toward profitability. Widespread surveillance is a necessary feature of this new world order because it is through the observation of every detail of human life that these companies can amass the data they need to turn a profit by predicting and ultimately controlling, or tuning, human behavior.

    Zuboff identifies key figures in the development of surveillance capitalism, including the aforementioned Skinner. Her particular mode of critique tends to focus on CEOs, and Zuboff reads their pronouncements as signs of the legacy of behaviorism in the C-suites of contemporary firms. Zuboff also spends several chapters situating the critics of these surveillance capitalists as those who need to raise the flat world alarm. She compares this need to both her personal experience with the house fire and the experience of thinkers such as Hannah Arendt writing on totalitarianism. Here, she draws an explicit comparison that conjoins totalitarianism and surveillance capital. Zuboff argues that just as totalitarianism was unthinkable as it was unfolding, so too does surveillance capitalism seem an impossible future given how we like to think about human behavior and its governance. Zuboff’s argument here is highly persuasive, since she is suggesting that the critics will always come to realize what it is they are critiquing just before it is too late to do anything about it. She also argues that behaviorism is in some sense the inverse of state-governed totalitarianism, since while totalitarianism attempted to discipline humans from the inside out, surveillance capitalism is agnostic when it comes to interiority–it only deals in and tries to engineer surface effects. For all this ‘neutrality’ over and against belief, it is equally oppressive, because it aims at social domination.

    Previous reviews have provided an overview of the chapters in this book; I will not repeat the exercise, except to say that the introduction nicely lays out her overall argument and could be used effectively to broach the topic of surveillance for many audiences. The chapters outlining B.F. Skinner’s imprint on behaviorist ideologies are also useful to provide historical context to the current age, as is the general story of Google’s turn toward profitability as told in Part I. And yet the promise of these earlier chapters–particularly the nice turn of phrase, the “behavioral means of production”–yields in the latter chapters to an impoverished account of our options and of the contradictions at work within tech companies. These lacunae are due at least in part to Zuboff’s choice of revolutionary subject–the middle-class consumer.

    Toward the end of Surveillance Capitalism, Zuboff rebuilds her house, this time with thicker walls. She uses her house’s regeneration to argue for a philosophical concept she calls the “right to sanctuary,” based largely on the writings of Gaston Bachelard, whose Poetics of Space describes for Zuboff how the shelter of home shapes “many of our most fundamental ways of making sense of experience” (477). Zuboff believes that surveillance capitalists want to bring down all these walls, for the sake of opening up our every action to collection and our every impulse to guidance from above. One might pause here and wonder whether the breaking down of walls is not fundamental to capitalism from the beginning, rather than an aberration of the current age. In other words, does the age of surveillance mark such a radical break from the general thrust of capital’s need to open up new markets and exploit new raw materials? Or, more to the point, for whom does it signify a radical aberration? Posing this question would bring into focus the need to interrogate the complicity of the very categories of autonomy, agency, and privacy in the extension of capitalism across geographies, and to historicize the production of interiority within that same frame.

    Against the contemporary tendency toward effacing the interior life of families and individuals, Zuboff offers sanctuary as the right to protection from surveillance. In this moment, that protection needs thick walls. For Zuboff, those walls need to be built by young people–one gets the sense that she is speaking across these sections to her own children and those of her children’s generation. The problem with describing sanctuary in this way is that it narrows the scope for both understanding the stakes of surveillance and recognizing where the battles for control over data will be fought.

    As a broadside, Surveillance Capitalism works through a combination of rhetoric and evidence. Zuboff hopes that a younger generation will fight the watchers for control over their own data. Yet, by addressing largely a well-off, college-educated, and young audience, Zuboff restricts the people who are being asked to take up the cause, and fails to ask the difficult question of what it would take to build a house with thicker walls for everyone.

    A persistent concern while reading this book is whether its analysis can encompass otherwheres. The populations that are most at risk under surveillance capitalism include immigrants, minorities, and workers, both within and outside the United States. The framework of data exhaust and its use to predict and govern behavior does not quite illuminate the uses of data collection to track border crossers, “predict” crime, and monitor worker movements inside warehouses. These relationships require an analysis that can get at the overlap between corporate and government surveillance, which Surveillance Capitalism studiously avoids. The book begins with an analysis of a system of exploitation based on turning data into profits, and argues that the new mode of production makes the motor of capitalism shift from products to information, a point well established by previous literature. Given this analysis, it is astonishing that the last section of the book returns to a defense of individual rights, without stopping to question whether the ‘hive’ forms of organization that Zuboff finds in the logics of surveillance capital may have been a cooptation of radical kinds of social organizing arranged against a different model of exploitation. Leaderless movements like Occupy should be considered fully when describing hives, along with contemporary initiatives like tech worker cooperatives and technical alternatives like local mesh networks. The possibility that these radical forms of social organization may be subject to cooptation by the actors Zuboff describes never appears in the book. Instead, Zuboff appears to mistranslate theories of the subject that locate agency above or below the level of the individual into political acquiescence to a program of total social control. Without taking the step of considering the political potential in ‘hive-like’ social organization, Zuboff’s corrective falls back on notions of individual rights and protections and is unable to imagine a new kind of collective action that moves beyond both individualism and behaviorism. This failure, for instance, skews Zuboff’s arguments toward the familiar ground of data protection as a solution rather than toward the more radical stances of refusal, which question data collection in the first place.

    Zuboff’s world is flat. It is a world in which there are Big Others that suck up an undifferentiated public’s data, Others whose objective is to mold our behavior and steal our free will. In this version of flatness, what was once described positively is now described negatively, as if we had collectively turned a rosy-colored smooth world flat black. Yet, how collective is this experience? How will it play out if the solutions we provide rely on bracketing out the question of what kinds of people and communities are afforded the chance to build thicker walls? This calls forth a deeper issue than simply that of a lack of inclusion of other voices in Zuboff’s account. After all, perhaps fixing the surveillance issue through the kinds of rights to sanctuary that Zuboff suggests would also fix the issue for those who are not usually conceived of as mainstream consumers.

    Except, historical examples ranging from Simone Browne’s explication of surveillance and slavery in Dark Matters to Achille Mbembe’s articulation of necropolitics teach us that consumer protection is a thin filament on which to hang protection for all from overweening surveillance apparatuses–corporate or otherwise. One could easily imagine a world where the privacy rights of well-heeled Americans are protected, but those of others continue to be violated. To reference one pertinent example, companies who are banking on monetizing data through a contractual relationship where individuals sell the data that they themselves own are simultaneously banking on those who need to sell their data to make money. In other words, as legal scholar Stacy-Ann Elvy notes (2017), in a personal data economy low-income consumers will be incentivized to sell their data without much concern for the conditions of sale, even while those who are well-off will have the means to avoid these incentives, resulting in the illusion of individual control and uneven access to privacy determined by degrees of socioeconomic vulnerability. These individuals will also be exposed to a greater degree of risk that their information will not stay secure.

    Simone Browne demonstrates that what we understand as surveillance was developed on and through black bodies, and that these populations of slaves and ex-slaves have developed strategies of avoiding detection, which she calls dark sousveillance. As Browne notes, “routing the study of contemporary surveillance” through the histories of “black enslavement and captivity opens up the possibility for fugitive acts of escape” even while it shows that the normative surveillance of white bodies was built on long histories of experimentations with black bodies (Browne 2015, 164). Achille Mbembe’s scholarship on necropolitics was developed through the insight that some life becomes killable, or in Jasbir Puar’s (2017) memorable phrasing, maimable, at the same time that other life is propagated. Mbembe proposes “necropolitics” to describe “death worlds” where “death,” not life, “is the space where freedom and negotiation happen,” and where “vast populations are subjected to conditions of life conferring on them the status of living dead” (Mbembe 2003, 40). The right to sanctuary appears to short-circuit the spaces where life has already been configured as available for expropriation through perpetual wounding. Crucial to both Browne and Mbembe’s arguments is the insight that the study of the uneven harms of surveillance concomitantly surfaces the tactics of opposition and the archives of the world that provide alternative models of refuge outside the contractual property relationship evoked across the pages of Surveillance Capitalism.

    All those considered outside the ambit of individualized rights, including those in territories marked by extrajudicial measures, those deemed illegal, those perennially under threat, those who while at work are unprotected, those in unseen workplaces, and those simply unable to exercise rights to privacy due to law or circumstance, have little place in Zuboff’s analysis. One only has to think of Kashmir, and the access that people with no ties to this place will now have to building houses there, to begin to grasp the contested politics of home-building.[2] Without an acknowledgement of the limits of both the critique of surveillance capitalism and the agents of its proposed solutions, it seems this otherwise promising book will reach the usual audiences and have the usual effect of shoring up some peoples’ and places’ rights even while making the rest of the world and its populations available for experiments in data appropriation.

    _____

    Sareeta Amrute is Associate Professor of Anthropology at the University of Washington. Her scholarship focuses on contemporary capitalism and ways of working, and particularly on the ways race and class are revisited and remade in sites of new economy work, such as coding and software economies. She is the author of the book Encoding Race, Encoding Class: Indian IT Workers in Berlin (Duke University Press, 2016) and recently published the article “Of Techno-Ethics and Techno-Affects” in Feminist Review.


    _____

    Notes

    [1] Friedman (2005) attributes this phrase to Nandan Nilekani, then Co-Chair of the Indian tech company Infosys (and subsequently Chair of the Unique Identification Authority of India).

    [2] Until 2019, Articles 370 and 35A of the Indian Constitution granted the territories of Jammu and Kashmir special status, which allowed the state to keep on its books laws restricting who could buy land and property in Kashmir by allowing the territories to define who counted as a permanent resident. After the abrogation of Article 370, rumors swirled that the rich from Delhi and elsewhere would now be able to purchase holiday homes in the area. See e.g. Devansh Sharma, “All You Need to Know about Buying Property in Jammu and Kashmir”; Parvaiz Bukhari, “Myth No 1 about Article 370: It Prevents Indians from Buying Land in Kashmir.”

    _____

    Works Cited

    • Browne, Simone. 2015. Dark Matters: On the Surveillance of Blackness. Durham, NC: Duke University Press.
    • Elvy, Stacy-Ann. 2017. “Paying for Privacy and the Personal Data Economy.” Columbia Law Review 117:6 (Oct). 1369-1460.
    • Friedman, Thomas. 2005. The World Is Flat: A Brief History of the Twenty-First Century. New York: Farrar, Straus and Giroux.
    • Mbembe, Achille. 2003. “Necropolitics.” Public Culture 15:1 (Winter). 11-40.
    • Mbembe, Achille. 2019. Necropolitics. Durham, NC: Duke University Press.
    • Puar, Jasbir K. 2017. The Right to Maim: Debility, Capacity, Disability. Durham, NC: Duke University Press.

     

  • David Newhoff — The Harms of Digital Tech and Tech Law (Review of Goldberg, Nobody’s Victim: Fighting Psychos, Stalkers, Pervs, and Trolls)

    David Newhoff — The Harms of Digital Tech and Tech Law (Review of Goldberg, Nobody’s Victim: Fighting Psychos, Stalkers, Pervs, and Trolls)

    a review of Carrie Goldberg (with Jeannine Amber), Nobody’s Victim: Fighting Psychos, Stalkers, Pervs, and Trolls (Plume, 2019)

    by David Newhoff

    ~

    During an exchange on my blog in 2014 with an individual named Anonymous—it must have been a very popular baby name at some point—I was told, “Yes, yes, David, show us on the doll where the Internet touched you, because we all know that all evil comes from there.” That discussion was in the context of the internet industry’s anti-copyright agenda, but the smugness of the response, lurking behind a concealed identity while making an eye-rolling allusion to sexual assault, is characteristic of the tech-bro culture that dismisses any conversation about the darker aspects of digital life. In fact, I am fairly sure it was the same Anonymous who decided that I had “failed the free speech test” because I wrote encouragingly about the prospect of making the conduct generally referred to as “revenge porn” a federal crime.

    Those old exchanges, conducted in the safety of the abstract, came rushing into the foreground while I read attorney Carrie Goldberg’s Nobody’s Victim:  Fighting Psychos, Stalkers, Pervs, and Trolls, because Goldberg and her colleagues do not address conduct like “revenge porn” in the abstract: they deal with it as a tangible and terrifying reality.  It is at her Brooklyn law firm where the victims of that crime (and other forms of harassment and abuse) arrive shattered, frightened and suicidally desperate to escape the hell their lives have become—often with the push of a button.  These are people who can show us exactly how and where the “internet touched” them, and Goldberg’s book is a harrowing tutorial in the various ways online platforms provide opportunity, motive, sanctuary, and even profit for individuals who purposely choose to destroy other human beings.

    Nobody’s Victim reads like an anthology of short thriller/horror stories but for the fact that each of the terrorized protagonists is a real person, and far too many of them are children. These infuriating anecdotes are interwoven with the story of Goldberg’s own transformation from a young woman nearly destroyed by predatory men into, as she puts it, the attorney she needed when she was in trouble. The result is both an inspiring narrative of personal triumph over adversity and a rigorous critique of our inadequate legal framework, which needlessly exacerbates the suffering of people targeted by life-threatening attacks—attacks that were simply not possible before the internet as we know it.

    Covering a lot of ground—from stalking to sextortion—Goldberg tells the stories of her archetypal clients, along with her own jaw-dropping experiences, in a voice that pairs the discipline of a lawyer with the passion of a crusader. “We can be the army to take these motherfuckers down,” her introduction concludes, and “What happened to you matters,” is the mantra of her epilogue.  It is clear that the central message she wants to convey is one of empowerment for the constituency she represents, but the details are chilling to say the least.

    Anyone anywhere can have his or her life torn apart by remote control—i.e. via the web.  All the malefactor really needs is basic computer skills, a little too much time on his hands, and a profoundly broken moral compass.  Psychos, stalkers, pervs, trolls, and assholes are all specific types of criminals in the “Carrie Goldberg Taxonomy of Offenders.”  For instance, the ex-boyfriend who uploads non-consensual intimate images to a revenge-porn site is a psycho, while the site operator, profiting off the misery of others, is an asshole.

    As Goldberg notes in Chapter 6, by the year 2014, there were about 3,000 websites dedicated to hosting revenge porn.  That is a hell of a lot of guys willing to expose their ex-girlfriends to a range of potential trauma—these include public humiliation, job loss, relationship damage, sexual assault, PTSD, and suicide—simply because their partner broke off the relationship.  This volume of men engaging in revenge porn does seem to imply that the existence of the technology itself becomes a motive or rationale for the conduct, but that is perhaps a subject to explore in another post.

    One theme that comes through loud and clear for me in Nobody’s Victim—particularly in the context of the editorial scope of my blog—is that the individual conduct of the psychos, et al., is only slightly less maddening than our systemic failure to protect the victims. As a cyber-policy matter, that means the chronic misinterpretation of Section 230 of the Communications Decency Act as a speech-right protection and a blanket liability shield for online service providers.

    Taking on Section 230

    Goldberg’s most high-profile client, Matthew Herrick, was the target of a disgruntled ex-boyfriend named Juan Carlos Gutierrez, who tried, via the gay dating app Grindr, to get Herrick at least raped, if not murdered.  By creating several Grindr accounts designed to impersonate Herrick, Gutierrez posted invitations to seek him out for rough, “rape-fantasy” sex, including messages that any protests to stop should be taken as “part of the game.”  Hundreds of men swarmed into Herrick’s life for more than a year—appearing at his home and work, often becoming verbally or physically aggressive upon discovering that he was not offering what they were looking for.

    With Goldberg’s help, Herrick succeeded in getting Gutierrez convicted on felony charges, but what they could never obtain was even the most basic form of assistance from Grindr. You might think it would be at least common courtesy for an internet business to remove accounts that falsely claim to be you—particularly when those accounts are being used to facilitate criminal threats to your safety and livelihood. In fact, Scruff, the smaller dating app that Gutierrez had been using, eagerly and sympathetically complied with Herrick’s plea for help. But Grindr told him to fuck off by saying, “There’s nothing we can do.”

    Herrick, through Goldberg, sued Grindr for “negligence, deceptive business practices and false advertising, intentional and negligent infliction of emotional distress, failure to warn, and negligent misrepresentation.” They lost in both the District Court and in the Second Circuit Court of Appeals, principally because most courts continue to read Section 230 of the CDA as absolute immunity for online service providers. This cognitive dissonance, which ignores the fact that a matter like Herrick’s plight is wholly unrelated to free speech, is exemplified in an amicus brief that the Electronic Frontier Foundation (EFF) filed in the Second Circuit appeal on behalf of Grindr:

    Intermediaries allow Internet users to connect easily with family and friends, follow the news, share opinions and personal experiences, create and share art, and debate politics. Appellant’s efforts to circumvent Section 230’s protections undermine Congress’s goal of encouraging open platforms and robust online speech.

    Isn’t that pretty?  But what the fuck has any of it got to do with using internet technologies to impersonate someone; to commit libel, slander, or defamation in his/her name; to deploy violent people (or in some cases SWAT teams) against a private individual; or to get someone fired or arrested—and all for the perpetrator’s amusement, vengeance, or profit?  None of that conduct is remotely protected by the speech right, and all of it—all of it—infringes the speech rights and other civil liberties of the victims.  Perhaps most absurdly, organizations like EFF choose to overlook the fact that the first right being denied to someone in Herrick’s predicament is the right to safely access all those invaluable activities enabled by online “intermediaries.”

    No, Grindr did not commit those crimes, but let’s be real. What was Herrick asking Grindr to do? Remove the conduits through which crimes were being committed against him—online accounts pretending to be him. Scruff complied, and I didn’t feel a tremor in the free speech right, did you? If we truly cannot make a legal distinction between Herrick’s circumstances and all that frilly bullshit the EFF likes to repeat ad nauseam, then we are clearly too stupid to reap the benefits of the internet while mitigating its harms.

    Suffice it to say, a fight over Section 230 is indeed brewing. As it heats up, Silicon Valley will marshal its seemingly endless resources to defend the status quo, and it will carpet-bomb the public with messages that any change to this law will be an existential threat to the internet as we know it. There is some truth to that, of course, but the internet as we know it needs a lot of work. Meanwhile, if anyone is going to win against Big Tech’s juggernaut on this issue, it will be thanks to the leadership of (mostly) women like Carrie Goldberg, her colleagues, and her clients.

    It is an unfortunate axiom that policy rarely changes without some constituency suffering harm for a period of time; and those are exactly the people whose stories Goldberg is in a position to tell—in court, in Congress, and to the public. If you read Nobody’s Victim and still insist, like my friend Anonymous, that this is all a theoretical debate about anomalous cases, largely mooted by the speech right, there’s a pretty good chance you’re an asshole—if not a psycho, stalker, perv, or troll. And that clock you hear ticking is actually the sound of Carrie Goldberg’s signature high heels heading your way.

    _____

    David Newhoff is a filmmaker, writer, and communications consultant, and an activist for artists’ rights, especially as they pertain to the erosion of copyright by digital technology companies. He is writing a book about copyright due out in Fall 2020. He writes about these issues frequently as @illusionofmore on Twitter and on the blog The Illusion of More, on which an earlier version of this review first appeared.


  • Audrey Watters — Education Technology and The Age of Surveillance Capitalism (Review of Shoshana Zuboff, The Age of Surveillance Capitalism)

    Audrey Watters — Education Technology and The Age of Surveillance Capitalism (Review of Shoshana Zuboff, The Age of Surveillance Capitalism)

    a review of Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019)

    by Audrey Watters

    ~

    The future of education is technological. Necessarily so.

    Or that’s what the proponents of ed-tech would want you to believe. In order to prepare students for the future, the practices of teaching and learning – indeed the whole notion of “school” – must embrace tech-centered courseware and curriculum. Education must adopt not only the products but the values of the high tech industry. It must conform to the demands for efficiency, speed, scale.

    To resist technology, therefore, is to undermine students’ opportunities. To resist technology is to deny students their future.

    Or so the story goes.

    Shoshana Zuboff weaves a very different tale in her book The Age of Surveillance Capitalism. Its subtitle, The Fight for a Human Future at the New Frontier of Power, underscores her argument that the acquiescence to new digital technologies is detrimental to our futures. These technologies foreclose rather than foster future possibilities.

    And that sure seems plausible, what with our social media profiles being scrutinized to adjudicate our immigration status, our fitness trackers being monitored to determine our insurance rates, our reading and viewing habits being manipulated by black-box algorithms, our devices listening in and nudging us as the world seems to totter towards totalitarianism.

    We have known for some time now that tech companies extract massive amounts of data from us in order to run (and ostensibly improve) their services. But increasingly, Zuboff contends, these companies are now using our data for much more than that: to shape and modify and predict our behavior – “‘treatments’ or ‘data pellets’ that select good behaviors,” as one ed-tech executive described it to Zuboff. She calls this “behavioral surplus,” a concept that is fundamental to surveillance capitalism, which she argues is a new form of political, economic, and social power that has emerged from the “internet of everything.”

    Zuboff draws in part on the work of B. F. Skinner to make her case – his work on behavioral modification of animals, obviously, but also his larger theories about behavioral and social engineering, best articulated perhaps in his novel Walden Two and in his most controversial book Beyond Freedom and Dignity. By shaping our behaviors – through nudges and rewards, “data pellets,” and the like – technologies circumscribe our ability to make decisions. They impede our “right to the future tense,” Zuboff contends.

    Google and Facebook are paradigmatic here, and Zuboff argues that the former was instrumental in discovering the value of behavioral surplus when it began, circa 2003, using user data to fine-tune ad targeting and to make predictions about which ads users would click on. More clicks, of course, led to more revenue, and behavioral surplus became a new and dominant business model, at first for digital advertisers like Google and Facebook but shortly thereafter for all sorts of companies in all sorts of industries.
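
    The basic mechanism being described here can be made concrete with a deliberately toy sketch. The code below is not Google’s system, whose actual models are proprietary and vastly more elaborate; it is a minimal, hypothetical illustration of the logic of behavioral surplus: logged behavioral signals (every feature, weight, and threshold here is invented for the example) are used to fit a small click-prediction model, which then estimates which users are most likely to click on a given ad.

        # Toy illustration of "behavioral surplus": logged behavior becomes a
        # prediction about future behavior. A hypothetical sketch, not any real
        # ad platform's model.
        import math
        import random

        def sigmoid(z):
            return 1.0 / (1.0 + math.exp(-z))

        def train_click_model(logs, epochs=200, lr=0.1):
            """Fit a tiny logistic regression mapping behavioral features to the
            probability of a click. `logs` is a list of (features, clicked) pairs."""
            n_features = len(logs[0][0])
            weights = [0.0] * n_features
            bias = 0.0
            for _ in range(epochs):
                for features, clicked in logs:
                    pred = sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)
                    error = pred - clicked
                    weights = [w - lr * error * x for w, x in zip(weights, features)]
                    bias -= lr * error
            return weights, bias

        def click_probability(weights, bias, features):
            return sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)

        if __name__ == "__main__":
            random.seed(0)
            # Hypothetical behavioral features logged per ad impression:
            # [past clicks on similar ads, fraction of the day spent on site, topic match (0/1)]
            logs = []
            for _ in range(500):
                past_clicks = random.randint(0, 5)
                time_on_site = random.uniform(0.0, 1.0)
                topic_match = random.choice([0, 1])
                # Simulated "ground truth": heavier engagement history -> more clicks.
                p_true = sigmoid(0.8 * past_clicks + 1.0 * time_on_site + 1.5 * topic_match - 3.5)
                clicked = 1 if random.random() < p_true else 0
                logs.append(([past_clicks, time_on_site, topic_match], clicked))

            weights, bias = train_click_model(logs)
            heavy_trace = [4, 0.9, 1]   # lots of logged behavior, on-topic ad
            light_trace = [0, 0.1, 0]   # very little surplus to draw on
            print("predicted click probability, heavy trace:",
                  round(click_probability(weights, bias, heavy_trace), 2))
            print("predicted click probability, light trace:",
                  round(click_probability(weights, bias, light_trace), 2))

    Even in this crude form, the point is visible: the “surplus,” the accumulated record of past behavior, is what gives the prediction its purchase, and the heavier the behavioral trace, the more confidently a user can be targeted.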

    And that includes ed-tech, of course – most obviously in predictive analytics software that promises to identify struggling students (such as Civitas Learning) and in behavior management software that’s aimed at fostering “a positive school culture” (like ClassDojo).

    Google and Facebook, whose executives are clearly the villains of Zuboff’s book, have keen interests in the education market too. The former is much more overt, no doubt, with its Google Suite product offerings and its ubiquitous corporate evangelism. But the latter shouldn’t be ignored, even if it’s seen as simply a consumer-facing product. Mark Zuckerberg is an active education technology investor; Facebook has “learning communities” called Facebook Education; and the company’s engineers helped to build the personalized learning platform for the charter school chain Summit Schools. The kinds of data extraction and behavioral modification that Zuboff identifies as central to surveillance capitalism are part of Google and Facebook’s education efforts, even if laws like COPPA prevent these firms from monetizing the products directly through advertising.

    Despite these companies’ influence in education, despite Zuboff’s reliance on B. F. Skinner’s behaviorist theories, and despite her insistence that surveillance capitalists are poised to dominate the future of work – not as a division of labor but as a division of learning – Zuboff has nothing much to say about how education technologies specifically might operate as a key lever in this new form of social and political power that she has identified. (The quotation above from the “data pellet” fellow notwithstanding.)

    Of course, I never expect people to write about ed-tech, despite the importance of the field historically to the development of computing and Internet technologies or the theories underpinning them. (B. F. Skinner is certainly a case in point.) Intertwined with the notion that “the future of education is necessarily technological” is the idea that the past and present of education are utterly pre-industrial, and that digital technologies must be used to reshape education (and education technologies) – this rather than recognizing the long, long history of education technologies and the ways in which these have shaped what today’s digital technologies generally have become.

    As Zuboff relates the history of surveillance capitalism, she contends that it constitutes a break from previous forms of capitalism (forms that Zuboff seems to suggest were actually quite benign). I don’t buy it. She claims she can pinpoint this break to a specific moment and a particular set of actors, positing that the origin of this new system was Google’s development of AdSense. She does describe a number of other factors at play in the early 2000s that led to the rise of surveillance capitalism: notably, a post–9/11 climate in which the US government was willing to overlook growing privacy concerns about digital technologies and to use them instead to surveil the population in order to predict and prevent terrorism. And there are other threads she traces as well: neoliberalism and the pressures to privatize public institutions and deregulate private ones; individualization and the demands (socially and economically) of consumerism; and behaviorism and Skinner’s theories of operant conditioning and social engineering. While Zuboff does talk at length about how we got here, the “here” of surveillance capitalism, she argues, is a radically new place with new markets and new socioeconomic arrangements:

    the competitive dynamics of these new markets drive surveillance capitalists to acquire ever-more-predictive sources of behavioral surplus: our voices, personalities, and emotions. Eventually, surveillance capitalists discovered that the most-predictive behavioral data come from intervening in the state of play in order to nudge, coax, tune, and herd behavior toward profitable outcomes. Competitive pressures produced this shift, in which automated machine processes not only know our behavior but also shape our behavior at scale. With this reorientation from knowledge to power, it is no longer enough to automate information flows about us; the goal now is to automate us. In this phase of surveillance capitalism’s evolution, the means of production are subordinated to an increasingly complex and comprehensive ‘means of behavioral modification.’ In this way, surveillance capitalism births a new species of power that I call instrumentarianism. Instrumentarian power knows and shapes human behavior toward others’ ends. Instead of armaments and armies, it works its will through the automated medium of an increasingly ubiquitous computational architecture of ‘smart’ networked devices, things, and spaces.

    As this passage indicates, Zuboff believes (but never states outright) that a Marxist analysis of capitalism is no longer sufficient. And this is incredibly important as it means, for example, that her framework does not address how labor has changed under surveillance capitalism. Because even with the centrality of data extraction and analysis to this new system, there is still work. There are still workers. There is still class and plenty of room for an analysis of class, digital work, and high tech consumerism. Labor – digital or otherwise – remains in conflict with capital. The Age of Surveillance Capitalism, as Evgeny Morozov’s lengthy review in The Baffler puts it, might succeed as “a warning against ‘surveillance dataism,’” but largely fails as a theory of capitalism.

    Yet the book, while ignoring education technology, might be at its most useful in helping further a criticism of education technology in just those terms: as surveillance technologies, relying on data extraction and behavior modification. (That’s not to say that education technology criticism shouldn’t develop a much more rigorous analysis of labor. Good grief.)

    As Zuboff points out, B. F. Skinner “imagined a pervasive ‘technology of behavior’” that would transform all of society but that, at the very least, he hoped would transform education. Today’s corporations might be better equipped to deliver technologies of behavior at scale, but this was already a big business in the 1950s and 1960s. Skinner’s ideas did not only exist in the fantasy of Walden Two. Nor did they operate solely in the psych lab. Behavioral engineering was central to the development of teaching machines; and despite the story that somehow, after Chomsky denounced Skinner in the pages of The New York Review of Books, no one “did behaviorism” any longer, it remained integral to much of educational computing on into the 1970s and 1980s.

    And on and on and on – a more solid through line than the all-of-a-suddenness that Zuboff narrates for the birth of surveillance capitalism. Personalized learning – the kind hyped these days by Mark Zuckerberg and many others in Silicon Valley – is just the latest version of Skinner’s behavioral technology. Personalized learning relies on data extraction and analysis; it urges and rewards students and promises everyone will reach “mastery.” It gives the illusion of freedom and autonomy perhaps – at least in its name; but personalized learning is fundamentally about conditioning and control.
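
    The behaviorist skeleton underneath this kind of “personalized learning” can be sketched in a few lines. What follows is a hypothetical toy, not a description of Summit’s platform, ClassDojo, or any other actual product: every response is logged, correct answers are immediately rewarded with points, item difficulty is nudged up or down to keep the learner engaged, and “mastery” is simply a threshold over recent logged behavior. The names and parameters (MASTERY_THRESHOLD, WINDOW, the point values) are invented for the illustration.

        # A toy sketch of the conditioning loop behind "personalized learning":
        # log every response, reward correct answers, adapt what the learner sees
        # next until a "mastery" threshold is hit. Hypothetical, not any real product.
        import random

        MASTERY_THRESHOLD = 0.8   # assumed: share of recent answers that must be correct
        WINDOW = 5                # assumed: how many recent responses count toward "mastery"

        def run_adaptive_drill(student_skill, max_items=50):
            """Simulate one learner working through an adaptive drill.
            Returns the full response log plus the rewards handed out."""
            difficulty = 0.5   # start in the middle of an assumed 0..1 difficulty scale
            history = []       # every response is recorded: the data-extraction step
            points = 0         # the reward, Skinner's "pellet"
            for item_id in range(max_items):
                # Simulated learner: more likely to succeed when difficulty is below skill.
                p_correct = max(0.05, min(0.95, student_skill - difficulty + 0.5))
                correct = random.random() < p_correct
                history.append({"item": item_id, "difficulty": round(difficulty, 2), "correct": correct})
                if correct:
                    points += 10                                 # immediate reinforcement
                    difficulty = min(1.0, difficulty + 0.05)     # push the learner onward
                else:
                    difficulty = max(0.0, difficulty - 0.10)     # ease off to keep engagement up
                recent = [entry["correct"] for entry in history[-WINDOW:]]
                if len(recent) == WINDOW and sum(recent) / WINDOW >= MASTERY_THRESHOLD:
                    break   # "mastery" declared; the learner is released to the next unit
            return {"items_seen": len(history), "points": points, "log": history}

        if __name__ == "__main__":
            random.seed(1)
            result = run_adaptive_drill(student_skill=0.7)
            print("items until 'mastery':", result["items_seen"],
                  "| points awarded:", result["points"])

    Even at this level of crudeness the logic is apparent: what the learner experiences as a personalized path is a schedule of prompts and rewards computed from their own logged behavior.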

    “I suggest that we now face the moment in history,” Zuboff writes, “when the elemental right to the future tense is endangered by a panvasive digital architecture of behavior modification owned and operated by surveillance capital, necessitated by its economic imperatives, and driven by its laws of motion, all for the sake of its guaranteed outcomes.” I’m not so sure that surveillance capitalists are assured of guaranteed outcomes. The manipulation of platforms like Google and Facebook by white supremacists demonstrates that it’s not just the tech companies who are wielding this architecture to their own ends.

    Nevertheless, those who work in and work with education technology need to confront and resist this architecture – the “surveillance dataism,” to borrow Morozov’s phrase – even if (especially if) the outcomes promised are purportedly “for the good of the student.”

    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines, forthcoming from The MIT Press. She maintains the widely read Hack Education blog, on which an earlier version of this piece first appeared, and writes frequently for The b2o Review Digital Studies section on digital technology and education.


  • Zachary Loeb — Hashtags Lean to the Right (Review of Schradie, The Revolution that Wasn’t: How Digital Activism Favors Conservatives)

    Zachary Loeb — Hashtags Lean to the Right (Review of Schradie, The Revolution that Wasn’t: How Digital Activism Favors Conservatives)

    a review of Jen Schradie, The Revolution that Wasn’t: How Digital Activism Favors Conservatives (Harvard University Press, 2019)

    by Zachary Loeb

    ~

    Despite the oft-repeated, and rather questionable, trope that social media is biased against conservatives, and despite the attention that has been lavished on tech-savvy left-aligned movements (such as Occupy!) in recent years, social media is not necessarily of greater use to the left. It may be quite the opposite. This is a topic that documentary filmmaker, activist, and sociologist Jen Schradie explores in depth in her excellent and important book The Revolution That Wasn’t: How Digital Activism Favors Conservatives. Engaging with the political objectives of activists on the left and the right, Schradie’s book considers the political values that are reified in the technical systems themselves and the ways in which those values more closely align with the aims of conservative groups. Furthermore, Schradie emphasizes the socio-economic factors that allow particular groups to successfully harness high-tech tools, thereby demonstrating how digital activism reinforces the power of those who already enjoy a fair amount of power. Rather than suggesting that high-tech tools have somehow been stolen from the left by the right, The Revolution That Wasn’t argues that these were not the left’s tools in the first place.

    The background against which Schradie’s analysis unfolds is the state of North Carolina in the years after 2011. Generally seen as a “red state,” North Carolina had flipped blue for Barack Obama in 2008, leading to the state being increasingly seen as a battleground. Even though the state was starting to take on a purplish color, North Carolina was still home to a deeply entrenched conservatism that was reflected (and still is reflected) in many aspects of the state’s laws, and in the legacy of racist segregation that is still felt in the state. Though the Occupy! movement lingers in the background of Schradie’s account, her focus is on struggles in North Carolina around unionization, the rapid growth of the Tea Party, and the emergence of the “Moral Monday” movement, which inspired protests across the state (starting in 2013). While many considerations of digital activism have focused on hip young activists festooned with piercings, hacker skills, and copies of The Coming Insurrection, the central characters of Schradie’s book are members of the labor movement, campus activists, Tea Party members, Preppers, and people associated with “Patriot” groups, as well as a smattering of paid organizers working for large organizations. And though Schradie is closely attuned to the impact that financial resources have within activist movements, she pushes back against the “astroturf” accusation that is sometimes aimed at right-wing activists, arguing that the groups she observed on both the right and the left reflected genuine populist movements.

    There is a great deal of specificity to Schradie’s study, and many of the things that Schradie observes are particular to the context of North Carolina, but the broader lessons regarding political ideology and activism are widely applicable. In looking at the political landscape in North Carolina, Schradie carefully observes the various groups that were active around the unionization issue, and pays close attention to the ways in which digital tools were used in these groups’ activism. The levels of digital savviness vary across the political groups, and most of the groups demonstrate at least some engagement with digital tools; however, some groups embraced the affordances of digital tools to a much greater extent than others. And where Schradie’s book makes its essential intervention is not simply in showing these differing levels of digital use, but in explaining why. For one of the core observations of Schradie’s account of North Carolina is that it was not the left-leaning groups but the right-leaning groups that were able to make the most of digital tools. It’s a point which, to a large degree, runs counter to general narratives on the left (and possibly also the right) about digital activism.

    In considering digital activism in North Carolina, Schradie highlights the “uneven digital terrain that largely abandoned left working-class groups while placing right-wing reformist groups at the forefront of digital activism” (Schradie, 7). In mapping out this terrain, Schradie emphasizes three factors that were pivotal in tilting this ground, namely class, organization, and ideology. Taken independently of one another, each of these three factors provides valuable insight into the challenges posed by digital activism, but taken together they allow for a clear assessment of the ways that digital activism (and digital tools themselves) favor conservatives. It is an analysis that requires some careful wading into definitions (the different ways that right and left groups define things like “freedom” really matters), but these three factors make it clear that “rather than offering a quick technological fix to repair our broken democracy, the advent of digital activism has simply ended up reproducing, and in some cases, intensifying, preexisting power imbalances” (Schradie, 7).

    Considering that the core campaign revolves around unionization, it should not particularly be a surprise that class is a major issue in Schradie’s analysis. Digital evangelists have frequently suggested that high-tech tools allow for the swift breaking down of class barriers by providing powerful tools (and informational access) to more and more people—but the North Carolinian case demonstrates the ways in which class endures. Much of this has to do with the persistence of the digital divide, something which can easily be overlooked by onlookers (and academics) who have grown accustomed to digital tools. Schradie points to the presence of “four constraints” that have a pivotal impact on the class aspect of digital activism: “Access, Skills, Empowerment, and Time” (or ASETs for short; Schradie, 61). “Access” points to the most widely understood part of the digital divide, the way in which some people simply do not have a reliable and routine way of getting ahold of and/or using digital tools—it’s hard to build a strong movement online, when many of your members have trouble getting online. This in turn reverberates with “Skills,” as those who have less access to digital tools often lack the know-how that develops from using those tools—not everyone knows how to craft a Facebook post, or how best to make use of hashtags on Twitter. While digital tools have often been praised precisely for the ways in which they empower users, this empowerment is often not felt by those lacking access and skills, leading many individuals from working-class groups to see “digital activism as something ‘other people’ do” (Schradie, 64). And though it may be the easiest factor to overlook, engaging in digital activism requires Time, something which is harder to come by for individuals working multiple jobs (especially of the sort with bosses that do not want to see any workers using phones at work).

    When placed against the class backgrounds of the various activist groups considered in the book, the ASETs framework clearly sets up a situation in which conservative activists had the advantage. What Schradie found was “not just a question of the old catching up with the young, but of the poor never being able to catch up with the rich” (Schradie, 79), as the more financially secure conservative activists simply had more ASETs than the working-class activists on the left. And though the right-wing activists skewed older than the left-wing activists, they proved quite capable of learning to use new high-tech tools. Furthermore, an extremely important aspect here is that the working-class activists (given their economic precariousness) had more to lose from engaging in digital activism—the conservative retiree will be much less worried about losing their job than the garbage truck driver interested in unionizing.

    Though the ASETs echo throughout the entirety of Schradie’s account, “Time” plays an essential connective role in the shift from matters of class to matters of organization. Contrary to the way in which the Internet has often been praised for invigorating horizontal movements (such as Occupy!), the activist groups in North Carolina attest to the ways in which old bureaucratic and infrastructural tools are still essential. Or, to put it another way, if the various ASETs are viewed as resources, then having a sufficient quantity of all four is key to maintaining an organization. This meant that groups with hierarchical structures, clear divisions of labor, and more staff (be these committed volunteers or paid workers) were better equipped to exploit the affordances of digital tools.

    Importantly, this was not entirely one-sided. Tea Party groups were able to tap into funding and training from larger networks of right-wing organizations, but national unions and civil rights organizations were also able to support left-wing groups. In terms of organization, the overwhelming bias is less pronounced in terms of a right/left dichotomy and more a reflection of a clash between reformist/radical groups. When it came to organization, the bias was towards “reformist” groups (right and left) that replicated present power structures and worked within the already existing social systems; the groups that lose out here tend to be the ones that more fully eschew hierarchy (an example of this being student activists). Though digital democracy can still be “participatory, pluralist, and personalized,” Schradie’s analysis demonstrates how “the internet over the long-term favored centralized activism over connective action; hierarchy over horizontalism; bureaucratic positions over networked persons” (Schradie, 134). Thus, the importance of organization demonstrates not how digital tools allowed for a new “participatory democracy” but rather how standard hierarchical techniques continue to be key for groups wanting to participate in democracy.

    Beyond class and organization (insofar as it is truly possible to get past either), the ideology of activists on the left and activists on the right has a profound influence on how these groups use digital tools. For it isn’t the case that the left and the right try to use the Internet for the exact same purpose. Schradie captures this as a difference between pursuing fairness (the left) and freedom (the right)—this largely consisted of left-wing groups seeking a “fairer” allocation of societal power, while those on the right defined “freedom” largely in terms of protecting the allocation of power already enjoyed by these conservative activists. Believing that they had been shut out by the “liberal media,” many conservatives flocked to and celebrated digital tools as a way of getting out “the Truth”; their “digital practices were unequivocally focused on information” (Schradie, 167). As a way of disseminating information to other people already in possession of ASETs, digital means provided right-wing activists with powerful tools for getting around traditional media gatekeepers. While activists on the left certainly used digital tools for spreading information, their use of the internet tended to be focused more heavily on organizing: on bringing people together in order to advocate for change. Further complicating things for the left is that Schradie found there to be less unity amongst leftist groups in contrast to the relative hegemony found on the right. Comparing the intersection of ideological agendas with digital tools, Schradie is forthright in stating, “the internet was simply more useful to conservatives who could broadcast propaganda and less effective for progressives who wanted to organize people” (Schradie, 223).

    Much of the way that digital activism has been discussed by the press, and by academics, has advanced a narrative that frames digital activism as enhancing participatory democracy. In these standard tales (which often ground themselves in accounts of the origins of the internet that place heavy emphasis on the counterculture), the heroes of digital activism are usually young leftists. Yet, as Schradie argues, “to fully explain digital activism in this era, we need to take off our digital-tinted glasses” (Schradie, 259). Removing such glasses reveals the way in which they have too often focused attention on the spectacular efforts of some movements, while overlooking the steady work of others—thus driving more attention to groups like Occupy! than to the buildup of right-wing groups. And looking at the state of digital activism through clearer eyes reveals many aspects of digital life that are obvious, yet which are continually forgotten, such as the fact that “the internet is a tool that favors people with more money and power, often leaving those without resources in the dust” (Schradie, 269). The example of North Carolina shows that groups on the left and the right are all making use of the Internet, but it is not just a matter of some groups having more ASETs; it is also the fact that the high-tech tools of digital activism favor certain types of values and aims better than others. And, as Schradie argues throughout her book, those tend to be the causes and aims of conservative activists.

    Despite the revolutionary veneer with which the Internet has frequently been painted, “the reality is that throughout history, communications tools that seemed to offer new voices are eventually owned or controlled by those with more resources. They eventually are used to consolidate power, rather than to smash it into pieces and redistribute it” (Schradie, 25). The question with which activists, particularly those on the left, need to wrestle is not just whether or not the Internet is living up to its emancipatory potential—but whether or not it ever really had that potential in the first place.

    * * *

    In an iconic photograph from 1948, a jubilant Harry S. Truman holds aloft a copy of The Chicago Daily Tribune emblazoned with the headline “Dewey Defeats Truman.” Despite the polls having predicted that Dewey would be victorious, when the votes were counted Truman had been sent back to the White House and the Democrats took control of the House and the Senate. An echo of this moment occurred some sixty-eight years later, though there was no comparable photo of Donald Trump smirking while holding up a newspaper carrying the headline “Clinton Defeats Trump.” In the aftermath of Trump’s victory pundits ate crow in a daze, pollsters sought to defend their own credibility by emphasizing that their models had never actually said that there was no chance of a Trump victory, and even some in Trump’s circle seemed stunned by his victory.

    As shock turned to resignation, the search for explanations and scapegoats began in earnest. Democrats blamed Russian hackers, voter suppression, the media’s obsession with Trump, left-wing voters who didn’t fall in line, and James Comey; while Republicans claimed that the shock was simply proof that the media was out of touch with the voters. Yet, Republicans and Democrats seemed to at least agree on one thing: to understand Trump’s victory, it was necessary to think about social media. Granted, Republicans and Democrats were divided on whether this was a matter of giving credit or assigning blame. On the one hand, Trump had been able to effectively use Twitter to directly engage with his fan base; on the other hand, platforms like Facebook had been flooded with disinformation that spread rapidly through the online ecosystem. It did not take long for representatives, including executives, from the various social media companies to find themselves called before Congress, where these figures were alternately grilled about supposed bias against conservatives on their platforms, and taken to task for how their platforms had been so easily manipulated into helping Trump win the election.

    If the tech companies had only found themselves summoned before Congress, it would have been bad enough, but they were also facing frustrated employees, as well as disgruntled users, and the word “techlash” was being used to describe the wave of mounting frustration with these companies. Certainly, unease with the power and influence of the tech titans had been growing for years. Cambridge Analytica was hardly the first tech scandal. Yet much of that earlier displeasure was tempered by an overwhelmingly optimistic attitude towards the tech giants, as though the industry’s problematic excesses were indicative of growing pains as opposed to being signs of intrinsic anti-democratic (small d) biases. There were many critics of the tech industry before the arrival of the “techlash,” but they were liable to find themselves denounced as Luddites if they failed to show sufficient fealty to the tech companies. From company CEOs to an adoring tech press to numerous technophilic academics, in the years prior to the 2016 election smartphones and social media were hailed for their liberating and democratizing potential. Videos shot on smartphone cameras and uploaded to YouTube, political gatherings organized on Facebook, activist campaigns turning into mass movements thanks to hashtags—all had been treated as proof positive that high-tech tools were breaking apart the old hierarchies and ushering in a new era of high-tech horizontal politics.

    Alas, the 2016 election was the rock against which many of these high-tech hopes crashed.

    And though there are many strands contributing to the “techlash,” it is hard to make sense of this reaction without seeing it in relation to Trump’s victory. Users of Facebook and Twitter had been frustrated with those platforms before, but at the core of the “techlash” has been a certain sense of betrayal. How could Facebook have done this? Why was Twitter allowing Trump to break its own terms of service on a daily basis? Why was Microsoft partnering with ICE? How come YouTube’s recommendation algorithms always seemed to suggest far-right content?

    To state it plainly: it wasn’t supposed to be this way.

    But what if it was? And what if it had always been?

    In a 1985 interview with MIT’s newspaper The Tech, the computer scientist and social critic Joseph Weizenbaum had some blunt words about the ways in which computers had impacted society, telling his interviewer: “I think the computer has from the beginning been a fundamentally conservative force. It has made possible the saving of institutions pretty much as they were, which otherwise might have had to be changed” (ben-Aaron, 1985). This was not a new position for Weizenbaum; he had largely articulated the same idea in his 1976 book Computer Power and Human Reason, wherein he had pushed back at those he termed the “artificial intelligentsia” and the other digital evangelists of his day. Articulating his thoughts to the interviewer from The Tech, Weizenbaum raised further concerns about the close links between the military and computer work at MIT, and cast doubt on the real usefulness of computers for society—couching his dire fears in the social critic’s common defense: “I hope I’m wrong” (ben-Aaron, 1985). Alas, as the decades passed, Weizenbaum came to feel that he had been right. When he turned his critical gaze to the internet in a 2006 interview, he decried the “flood of disinformation,” while noting “it just isn’t true that everyone has access to the so-called Information age” (Weizenbaum and Wendt 2015, 44-45).

    Weizenbaum was hardly the only critic to have looked askance at the growing importance that was placed on computers during the 20th century. Indeed, Weizenbaum’s work was heavily influenced by that of his friend and fellow social critic Lewis Mumford, who had gone so far as to identify the computer as the prototypical example of “authoritarian” technology (even suggesting that it was the rebirth of the “sun god” in technical form). Yet, societies that are in love with their high-tech gadgets, and which often consider technological progress and societal progress to be synonymous, generally have rather little time for such critics. When times are good, such social critics are safely quarantined to the fringes of academic discourse (and completely ignored within broader society), but when things get rocky they have their woebegone revenge by being proven right.

    All of which is to say that thinkers like Weizenbaum and Mumford would almost certainly agree with The Revolution That Wasn’t. However, they would probably not be surprised by it. After all, The Revolution That Wasn’t is a confirmation that we are today living in the world about which previous generations of critics warned. Indeed, if there is one criticism to be made of Schradie’s work, it is that the book could have benefited from more deeply grounding its analysis in the longstanding critiques of technology that have been made by the likes of Weizenbaum, Mumford, and quite a few other scholars and critics. Jo Freeman and Langdon Winner are both mentioned, but it’s important to emphasize that many social critics warned about the conservative biases of computers long before Trump got a Twitter account, and long before Mark Zuckerberg was born. Our widespread refusal to heed these warnings, and the tendency to mock those issuing these warnings as Luddites, technophobes, and prophets of doom, is arguably a fundamental cause of the present state of affairs which Schradie so aptly describes.

    With The Revolution That Wasn’t, Jen Schradie has made a vital intervention in current discussions (inside the academy and amongst activists) regarding the politics of social media. Eschewing a polemical tone that would either sing the praises of social media or condemn it outright, Schradie provides a measured assessment that addresses the way in which social media is actually being used by activists of varying political stripes—with a careful emphasis on the successes these groups have enjoyed. There is a certain extent to which Schradie’s argument, and some of her conclusions, represent a jarring contrast to much of the literature that has framed social media as being a particular boon to left-wing activists. Yet, Schradie’s book highlights with disarming detail the ways in which a desire (on the part of left-leaning individuals) to believe that the Internet favors people on the left has been a sort of ideological blinder that has prevented them from fully coming to terms with how the Internet has re-entrenched the dominant powers in society.

    What Schradie’s book reveals is that “the internet did not wipe out barriers to activism; it just reflected them, and even at times exacerbated existing power differences” (Schradie, 245). Schradie allows the activists on both sides to speak in their own words, taking seriously their claims about what they were doing. And while the book is closely anchored in the context of a particular struggle in North Carolina, the analytical tools that Schradie develops (such as the ASET framework, and the tripartite emphasis on class/organization/ideology) allow Schradie’s conclusions to be mapped onto other social movements and struggles.

    While the research that went into The Revolution That Wasn’t clearly predates the election of Donald Trump, and though he is not a main character in the book, the 45th president lurks in its background (or perhaps just in the reader’s mind). Had Trump lost the election, every part of Schradie’s analysis would be just as accurate and biting; however, those seeking to defend social media tools as inherently liberating would probably not be finding themselves on the defensive today (a position that most of them were never expecting themselves to be in). Yet, what makes Schradie’s account so important is that the book is not simply concerned with whether or not particular movements used digital tools; rather, Schradie is able to step back to consider the degree to which the use of social media tools has been effective in fulfilling the political aims of the various groups. Yes, Occupy! might have made canny use of hashtags (and, if one wants to be generous, one can say that it helped inject the discussion of inequality back into American politics), but nearly ten years later the wealth gap is continuing to grow. For all of the hopeful luster that has often surrounded digital tools, Schradie’s book shows the way in which these tools have just placed a fresh coat of paint on the same old status quo—even if this coat of paint is shiny and silvery.

    As the technophiles scramble to rescue the belief that the Internet is inherently democratizing, The Revolution That Wasn’t takes its place amongst a growing body of critical works that are willing to challenge the utopian aura that has been built up around the Internet. While it must be emphasized, as the earlier allusion to Weizenbaum shows, that there have been thinkers criticizing computers and the Internet for as long as there have been computers and the Internet, of late there has been an important expansion of such critical works. There is not the space here to offer an exhaustive account of all of the critical scholarship being conducted, but it is worthwhile to mention some exemplary recent works. Safiya Umoja Noble’s Algorithms of Oppression provides an essential examination of the ways in which societal biases, particularly about race and gender, are reinforced by search engines. Ruha Benjamin’s recent work on the “New Jim Code,” as seen in Race After Technology and the Captivating Technology volume she edited, foregrounds the ways in which technological systems reinforce white supremacy. The work of Virginia Eubanks, both Digital Dead End (whose concerns make it likely the most important precursor to Schradie’s book) and her more recent Automating Inequality, discusses the ways in which high-tech systems are used to police and control the impoverished. Examinations of e-waste (such as Jennifer Gabrys’s Digital Rubbish) and infrastructure (such as Nicole Starosielski’s The Undersea Network, and Tung-Hui Hu’s A Prehistory of the Cloud) point to the ways in which colonial legacies are still very much alive in today’s high-tech systems, while the internationalist sheen that is often ascribed to digital media is carefully deconstructed in works like Ramesh Srinivasan’s Whose Global Village? Works like Meredith Broussard’s Artificial Unintelligence and Shoshana Zuboff’s Age of Surveillance Capitalism raise deep questions about the overall politics of digital technology. And, with its deep analysis of the way that race and class are intertwined with digital access and digital activism, The Revolution That Wasn’t deserves a place amongst such works.

    What much of this recent scholarship has emphasized is that technology is never neutral. And while this may be accepted wisdom amongst scholars in the relevant fields, these works (and scholars) have taken great care to make the point to the broader public. It is not just that tools can be used for good or for bad—but that tools have particular biases built into them. Pretending those biases aren’t there doesn’t make them go away. Kranzberg’s first law asserts that technology is neither good nor bad, nor is it neutral—but when one moves away from talking about technology to particular technologies, it is quite important to be able to say that certain technologies may actually be bad. This is a particular problem when one wants to consider things like activism. There has always been something asinine to the tactic of mocking activists who push for social change while using devices created by massive multinational corporations (as the well-known comic by Matt Bors notes); however, the reason this mockery is so often repeated is that it has a kernel of troubling truth to it. After all, there is something a little discomforting about using a device running on minerals mined in horrendous conditions, assembled in a sweatshop, and destined one day to become poisonous e-waste—to organize a union drive.

    Matt Bors, detail from “Mister Gotcha” (2016)

    Or, to put it slightly differently, when we think about the democratizing potential of technology, to what extent are we privileging those who get to use (and discard) these devices over those whose labor goes into producing them? That activists may believe that they are using a given device or platform for “good” purposes does not mean that the device itself is actually good. And this is a tension Schradie gets at when she observes that “instead of a revolutionary participatory tool, the internet just happened to be the dominant communication tool at the time of my research and simply became normalized into the groups’ organizing repertoire” (Schradie, 133). Of course, activists (of varying political stripes) are making use of the communication tools that are available to them and widely used in society. But just because activists use a particular communication tool doesn’t mean that they should fall in love with it.

    This is not in any way to call activists using these tools hypocritical, but it is a further reminder of the ways in which high-tech tools inscribe their users within the very systems they may be seeking to change. And this is certainly a problem that Schradie’s book raises, as she notes that one of the reasons conservative values get a bump from digital tools is that these conservatives are generally already the happy beneficiaries of the systems that created these tools. Scholarship on digital activism has considered the ideologies of various technologically engaged groups before, and there have been many strong works produced on hackers and open source activists, but often the emphasis has been placed on the ideologies of the activists without enough consideration being given to the ways in which the technical tools themselves embody certain political values (an excellent example of a work that truly considers activists picking their tools based on the values of those tools is Christina Dunbar-Hester’s Low Power to the People). Schradie’s focus on ideology is particularly useful here, as it helps to draw attention to the way in which various groups’ ideologies map onto or come into conflict with the ideologies that these technical systems already embody. What makes Schradie’s book so important is not just its account of how activists use technologies, but its recognition that these technologies are also inherently political.

    Yet the thorny question that undergirds much of the present discourse around computers and digital tools remains “what do we do if, instead of democratizing society, these tools are doing just the opposite?” And this question just becomes tougher the further down you go: if the problem is just Facebook, you can pose solutions such as regulation and breaking it up; however, if the problem is that digital society rests on a foundation of violent extraction, insatiable lust for energy, and rampant surveillance, solutions are less easily available. People have become so accustomed to thinking that these technologies are fundamentally democratic that they are loath to believe analyses, such as Mumford’s, that they are instead authoritarian by nature.

    While reports of a “techlash” may be overstated, it is clear that at the present moment it is permissible to be a bit more critical of particular technologies and the tech giants. However, there is still a fair amount of hesitance about going so far as to suggest that maybe there’s just something inherently problematic about computers and the Internet. After decades of being told that the Internet is emancipatory, many people remain committed to this belief, even in the face of mounting evidence to the contrary. Trump’s election may have placed some significant cracks in the dominant faith in these digital devices, but suggesting that the problem goes deeper than Facebook or Amazon is still treated as heretical. Nevertheless, it is a matter that is becoming harder and harder to avoid. For it is increasingly clear that it is not a matter of whether or not these devices can be used for this or that political cause, but of the overarching politics of these devices themselves. It is not just that digital activism favors conservatism, but as Weizenbaum observed decades ago, that “the computer has from the beginning been a fundamentally conservative force.”

    With The Revolution That Wasn’t, Jen Schradie has written an essential contribution to current conversations around not only the use of technology for political purposes, but also about the politics of technology. As an account of left-wing and right-wing activists, Schradie’s book is a worthwhile consideration of the ways that various activists use these tools. Yet where this altogether excellent work really stands out is in the ways in which it highlights the politics that are embedded in and reified by high-tech tools. Schradie is certainly not suggesting that activists abandon their devices—in so far as these are the dominant communication tools at present, activists have little choice but to use them—but this book puts forth a nuanced argument about the need for activists to really think critically about whether they’re using digital tools, or whether the digital tools are using them.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focusses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay

    _____

    Works Cited

    • ben-Aaron, Diana. 1985. “Weizenbaum Examines Computers and Society.” The Tech (Apr 9).
    • Weizenbaum, Joseph, and Gunna Wendt. 2015. Islands in the Cyberstream: Seeking Havens of Reason in a Programmed Society. Duluth, MN: Litwin Books.
  • “Dennis Erasmus” — Containment Breach: 4chan’s /pol/ and the Failed Logic of “Safe Spaces” for Far-Right Ideology

    “Dennis Erasmus” — Containment Breach: 4chan’s /pol/ and the Failed Logic of “Safe Spaces” for Far-Right Ideology

    “Dennis Erasmus”

    This essay has been peer-reviewed by “The New Extremism” special issue editors (Adrienne Massanari and David Golumbia), and the b2o: An Online Journal editorial board.

    Author’s Note: This article was written prior to the deadly far-right riot in Charlottesville, Virginia, on August 11-12, 2017. Footnotes have been added with updated information where possible or necessary, but the article has otherwise been left largely unchanged.

    Introduction

    This piece is a discussion of one place on the internet where the far right meets, formulates their propaganda and campaigns, and ultimately reproduces and refines its ideology.

    4chan’s Politically Incorrect image board (like other 4chan boards, regularly referred to by the last portion of its URL, “/pol/”) is one of the most popular boards on the highly active and gently moderated website, as well as a major online hub for far-right politics, memes, and coordinated harassment campaigns. Unlike most of the hobby-oriented boards on 4chan, /pol/ came into its current form through a series of board deletions and restorations with the intent of improving the discourse of the hobby boards by restricting unrelated political discussion, often of a bigoted nature, to a single location on the website. /pol/ is thus often referred to as a “containment board” with the understanding that far-right content is meant to be kept in that single forum.

    The /new/ – News board was deleted on January 17, 2011, and /pol/ – Politically Incorrect was added to the website on November 10, 2011. 4chan’s original owner (and current Google employee) Christopher Poole (alias “moot”) deleted /new/ because a disproportionate share of its discussion was racist. In Poole’s words:

    As for /new/, anybody who used it knows exactly why it was removed. When I re-added the board last year, I made a note that if it devolved into /stormfront/, I’d remove it. It did — ages ago. Now it’s gone, as promised.[1]

    “/stormfront/” is a reference to Stormfront.org, one of the oldest and largest white supremacist forums on the internet. Stormfront was founded by a former KKK leader and is listed as an extremist group by the Southern Poverty Law Center (Southern Poverty Law Center 2017c).

    Despite Poole’s onetime commitment to maintaining a news board that was not dominated by far-right content, /pol/ nevertheless followed suit and gained a reputation as a haven for white supremacist politics (Dewey 2014).

    While there was the intention to keep political discussion contained in /pol/, far-right politics is a frequent theme on the other major discussion boards on the website and has come to be strongly associated with 4chan in general.

    The Logic of Containment

    The nature of 4chan means that for every new thread created, an old thread “falls off” the website and is deleted or archived. Because of the site’s high worldwide popularity and the fast pace of discussion, it has sometimes been viewed as necessary to split boards into specific topics so that the rate of thread creation does not prematurely end productive, on-topic, ongoing conversations.

    The most significant example of a topic requiring “containment” is perhaps My Little Pony. The premiere of the 2010 animated series My Little Pony: Friendship is Magic led to a surge of interest in the franchise and a major fan following composed largely of young adult males (covered extensively in the media as “bronies”), 4chan’s key demographic (Whatisabrony.com 2017).

    Posters who wished to discuss other cartoons on the /co/ – Comics and Cartoons board were often left feeling crowded out by the intense and rapid pace of the large and excited fanbase that was only interested in discussing ponies. After months of complaints, a new board, /mlp/ – My Little Pony, was opened to accommodate both fans and detractors by giving the franchise a dedicated platform for discussion. For the most part, fans have been happy to stay and discuss the series among one another. There is also a site-wide rule that pony-related discussion must be confined to /mlp/, and while enforcement of the rules of 4chan is notoriously lax, this rule has mostly been enforced (4chan 2017).

    A similar approach has been taken for several other popular hobbies; for instance, the creation of /vp/ – Pokémon for all media—be it video games, comics, or television—related to the very popular Japanese franchise.

    A common opinion on 4chan is that /pol/ serves as a “containment board” for the neo-Nazi, racist, and other far-right interests of many who use the website (Anonymous /q/ poster 2012). Someone who posts a blatantly political message on the /tv/ – Television and Film board, for instance, may be told “go back to your containment board.” One could argue, as well, that the popular and rarely moderated /b/ – Random board was originally a “containment board” for all of the off-topic discussion that would otherwise have derailed the specific niche or hobby boards.

    Moderators as Humans

    Jay Irwin, a moderator of 4chan and an advertising technology professional, wrote an article for The Observer,[2] published April 25, 2017, arguing that an unwelcome “liberal agenda” in entertainment was inspiring greater conservatism on 4chan’s traditionally apolitical boards. Generalizations about the nature of 4chan’s userbase can be difficult, but Irwin’s status as a moderator means he has the ability to remove certain discussion threads while allowing others to flourish, shaping the discourse and apparent consensus of the website’s users.

    Irwin’s writing in The Observer shows a clear personal distaste for what he perceives as a liberal political agenda: in this specific case, Bill Nye’s assertion, backed up by today’s scientific consensus regarding human biology, that gender is a spectrum and not a binary:

    The show shuns any scientific approach to these topics, despite selling itself—and Bill Nye—as rigorously reason-based. Rather than providing evidence for the multitude of claims made on the show by Nye and his guests, the series relies on the kind of appeals to emotion one would expect in a gender studies class…The response on /tv/ was swift. The most historically apolitical 4channers are almost unanimously and vehemently opposed to the liberal agenda and lack of science on display in what is billed as a science talk show. Scores of 4chan users who have always avoided and discouraged political conversations have expressed horror at what they see as a significant uptick in the entertainment industry’s attempts to indoctrinate viewers with leftist ideology. (Irwin 2017)

    As Irwin believes the users of /tv/ are becoming less tolerant of liberal media, he expects them to also become warmer to far-right ideas and discussions that they once would have dismissed as off-topic and out of place on a television and film discussion board. Whether or not this is true of the /tv/ userbase, his obvious bias in favor of these ideas is able to inform the moderation that is applied when determining just how “off-topic” an anti-liberal thread might be.

    On the other end of the spectrum, a 4chan moderator was previously removed from the moderation team after issuing a warning to a user on explicitly political grounds. In the aftermath of the December 2, 2016 fatal fire at the Ghost Ship warehouse, an artists’ space and venue in Oakland, California that killed thirty-six people, users of /pol/ attempted to organize a campaign to shut down DIY (“Do-it-yourself”) spaces across the United States by reporting noncompliance with fire codes to local authorities, in order to “crush the radical left” (KnowYourMeme 2017). As another moderator confirmed in a thread on /qa/, the board designed for discussions about 4chan, the fired moderator had clearly stated their belief that the campaign to shut down DIY spaces was an attack on marginalized communities by neo-Nazis (Anonymous##Mod 2016).

    The anti-DIY campaign is a clear example of the kind of “brigading”—use of /pol/ as an organizational and propaganda hub for right-wing political activities on other sites or in real life—that regularly occurs on the mostly-anonymous imageboard. The fired moderator’s error was not having a political agenda—as Irwin’s writing in The Observer demonstrates, he has an agenda of his own—but expressing it directly. They could have done as Irwin has the capacity to do, selectively deleting threads not to their liking with no justification required, so as to maintain the facade of neutrality that is so important for the financially struggling site’s brand.

    He Will Not Divide Us

    Another such example of brigading activities would be the harassment surrounding the art project “He Will Not Divide Us” (HWNDU) by Shia LaBeouf, Nastja Säde Rönkkö & Luke Turner. Launched during the inauguration of President Trump on January 20, 2017, the project was to broadcast a 24-hour live stream for four years from outside of the Museum of the Moving Image in New York City. LaBeouf was frequently at the location leading crowds in relatively inoffensive chants: “he will not divide us,” and the like.

    LaBeouf, Rönkkö & Turner, HE WILL NOT DIVIDE US (2017)
    LaBeouf, Rönkkö & Turner, HE WILL NOT DIVIDE US (2017). Image source: Nylon

    Within a day, threads on /pol/ calling for raids against the exhibit were amassing hundreds of replies, with suggestions ranging from leaving booby-trapped racist posters taped on top of razor blades so as to cut people who tried to remove them, to simply sending in “the right wing death squads” (Anonymous /pol/ poster 2017). Notably, and a fact the /pol/ brigaders seized upon, two of the three HWNDU artists, LaBeouf and Turner, are Jewish.

    Raid participants who coordinated on /pol/ and other far-right websites flashed white nationalist paraphernalia, neo-Nazi tattoos, and within five days of opening, directly told LaBeouf “Hitler did nothing wrong” while he was present at the exhibit (Horton 2017). LaBeouf was later arrested and charged with misdemeanor assault against one of the people who went to his art exhibit with the intent of disrupting it, though the charges were later dismissed (France 2017).

    On February 10, less than a month into the intended four-year run of the project, the Museum of the Moving Image released a statement declaring its intent to shut down HWNDU, perhaps at the urging of the NYPD, which had to dedicate resources to monitoring the space after regular clashes:

    The installation created a serious and ongoing public safety hazard for the museum, its visitors, its staff, local residents and businesses. The installation had become a flashpoint for violence and was disrupted from its original intent. While the installation began constructively, it deteriorated markedly after one of the artists was arrested at the site of the installation and ultimately necessitated this action. (Saad 2017)

    High-profile liberal advocates of free speech causes did not draw attention to the implications of a Jewish artist’s exhibit being cancelled due to constant harassment by neo-Nazis and other far-right elements. New York magazine’s Jonathan Chait, one of the most high-profile liberal opponents of “politically correct” suppression of speech, spent his time policing the limits of discourse by criticizing anti-fascist political activists (Chait 2017). The American Civil Liberties Union spent its energy defending former right-wing celebrity and noted pederasty advocate Milo Yiannopoulos against his critics (NPR 2017).

    Containment Failure

    To the extent that 4chan’s leadership sincerely believed itself to be politically neutral, or at least not far-right, it was mistaken to view far-right politics as simply another hobby rather than the basis of an ideology.

    Ideology is not easily compartmentalized. Unlike a hobby, an ideology has the power to follow its adherents into all areas of their lives. Whether that ideology is cultivated in a “safe space” that is digital or physical, it is nonetheless brought with its possessor out into the world.

    Attempting to contain far-right ideology in designated physical and virtual spaces provides its adherents with one of the essential conditions the ideology needs to thrive and to feed society’s reactionary movements.

    By way of comparison, the users of /mlp/ or other successful containment boards do not use their discussion space to organize raids and targeted harassment campaigns because, basically, hobbies do not traditionally have antagonists (with Gamergate being a notable exception). Adherents to far-right ideology, on the other hand, see liberal protesters, Hollywood activists, “cultural Marxists,” “globalist Jews,” white people comfortable with interracial marriages, black and brown people of all persuasions, and anti-fascist street fighters to be in direct opposition to their interests. When gathered with like-minded people, they will discuss the urgency of combating these forces and, if possible, encourage one another to act against these enemies.

    It seems obvious that a board which has been documented organizing campaigns to harass a Jewish artist until his art exhibit was shut down, or to force the closure of spaces its users believe belong to the “far left,” is anything but contained.

    If anything, the DIY venue example shows exactly how the average /pol/ user views designated ideological spaces: leftists will use those venues to organize, they assert, and if we take them away, we can decrease their capacity. If a DIY venue really contained the leftists who gathered there, it would be advantageous to leave such venues alone and let leftists keep talking among themselves. Rather, the far-right /pol/ userbase demonstrates through its actions that it believes leftists use their political spaces in the same way as /pol/ does: as a base for launching attacks against their enemies.

    Countdown: What Comes Next

    The political right in the United States remains divided in tactics, aesthetics, and capacity.

    Footage surfaced from a June 10, 2017 rally in Houston, Texas, of an alt-right activist being choked by an Oath Keeper—a member of a right-wing paramilitary organization—following a disagreement (Kragie and Lewis 2017). The alt-right activist is clearly signaling his affiliation with the internet-fueled right one might find in or inspired by /pol/, displaying posters that represent several recognizable 4chan memes (Pepe, Wojak/“feels guy”, Baneposting), in addition to neo-Nazi imagery (a stylized SS in the words “The Fire Rises,” an American flag modified to contain the Nazi-associated Black Sun or Sonnenrad). Which element of his approach provoked the ire of the Oath Keepers—identified by the SPLC as one of the largest anti-government organizations in the country—is not clear (Southern Poverty Law Center 2017b). The differences between the far-right inspired by 4chan and the paramilitary far-right drawn mostly from ex-military and ex-police ranks may be largely aesthetic, but these differences nonetheless matter.[3]

    None of this is to discount the threat to life posed by the young and awkward meme-spouting members of the far-right. Brandon Russell, aged 21, was arrested by authorities in Florida after being found in possession of bomb-making materials, including explosive chemicals and radioactive substances. He admitted his affiliation with an online neo-Nazi group called Atomwaffen, German for “atomic weapons,” an SPLC-identified hate group (Southern Poverty Law Center 2017a).

    Russell was not found due to an investigation into terroristic far-right groups, but because of a bizarre series of events in which one of his three roommates, who claimed to have originally shared the neo-Nazi beliefs of the others, allegedly converted to Islam and murdered the other two for disrespecting his new faith. Police only found Russell’s bomb and radioactive materials while examining this crime scene (Elfrink 2017).

    The Trump regime and its Department of Justice, then headed by Jefferson Beauregard Sessions, indicated that it planned to cut off what little funding had been directed towards investigating far-right and white supremacist extremist groups, focusing instead purely on the specter of Islamic extremism (Pasha-Robinson 2017).

    By several metrics, far-right terrorism is a greater threat to Americans than terrorism connected to Islamism, and seems on track to maintain this record (Parkin et al. 2017).

    A federal judge ruled that Russell, who was found to own a framed photograph of Oklahoma City bomber Timothy McVeigh—whose ammonium nitrate bomb killed 168 people in 1995—may be released on bond, writing that there was no evidence that he used or planned to use a homemade radioactive bomb (Phillips 2017). Admitted affiliation with neo-Nazi ideology, which glorifies a regime known for massacring leftists, minorities, and Jews, was not taken as evidence of a desire to maim or kill leftists, minorities, or Jews.

    Just like the well-intentioned 4chan moderators who believed in the compartmentalization or “containability” of ideology, U.S. Magistrate Judge Thomas McCoun III seemed to believe that neo-Nazi ideology is little more than a hobby that can be pursued separately from one’s procurement and assembly of chemical bombs. McCoun did not consider that far-right politics is not a simple interest, but produces a worldview that generates answers to why one assembles a dirty bomb and how it is ultimately used.

    Judge McCoun only changed his mind and revoked the order to grant Russell bail after seeing video testimony from Russell’s former roommate, who claimed Russell planned to use a radioactive bomb to attack a nuclear power plant in Florida with the intention of irradiating ocean water and wiping out “parts of the Eastern Seaboard” (Sullivan 2017). Living with other neo-Nazis, it seems, gave Russell the confidence and safe space he needed to plan to carry out a McVeigh-style attack to inflict massive loss of life.[4]

    Finally, one should note that Russell, who might still be free were it not for the brash murders allegedly committed by his roommate, is also a member of the Florida National Guard. The internet far-right may look and sound quite different from the paramilitary Oath Keepers today, but that difference may change in time, as well.

    _____

    Dennis Erasmus (pseudonym) (@erasmusNYT) lived in Charlottesville, Virginia for six years prior to 2016. He has studied political theory and was active on 4chan for roughly eight years.

    Back to the essay

    _____

    Notes
    [1] Statement posted by moot in Nov. to the /tmp/ board at http://content.4chan.org/tmp/r9knew.txt, previously archived at the WebCite 4chan archive at http://www.webcitation.org/6159jR9pC, and accessed by the author on July 9, 2017. The archive was deleted in early 2019.

    [2] The New York Observer, now a web-only publication, came under the ownership of Jared Kushner, President Donald J. Trump’s son-in-law, in 2006. The Observer is one of relatively few papers to have endorsed Trump during the 2016 Republican primary.

    [3] The alt-right activist who said “these are good memes” is supposedly William Fears, who was present at the Charlottesville 2017 riot and was arrested later that year in connection with a shooting directed at anti-racist protesters in Florida. While Fears’ brother pleaded guilty to accessory after the fact to attempted first-degree murder, charges against Fears were dropped so that he could be extradited to Texas for hitting and choking his ex-girlfriend. See Brett Barrouquere, “Texas Judge Hikes Bond on White Supremacist William Fears” (SPLC, Apr 17, 2018) and Brett Barrouquere, “Cops Say Richard Spencer Supporter William Fears IV Choked Girlfriend Days Before Florida Shooting” (SPLC, Jan 23, 2018).

    [4] Russell pleaded guilty to possession of an unlicensed destructive device and improper storage of explosive materials. He was sentenced to five years in prison. U.S. District Judge Susan Bucklew said “it’s a difficult case” and that Russell seemed “like a very smart young man.” See “Florida Neo-Nazi Leader Gets 5 Years for Having Explosive Material” (AP, Jan 9, 2018).
    _____

    Works Cited

     

  • Leif Weatherby — Irony and Redundancy: The Alt Right, Media Manipulation, and German Idealism

    Leif Weatherby — Irony and Redundancy: The Alt Right, Media Manipulation, and German Idealism

    Leif Weatherby

    This essay has been peer-reviewed by “The New Extremism” special issue editors (Adrienne Massanari and David Golumbia), and the b2o: An Online Journal editorial board.

    Take three minutes to watch this clip from a rally in New York City just after the 2016 presidential election.[i] In the impromptu interview, we learn that Donald Trump is going to “raise the ancient city of Thule” and “complete the system of German Idealism.” In what follows, I’m going to interpret what the troll in the video—known only by his twitter handle, @kantbot2000—is doing here. It involves Donald Trump, German Idealism, metaphysics, social media, and above all irony. It’s a diagnosis of the current relationship between mediated speech and politics. I’ll come back to Kantbot presently, but first I want to lay the scene he’s intervening in.

    A small but deeply networked group of self-identifying trolls and content-producers has used the apparently unlikely rubric of German philosophy to diagnose our media-rhetorical situation. There’s less talk of trolls now than there was in 2017, but that doesn’t mean they’re gone.[ii] Take the recent self-introductory op-ed by Brazil’s incoming foreign minister, Ernesto Araújo, which bizarrely accuses Ludwig Wittgenstein of undermining the nationalist identity of Brazilians (and everyone else). YouTube remains the global channel of this Alt Right[iii] media game, as Andre Pagliarini has documented: one Olavo de Carvalho, whose channel is dedicated to the peculiar philosophical obsessions of the global Alt Right, is probably responsible for this foreign minister taking the position, apparently intended as policy, “I don’t like Wittgenstein,” and possibly for his appointment in the first place. The intellectuals playing this game hold that Marxist and postmodern theory caused the political world to take its present shape, and argue that a wide variety of theoretical tools should be reappropriated to the Alt Right. This situation presents a challenge to the intellectual Left on both epistemological and political grounds.

    The core claim of this group—one I think we should take seriously—is that mediated speech is essential to politics. In a way, this claim is self-fulfilling. Araújo, for example, imagines that Wittgenstein’s alleged relativism is politically efficacious; Wittgenstein arrives pre-packaged by the YouTube phenomenon Carvalho; Araújo’s very appointment seems to have been the result of Carvalho’s influence. That this tight ideological loop should realize itself by means of social media is not surprising. But in our shockingly naïve public political discussions—at least in the US—emphasis on the constitutive role of rhetoric and theory appears singular. I’m going to argue that a crucial element of this scene is a new tone and practice of irony that permeates the political. This political irony is an artefact of 2016, most directly, but it lurks quite clearly beneath our politics today. And to be clear, the self-styled irony of this group is never at odds with a wide variety of deeply held, and usually vile, beliefs. This is because irony and seriousness are not, and have never been, mutually exclusive. The idea that the two cannot cohabit is one of the more obvious weak points of our attempt to get an analytical foothold on the global Alt Right—to do so, we must traverse the den of irony.

    Irony has always been a difficult concept, slippery to the point of being undefinable. It usually means something like “when the actual meaning is the complete opposite from the literal meaning,” as Ethan Hawke tells Winona Ryder in 1994’s Reality Bites. Ryder’s plaint, “I know it when I see it,” points to just how many questions this definition raises. What counts as a “complete opposite”? What is the channel—rhetorical, physical, or otherwise—by which this dual expression can occur? What does it mean that what we express can contain not only implicit or connotative content, but can in fact make our speech contradict itself to some communicative effect? And for our purposes, what does it mean when this type of question embeds itself in political communication?

    Virtually every major treatment of irony since antiquity—from Aristotle to Paul de Man—acknowledges these difficulties. Quintilian gives us the standard definition: that the meaning of a statement is in contradiction to what it literally extends to its listener. But he still equivocates about its source:

    eo vero genere, quo contraria ostenduntur, ironia est; illusionem vocant. quae aut pronuntiatione intelligitur aut persona aut rei natura; nam, si qua earum verbis dissentit, apparet diversam esse orationi voluntatem. Quanquam id plurimis tropis accidit, ut intersit, quid de quoque dicatur, quia quod dicitur alibi verum est.

    On the other hand, that class of allegory in which the meaning is contrary to that suggested by the words, involve an element of irony, or, as our rhetoricians call it, illusio. This is made evident to the understanding either by the delivery, the character of the speaker or the nature of the subject. For if any one of these three is out of keeping with the words, it at once becomes clear that the intention of the speaker is other than what he actually says. In the majority of tropes it is, however, important to bear in mind not merely what is said, but about whom it is said, since what is said may in another context be literally true. (Quintilian 1920, book VIII, section 6, 53-55)

    Speaker, ideation, context, addressee—all of these are potential sources for the contradiction. In other words, irony is not limited to the intentional use of contradiction, to a wit deploying irony to produce an effect. Irony slips out of precise definition even in the version that held sway for more than a millennium in the Western tradition.

    I’m going to argue in what follows that irony of a specific kind has re-opened what seemed a closed channel between speech and politics. Certain functions of digital, and specifically social, media enable this kind of irony, because the very notion of a digital “code” entailed a kind of material irony to begin with. This type of irony can be manipulated, but also exceeds anyone’s intention, and can be activated accidentally (this part of the theory of irony comes from the German Romantic Friedrich Schlegel, as we will see). It not only amplifies messages, but does so by resignifying, exploiting certain capacities of social media. Donald Trump is the master practitioner of this irony, and Kantbot, I’ll propose, is its media theorist. With this irony, political communication has exited the neoliberal speech regime; the question is how the Left responds.

    i. “Donald Trump Will Complete the System of German Idealism”

    Let’s return to our video. Kantbot is trolling—hard. There’s obvious irony in the claim that Trump will “complete the system of German Idealism,” the philosophical network that began with Immanuel Kant’s Critique of Pure Reason (1781) and ended (at least on Kantbot’s account) only in the 1840s with Friedrich Schelling’s philosophy of mythology. Kant is best known for having cut a middle path between empiricism and rationalism. He argued that our knowledge is spontaneous and autonomous, not derived from what we observe but combined with that observation and molded into a nature that is distinctly ours, a nature to which we “give the law,” set off from a world of “things in themselves” about which we can never know anything. This philosophy touched off what G.W.F. Hegel called a “revolution,” one that extended to every area of human knowledge and activity. History itself, Hegel would famously claim, was the forward march of spirit, or Geist, the logical unfolding of self-differentiating concepts that constituted nature, history, and institutions (including the state). Schelling, Hegel’s one-time roommate, had deep reservations about this triumphalist narrative, reserving a place for the irrational, the unseen, the mythological, in the process of history. Hegel, according to a legend propagated by his students, finished his 1807 Phenomenology of Spirit while listening to the guns of the battle of Jena-Auerstedt, where Napoleon crushed the Prussian army shortly after the Holy Roman Empire had come to its final end. Hegel saw himself as the philosopher of Napoleon’s moment, at least in 1807; Kantbot sees himself as the Hegel to Donald Trump’s Napoleon (more on this below).

    Rumor has it that Kantbot is an accountant in NYC, although no one has been able to doxx him yet. His Twitter account has more than 26,000 followers at the time of writing. This modest fame is complemented by a deep lateral network among the biggest stars on the Far Right. To my eye he has made little progress in the last year, either in gaining fame or in developing his theory, on which he has recently promised a book “soon.” Conservative media reported that he was interviewed by the FBI in 2018. His newest line of thought involves “hate hoaxes” and questioning why he can’t say the n-word—a regression to platitudes of the extremist Right that have been around for decades, as David Neiwert has extensively documented (Neiwert 2017). Sprinkled between these are exterminationist fantasies—about “Spinozists.” He toggles between conspiracy, especially of the false-flag variety, hate-speech-flirtation, and analysis. He has recently started a podcast. The whole presentation is saturated in irony and deadly serious:

    Asked how he identifies politically, Kantbot recently claimed to be a “Stalinist, a TERF, and a Black Nationalist.” Mike Cernovich, the Alt Right leader who runs the website Danger and Play, has been known to ask Kantbot for advice. There is also an indirect connection between Kantbot and “Neoreaction” or NRx, a brand of “accelerationism” which is itself only blurrily constituted by the blog-work of Curtis Yarvin (aka Mencius Moldbug) and by enthusiasm for the philosophy of Nick Land (another reader of Kant). Kantbot also “debated” White Nationalist thought leader Richard Spencer, presenting the spectacle of Spencer, who wrote a master’s thesis on Adorno’s interpretation of Wagner, listening thoughtfully to Kantbot’s explanation of Kant’s rejection of Johann Gottfried Herder, rather than the body count, as the reason to reject Marxism.

    When conservative pundit Ann Coulter got into a Twitter feud with Delta over a seat reassignment, Kantbot came to her defense. She retweeted the captioned image below, which was then featured on Breitbart News in an article called “Zuckerberg 2020 Would be a Dream Come True for Republicans.”

    Kantbot’s partner-in-crime, @logo-daedalus (the very young guy in the maroon hat in the video), has recently jumped on a minor fresh wave of ironist political memeing in support of the UBI-focused presidential candidate Andrew Yang (#yanggang). He was once asked by Cernovich if he had read Michael Walsh’s book, The Devil’s Pleasure Palace: The Cult of Critical Theory and the Subversion of the West:

    The autodidact intellectualism of this Alt Right dynamic duo—Kantbot and Logodaedalus—illustrates several roles irony plays in the relationship between media and politics. Kantbot and Logodaedalus see themselves as the avant-garde of a counterculture on the brink of a civilizational shift, participating in the sudden proliferation of “decline of the West” narratives. They alternate targets on Twitter, and think of themselves as “producers of content” above all. To produce content, according to them, is to produce ideology. Kantbot is singularly obsessed with the period between about 1770 and 1830 in Germany. He thinks of this period as the source of all subsequent intellectual endeavor, the only period of real philosophy—a thesis he shares with Slavoj Žižek (Žižek 1993).

    This notion has been treated monographically by Eckart Förster in The Twenty-Five Years of Philosophy, a book Kantbot listed in May of 2017 under “current investigations.” His twist on the thesis is that German Idealism is saturated in a form of irony. German Idealism never makes culture political as such. Politics comes from a culture that’s more capacious than any politics, so any relation between the two is refracted by a deep difference that appears, when they are brought together, as irony. Marxism, and all that proceeds from Marxism, including contemporary Leftism, is a deviation from this path.


    This reading of German Idealism is a search for the metaphysical origins of a common conspiracy theory in the Breitbart wing of the Right called “cultural Marxism” (the idea predates Breitbart: see Jay 2011; Huyssen 2017; Berkowitz 2003. Walsh’s 2017 The Devil’s Pleasure Palace, which LogoDaedalus mocked to Cernovich, is one of the touchstones of this theory). Breitbart’s own account states that there is a relatively straight line from Hegel’s celebration of the state to Marx’s communism to Woodrow Wilson’s and Franklin Delano Roosevelt’s communitarianism—and on to the critical theory of Theodor W. Adorno and Herbert Marcuse (this is the actual “cultural Marxism,” one supposes), Saul Alinsky’s community organizing, and (surprise!) Barack Obama’s as well (Breitbart 2011, 105-37). The phrase “cultural Marxism” is a play on the Nazi phrase “cultural Bolshevism,” a conspiracy theory that targeted Jews as alleged spies and collaborators of Stalin’s Russia. The anti-Semitism is only slightly more concealed in the updated version. The idea is that Adorno and Marcuse took control of the cultural matrix of the United States and made the country “culturally communist.” In this theory, individual freedom is always second to an oppressive community in the contemporary US. Between Breitbart’s adoption of critical theory and NRx (see Haider 2017; Beckett 2017; Noys 2014)—not to mention the global expansion of this family of theories by figures like Carvalho—it’s clear that the “Alt Right” is a theory-deep assemblage. The theory is never just analysis, though. It’s always a question of intervention, or media manipulation (see Marwick and Lewis 2017).

    Breitbart himself liked to capture this blend in his slogan “politics is downstream from culture.” Breitbart’s news organization implicitly cedes the theoretical point to Adorno and Marcuse, trying to build cultural hegemony in the online era. Reform the cultural, dominate the politics—all on the basis of narrative and media manipulation. For the Alt Right, politics isn’t “online” or “not,” but will always be both.

    In mid-August of 2017, a flap in the National Security Council was caused by a memo, probably penned by staffer Rich Higgins (who reportedly has ties to Cernovich), that appeared to accuse then National Security Adviser, H. R. McMaster, of supporting or at least tolerating Cultural Marxism’s attempt to undermine Trump through narrative (see Winter and Groll 2017). Higgins and other staffers associated with the memo were fired, a fact which Trump learned from Sean Hannity and which made him “furious.” The memo, about which the president “gushed,” defines “the successful outcome of cultural Marxism [as] a bureaucratic state beholden to no one, certainly not the American people. With no rule of law considerations outside those that further deep state power, the deep state truly becomes, as Hegel advocated, god bestriding the earth” (Higgins 2017). Hegel defined the state as the goal of all social activity, the highest form of human institution or “objective spirit.” Years later, it is still Trump vs. the state, in its belated thrall to Adorno, Marcuse, and (somehow) Hegel. Politics is downstream from German Idealism.

    Kantbot’s aspiration was to expand and deepen the theory of this kind of critical manipulation of the media—but he wants to rehabilitate Hegel. In Kantbot’s work we begin to glimpse how irony plays a role in this manipulation. Irony is play with the very possibility of signification in the first place. Inflected through digital media—code and platform—it becomes not just play but its own expression of the interface between culture and politics, overlapping with one of the driving questions of the German cultural renaissance around 1800. Kantbot, in other words, diagnosed and (at least at one time) aspired to practice a particularly sophisticated combination of rhetorical and media theory as political speech in social media.

    Consider this tweet:



    After an innocuous webcomic frog became infamous in 2016, when the Clinton campaign denounced its use and the Anti-Defamation League took the extraordinary step of adding the meme to its database of hate symbols, Pepe the Frog gained a kind of cult status. Kantbot’s reading of the phenomenon is that the “point is demonstration of power to control meaning of sign in modern media environment.” If this sounds like French Theory, then one “Johannes Schmitt” (whose profile thumbnail appears to be an SS officer) agrees. “Starting to sound like Derrida,” he wrote. To which Kantbot responds, momentously: “*schiller.”



    The asterisk-correction contains multitudes. Kantbot is only too happy to jettison the “theory,” but insists that the manipulation of the sign in its relation to the media environment maintains and alters the balance between culture and politics. Friedrich Schiller, whose classical aesthetic theory claims just this, is a recurrent figure for Kantbot. The idea, it appears, is to create a culture that is beyond politics and from which politics can be downstream. To that end, Kantbot opened his own online venue, the “Autistic Mercury,” named after Der teutsche Merkur, one of the German Enlightenment’s central organs.[iv] For Schiller, there was a “play drive” that mediated between “form” and “content” drives. It preserved the autonomy of art and culture and had the potential to transform the political space, but only indirectly. Kantbot wants to imitate the composite culture of the era of Kant, Schiller, and Hegel—just as they built their classicism on Johann Winckelmann’s famous doctrine that an autonomous and inimitable culture must be built on imitation of the Greeks. Schiller was suggesting that art could prevent another post-revolutionary Terror like the one that had engulfed France. Kantbot is suggesting that the metaphysics of communication—signs as both rhetoric and mediation—could resurrect a cultural vitality that got lost somewhere along the path from Marx to the present. Donald Trump is the instrument of that transformation, but its full expression requires more than DC politics. It requires (online) culture of the kind the campaign unleashed but the presidency has done little more than maintain. (Kantbot uses Schiller for his media analysis too, as we will see.) Spencer and Kantbot agreed during their “debate” that perhaps Trump had done enough before he was president to justify the disappointing outcomes of his actual presidency. Conservative policy-making earns little more than scorn from this crowd, if it is detached from the putative real work of building the Alt Right avant-garde.



    According to one commenter on YouTube, Kantbot is “the troll philosopher of the kek era.” Kek is the god of the trolls. His name comes from the way the letters LOL are rendered across factions in the massively multiplayer online role-playing game World of Warcraft: “KEK” is what the enemy sees when you laugh out loud to someone on your team, an intuitively crackable code that was made into an idol to worship. Kek—a half-fake demi-god—illustrates the balance between irony and ontology in the rhetorical media practice known as trolling.


    The name of the idol, it turned out, was also the name of an actual ancient Egyptian demi-god (KEK), a phenomenon that confirmed his divine status, in an example of so-called “meme magic.” Meme magic is when—often by praying to KEK or relying on a numerological system based on the random numbers assigned to users of 4Chan and other message boards—something that exists only online manifests IRL, “in real life” (Burton 2016). Examples include Hillary Clinton’s illness in the late stages of the campaign (widely and falsely rumored—e.g. by Cernovich—before a real yet minor illness was confirmed), and of course Donald Trump’s actual election. Meme magic is everywhere: it names the channel between online and offline.

    Meme magic is both drenched in irony and deeply ontological. What is meant is just “for the lulz,” while what is said is magic. This is irony of the rhetorical kind—right up until it works. The case in point is the election, where the result, and whether the trolls helped, hovers between reality and magic. First there is meme generation, usually playfully ironic. Something happens that resembles the meme. Then the irony is retroactively assigned a magical function. But statements about meme magic are themselves ironic. They use the contradiction between reality and rhetoric (between Clinton’s predicted illness and her actual pneumonia) as the generator of a second-order irony (the claim that Trump’s election was caused by memes is itself a meme). It’s tempting to see this just as a juvenile game, but we shouldn’t dismiss the way the irony scales between the different levels of content-production and interpretation. Irony is rhetorical and ontological at once. We shouldn’t believe in meme magic, but we should take this recursive ironizing function very seriously indeed. It is this kind of irony that Kantbot diagnoses in Trump’s manipulation of the media.

    ii. Coding Irony: Friedrich Schlegel, Claude Shannon, and Twitter

    The ongoing inability of the international press to cover Donald Trump in a way that measures the impact of his statements rather than their content stems from this use of irony. We’ve gotten used to fake news and hyperbolic tweets—so used to these that we’re missing the irony that’s built in. Every time Trump denies something about collusion or says something about the coal industry that’s patently false, he’s exploiting the difference between two sets of truth-valuations that conflict with one another (e.g. racism and pacifism). That splits his audience—something that the splitting of the message in irony allows—and works both to fight his “enemies” and to build solidarity in his base. Trump has changed the media’s overall expression, making not his statements but the very relation between content and platform ironic. This objective form of media irony is not to be confused with “wit.” Donald Trump is not “witty.” He is, however, a master of irony as a tool for manipulation built into the way digital media allow signification to occur. He is the master of an expanded sense of irony that runs throughout the history of its theory.

    When White Nationalists descended on Charlottesville, Virginia, on August 11, 2017, leading to the death of one counter-protester the next day, Trump dragged his feet in naming “racism.” He did, eventually, condemn the groups by name—prefacing his statements with a short consideration of the economy, a dog-whistle about what comes first (actually racism, for which “economy” has become an erstwhile cipher). In the interim, however, his condemnations of violence “as such” led Spencer to tweet this:

    Of course, two days later, Trump would explicitly blame the “Alt Left” for violence it did not commit. Before that, however, Spencer’s irony here relied on Trump’s previous—malicious—irony. By condemning “all” violence when only one kind of violence was at issue, Trump was attempting to split the signal of his speech. The idea was to let the racists know that they could continue, by issuing a condemnation of their actions that paid lip service to the non-violent ideals of the liberal media. Spencer gleefully used the internal contradiction of Trump’s speech, calling attention to the side of the message that was supposed to be “hidden.” Even the apparently non-ironic condemnation of “both sides” exploited a contradiction not in the statement itself, but in the way it is interpreted by different outlets and political communities. Trump’s invocation of the “Alt Left” confirmed the suspicions of those on the Right, panicked the Center, and all but forced the Left to adopt the term. The filter bubbles, meanwhile, allowed this single message to deliver contradictory meanings on different news sites—one reason headlines across the political spectrum are often identical as statements, but opposite in patent intent. Making the dog whistle audible, however, doesn’t spell the “end of the ironic Nazi,” as Brian Feldman commented (Feldman 2017). It just means that the irony isn’t opposed to but instead part of the politics. Today this form of irony is enabled and constituted by digital media, and it’s not going away. It forms an irreducible part of the new political situation, one that we ignore or deny at our own peril.

    Irony isn’t just intentional wit, in other words—as Quintilian already knew. One reason we nevertheless tend to confuse wit and irony is that the expansion of irony beyond the realm of rhetoric—usually dated to Romanticism, which also falls into Kantbot’s period of obsession—made irony into a category of psychology and style. Most treatments of irony take this as an assumption: modern life is drenched in the stuff, so it isn’t “just” a trope (Behler 1990). But it is a feeling, one that you get from Weird Twitter but also from the constant stream of Facebook announcements about leaving Facebook. Quintilian already points the way beyond this gestural understanding. The problem is the source of the contradiction. It is not obvious what allows for contradiction, where it can occur, what conditions satisfy it, and thus form the basis for irony. If the source is dynamic, unstable, then the concept of irony, as Paul de Man pointed out long ago, is not really a concept at all (de Man 1996).

    The theoretician of irony who most squarely accounts for its embeddedness in material and media conditions is Friedrich Schlegel. In nearly all cases, Schlegel writes, irony serves to reinforce or sharpen some message by means of the reflexivity of language: by contradicting the point, it calls it that much more vividly to mind. (Remember when Trump said, in the 2016 debates, that he refused to invoke Bill Clinton’s sexual history for Chelsea’s sake?) But there is another, more curious type:

    The first and most distinguished [kind of irony] of all is coarse irony; to be found most often in the actual nature of things and which is one of its most generally distributed substances [in der wirklichen Natur der Dinge und ist einer ihrer allgemein verbreitetsten Stoffe]; it is most at home in the history of humanity (Schlegel 1958-, 368).





    In other words, irony is not merely the drawing of attention to formal or material conditions of the situation of communication, but also a widely distributed “substance” or capacity in material. Twitter irony finds this substance in the platform and its underlying code, as we will see. If irony is both material and rhetorical, this means that its use is an activation of a potential in the interface between meaning and matter. This could allow, in principle, an intervention into the conditions of signification. In this sense, irony is the rhetorical term for what we could call coding, the tailoring of language to channels in technologies of transmission. Twitter reproduces an irony that is built into any attempt to code language, as we are about to see. And it’s the overlap of code, irony, and politics that Kantbot marshals Hegel to address.

    Coded irony—irony that is both rhetorical and digitally enabled—exploded onto the political scene in 2016 through Twitter. Twitter was the medium through which the political element of the messageboards broke through (not least because of Trump’s nearly 60 million followers, even if nearly half of them are bots). It is far from the only politicized social medium, as a growing literature is describing (Phillips and Milner 2017; Phillips 2016; Milner 2016; Goerzen 2017). But it has been a primary site of the intimacy of media and politics over the course of 2016 and 2017, and I think that has something to do with Twitter itself, and with the relationship between encoded communications and irony.

    Take this retweet, which captures a great deal about Twitter:

    “Kim Kierkegaardashian,” or @KimKierkegaard, joined Twitter in June 2012 and has about 259,000 followers at the time of writing. The account mashes up Kardashian’s self-promoting, brand-sales-oriented tweet style with the proto-existentialism of Søren Kierkegaard. Take, for example, an early tweet from 8 July, 2012: “I have majorly fallen off my workout-eating plan! AND it’s summer! But to despair over sin is to sink deeper into it.” The account sticks close to Kardashian’s actual tweets and Kierkegaard’s actual words. In the tweet above, from April 2017, @KimKierkegaard has retweeted Kardashian herself incidentally formulating one of Kierkegaard’s central ideas in the proprietary language of social media. “Omg” as shorthand takes the already nearly entirely secular phrase “oh my god” and collapses any trace of transcendence. The retweet therefore returns us to the opposite extreme, in which anxiety points us to the finitude of human existence in Kierkegaard. If we know how to read this, it is a performance of that other Kierkegaardian bellwether, irony.

    If you were to encounter Kardashian’s tweet without the retweet, there would be no irony at all. In the retweet, the tweet is presented as an object and resignified as its opposite. Note that this is a two-way street: until November 2009, there were no retweets. Before then, one had to type “RT” and then paste the original tweet in. Twitter responded, piloting a button that allows the re-presentation of a tweet (Stone 2009). This has vastly contributed to the sense of irony, since the speaker is also split between two sources, such that many accounts have some version of “RTs not endorsements” in their description. Perhaps political scandal is so often attached to RTs because the source as well as the content can be construed in multiple different and often contradictory ways. Schlegel would have noted that this is a case where irony swallows the speaker’s authority over it. That situation was forced into the code by the speech, not the other way around.

    I’d like to call the retweet a resignificatory device, distinct from amplificatory. Amplificatory signaling cannibalizes a bit of redundancy in the algorithm: the more times your video has been seen on YouTube, the more likely it is to be recommended (although the story is more complicated than that). Retweets certainly amplify the original message, but they also reproduce it under another name. They have the ability to resignify—as the “repost” function on Facebook also does, to some extent.[v] Resignificatory signaling takes the unequivocal messages at the heart of the very notion of “code” and makes them rhetorical, while retaining their visual identity. Of course, no message is without an effect on its receiver—a point that information theory made long ago. But the apparent physical identity of the tweet and the retweet forces the rhetorical aspect of the message to the fore. In doing so, it draws explicit attention to the deep irony embedded in encoded messages of any kind.

    Twitter was originally written in Ruby, an object-oriented programming language, on the model-view-controller (MVC) framework Ruby on Rails, and the code matters. Object-oriented languages allow any term to be treated either as an object or as an expression, making Shannon’s observations on language operational.[vi] The retweet is an embedding of this ability to switch any term between these two basic functions. We can do this in language, of course (that’s why object-oriented languages are useful). But when the retweet is presented not as copy-pasted but as a visual reproduction of the original tweet, the expressive nature of the original tweet is made an object, imitating the capacity of the coding language. In other words, Twitter has come to incorporate the object-oriented logic of its programming language in its capacity to signify. At the level of speech, anything can be an object on Twitter—on your phone, you literally touch it and it presents itself. Most things can be resignified through one more touch, and if not they can be screencapped and retweeted (for example, the number of followers one has, a since-deleted tweet, etc.). Once something has come to signify in the medium, it can be infinitely resignified.
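
    To make the expression/object switch concrete, here is a minimal sketch. It is illustrative only: it is written in Python rather than Twitter’s Ruby, and the names (Tweet, quoted, render) are invented for the example, not drawn from Twitter’s actual data model.

        # Illustrative sketch only: invented names, not Twitter's actual data model.
        from dataclasses import dataclass
        from typing import Optional


        @dataclass
        class Tweet:
            author: str
            text: str
            quoted: Optional["Tweet"] = None  # a retweet carries another tweet as its object

            def render(self) -> str:
                """Read the tweet as an expression, a statement made by its author."""
                if self.quoted is None:
                    return f"@{self.author}: {self.text}"
                # In a retweet the inner tweet is no longer only speech: it becomes an
                # object embedded in, and resignified by, the outer expression.
                return f"@{self.author} RT [{self.quoted.render()}]"


        original = Tweet("KimKardashian", "this anxiety omg")
        retweet = Tweet("KimKierkegaard", "", quoted=original)

        print(original.render())  # the tweet read as an expression
        print(retweet.render())   # the same tweet, now the object of another expression

    The point of the sketch is simply that the same datum can be evaluated as speech or handled as an object of further speech, which is the two-way switch attributed above to the retweet.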

    When, as in a retweet, an expression is made into an object of another expression, its meaning is altered. This is because its source is altered. A statement of any kind requires the notion that someone has made that statement. This means that a retweet, by making an expression into an object, exemplifies the contradiction between subject and object—the very contradiction on which Kant had based his revolutionary philosophy. Twitter is fitted, and has been throughout its existence retrofitted, to generalize this speech situation. It is the platform of the subject-object dialectic, as Hegel might have put it. By presenting subject and object in a single statement—the retweet as expression and object all at once—Twitter embodies what rhetorical theory has called irony since the ancients. It is irony as code. This irony resignifies and amplifies the rhetorical irony of the dog whistle, the troll, the President.

    Coding is an encounter between two sets of material conditions: the structure of a language, and the capacity of a channel. This was captured in truly general form for the first time in Claude Shannon’s famous 1948 paper, “A Mathematical Theory of Communication,” in which Shannon gives his schematic diagram of a general communication system: an information source feeds a transmitter, whose signal passes through a noise-prone channel to a receiver and on to its destination.

    Shannon’s achievement was a general formula for the relation between the structure of the source and the noise in the channel.[vii] If the set of symbols can be fitted to signals complex or articulated enough to arrive through the noise, then nearly frictionless communication can be engineered. The source—his preferred example was written English—has a structure that limits its “entropy.” If you’re looking at one letter in English, for example, and you have to guess what the next one will be, you theoretically have 27 choices (the 26 letters plus the space). But the likelihood, if the letter you’re looking at is, for example, “q,” that the next letter will be “u” is very high. The likelihood for “x” is extremely low. This predictability is called “redundancy,” a limitation on the absolute measure of chaos, or entropy, that the number of elements imposes. No source for communication can be entirely random, because without patterns of one kind or another we can’t recognize what’s being communicated.[viii]
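
    In Shannon’s own terms (these are the standard definitions from the 1948 paper, restated here for orientation), a source whose symbols occur with probabilities $p_i$ has entropy $H$, an alphabet of $n$ symbols has maximum entropy $H_{\max}$, and redundancy $R$ measures the gap between the two:

        $$H = -\sum_i p_i \log_2 p_i, \qquad H_{\max} = \log_2 n, \qquad R = 1 - \frac{H}{H_{\max}}.$$

    For the twenty-seven symbols of printed English (the twenty-six letters plus the space), $H_{\max} = \log_2 27 \approx 4.75$ bits per symbol; conditional structure of the “q”-then-“u” kind pulls the actual entropy of English well below that ceiling.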

    We tend to confuse entropy and the noise in the channel, and it is crucial to see that they are not the same thing. The channel is noisy, while the source is entropic. There is, of course, entropy in the channel—everything is subject to the second law of thermodynamics, without exception. But “entropy” is not in any way comparable to noise in Shannon, because “entropy” is a way of describing the conditional restraints on any structured source for communication, like the English language, the set of ideas in the brain, or what have you. Entropy is a way to describe the opposite of redundancy in the source: it expresses probability rather than the slow disintegration, the “heat death,” with which it is usually associated.[ix] If redundancy = 1, we have a kind of absolute rule or pure pattern. Redundancy works syntactically, too: “then” or “there” after the phrase “see you” is a high-level redundancy that is coded into SMS services.

    This is what Shannon calls a “conditional restraint” on the theoretical absolute entropy (based on number of total parts), or freedom in choosing a message. It is also the basis for autocorrect technologies, which obviously have semantic effects, as the genre of autocorrect bloopers demonstrates.
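
    A toy version of this conditional restraint, assuming nothing more than a short sample string, shows the kind of statistics a predictive keyboard exploits. It is a sketch of the statistical idea only, not an actual autocorrect implementation:

        # Tally which character tends to follow which in a small sample, then
        # "predict" the next character. A sketch of conditional restraint only;
        # not a real autocorrect system.
        from collections import Counter, defaultdict

        sample = "see you then see you there see you soon"

        follows = defaultdict(Counter)
        for current, nxt in zip(sample, sample[1:]):
            follows[current][nxt] += 1

        def most_likely_next(ch):
            """Return the most frequent successor of ch in the sample, if any."""
            successors = follows.get(ch)
            return successors.most_common(1)[0][0] if successors else None

        print(repr(most_likely_next("u")))  # "' '": in this sample, 'u' is always followed by a space
        print(repr(most_likely_next("q")))  # "None": no evidence, no prediction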

    A large portion of Shannon’s paper is taken up with calculating the redundancy of written English, which he determines to be nearly 50%, meaning that half the letters can be removed from most sentences or distorted without disturbing our ability to understand them.[x]
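
    The calculation itself can be sketched in miniature. The following first-order estimate uses single-letter frequencies only, so it captures just a fraction of the redundancy Shannon measured; his figure of nearly 50% rests on longer-range structure (digrams, whole words, syntax) that this toy deliberately ignores.

        # Crude first-order redundancy estimate from single-letter frequencies alone.
        # Shannon's ~50% figure depends on much longer-range structure than this.
        import math
        from collections import Counter

        def first_order_redundancy(text):
            symbols = [c for c in text.lower() if c.isalpha() or c == " "]
            counts = Counter(symbols)
            total = len(symbols)
            entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
            h_max = math.log2(27)  # the 26 letters plus the space
            return 1 - entropy / h_max

        print(round(first_order_redundancy("see you then see you there see you soon"), 2))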

    The general process of coding, by Shannon’s lights, is a manipulation of the relationship between the structure of the source and the capacity of the channel as a dynamic interaction between two sets of evolving rules. Shannon’s statement that the “semantic aspects” of messages were “irrelevant to the engineering problem” has often been taken to mean he played fast and loose with the concept of language (see Hayles 1999; but see also Liu 2010; and for the complex history of Shannon’s reception Floridi 2010). But rarely does anyone ask exactly what Shannon did mean, or at least conceptually sketch out, in his approach to language. It’s worth pointing to the crucial role that source-structure redundancy plays in his theory, since it cuts close to Schlegel’s notion of material irony.

    Neither the source nor the channel is static. The scene of coding is open to restructuring at both ends. English is evolving; even its statistical structure changes over time. The channels, and the codes used to fit sources to them, are evolving too. There is no guarantee that integrated circuits will remain the hardware of the future. They did not yet exist when Shannon published his theory.

    This point can be hard to see in today’s world, where we encounter opaque packets of already-established code at every turn. It was easier to see for Shannon and those who followed him, since nothing was standardized, let alone commercialized, in 1948. But no amount of stack accretion can change the fact that mediated communication rests on the dynamic relation between relative entropy in the source and the way the channel is built.

    Redundancy points to this dynamic by its very nature. If there is absolute redundancy, nothing is communicated, because we already know the message with 100% certainty. With no redundancy, no message arrives at all. In between these two extremes, messages are internally objectified or doubled, but differ slightly from one another, in order to be communicable. In other words, every interpretable signal is a retweet. Redundancy, which stabilizes communicability by providing pattern, also ensures that the rules are dynamic. There is no fully redundant message. Every message’s redundancy lies between 0 and 1, and this is what allows it to function as expression or object. Twitter imitates the rules of source structure, showing that communication is the locale where formal and material constraints encounter one another. It illustrates this principle of communication by programming it into the platform as a foundation. Twitter exemplifies the dynamic situation of coding as Shannon defined it. Signification is resignification.

    If rhetoric is embedded this deeply into the very notion of code, then it must possess the capacity to change the situation of communication, as Schlegel suggested. But it cannot do this by fiat or by meme magic. The retweeted “this anxiety omg” hardly stands to change the statistical structure of English much. It can, however, point to the dynamic material condition of mediated signification in general, something Warren Weaver, who wrote a popularizing introduction to Shannon’s work, acknowledged:

    anyone would agree that the probability is low for such a sequence of words as “Constantinople fishing nasty pink.” Incidentally, it is low, but not zero; for it is perfectly possible to think of a passage in which one sentence closes with “Constantinople fishing,” and the next begins with “Nasty pink.” And we might observe in passing that the unlikely four-word sequence under discussion has occurred in a single good English sentence, namely the one above. (Shannon and Weaver 1964, 11)

    There is no further reflection in Weaver’s essay on this passage, but then, that is the nature of irony. By including the phrase “Constantinople fishing nasty pink” in the English language, Weaver has shifted its entropic structure, however slightly. This shift is marginal to our ability to communicate (I am amplifying it very slightly right now, as all speech acts do), but some shifts are larger-scale, like the introduction of a word or concept, or the rise of a system of notions that orient individuals and communities (ideology). These shifts always have the characteristic that Weaver points to here, which is that they double as expressions and objects. This doubling is a kind of generalized redundancy—or capacity for irony—built into semiotic systems, material irony flashing up into the rhetorical irony it enables. That is a Romantic notion enshrined in a founding document of the digital age.

    Now we can see one reason that retweeting is often the source of scandal. A retweet or repetition of content ramifies the original redundancy of the message and fragments the message’s effect. This is not to say it undermines that effect. Instead, it uses the redundancy in the source and the noise in the channel to split the message according to any one of the factors that Quintilian announced: speaker, audience, context. In the retweet, this effect is distributed across more than one of these areas, producing more than one contrary item, or internally multiple irony. Take Trump’s summer 2016 tweet of this anti-Semitic attack on Clinton—not a proper retweet, but a resignification of the same sort:

    The scandal that ensued mostly involved the source of the original content (white supremacists), and Trump skated through the incident by claiming that it wasn’t anti-Semitic anyway, it was a sheriff’s star, and that he had only “retweeted” the content. In disavowing the content in separate and seemingly contradictory ways,[xi] he signaled to his base that he was still committed to it, while maintaining at the level of the statement that he wasn’t. The effect was repeated again and again, and is a fundamental part of our government now. Trump’s positions are neither new nor interesting. What’s new is the way he amplifies his rhetorical maneuvers in social media. It is the exploitation of irony—not wit, not snark, not sarcasm—at the level of redundancy to maintain a signal that is internally split in multiple ways. This is not bad faith or stupidity; it’s an invasion of politics by irony. It’s also a kind of end to the neoliberal speech regime.

    iii. Irony and Politics after 2016, or Uncommunicative Capitalism

    The channel between speech and politics is open—again. That channel is saturated in irony, of a kind we are not used to thinking about. In 2003, following what were widely billed as the largest demonstrations in the history of the world, with tens of millions gathering in the streets globally to resist the George W. Bush administration’s stated intent to go to war, the United States did just that, invading Iraq on 20 March of that year. The consequences of that war have yet to be fully assessed. But while it is clear that we are living in its long foreign policy shadow, the seemingly momentous events of 2016 echo 2003 in a different way. 2016 was the year that blew open the neoliberal pax between the media, speech, and politics.

    No amount of noise could prevent the invasion of Iraq. As Jodi Dean has shown, “communicative capitalism” ensured that the circulation of signs was autotelic, proliferating language and ideology sealed off from the politics of events like war or even domestic policy. She writes that:

    In communicative capitalism, however, the use value of a message is less important than its exchange value, its contribution to a larger pool, flow or circulation of content. A contribution need not be understood; it need only be repeated, reproduced, forwarded. Circulation is the context, the condition for the acceptance or rejection of a contribution… Some contributions make a difference. But more significant is the system, the communicative network. (Dean 2005, 56)

    This situation no longer entirely holds. Dean’s brilliant analysis—along with those of many others who diagnosed the situation of media and politics in neoliberalism (e.g. Fisher 2009; Liu 2004)—forms the basis for understanding what we are living through and in now, even as the situation has changed. The notion that the invasion of Iraq could have been stopped by the protests recalls the optimism about speech’s effect on national politics of the New Left in the 1960s and after (raising the important question of whether the parallel protests against the Vietnam War played a causal role in its end). That model of speech is no longer entirely in force. Dean’s notion of a kind of metastatic media with few if any contributions that “make a difference” politically has yielded to a concerted effort to break through that isolation, to manipulate the circulatory media to make a difference. We live with communicative capitalism, but added to it is the possibility of complex rhetorical manipulation, a political possibility that resides in the irony of the very channels that made capitalism communicative in the first place.

    We know that authoritarianism engages in a kind of double-speak, talks out of “both sides of its mouth,” uses the dog whistle. It might be unusual to think of this set of techniques as irony—but I think we have to. Trump doesn’t just dog-whistle, he sends cleanly separate messages to differing effect through the same statement, as he did after Charlottesville. This technique keeps the media he is so hostile to on the hook, since their click rates are dependent on covering whatever extreme statement he has made that day. The constant and confused coverage this led to was then a separate signal sent through the same line—by means of the contradiction between humility and vanity, and between content and effect—to his own followers. In other words, he doesn’t use Twitter only to amplify his message, but to resignify it internally. Resignificatory media allow irony to create a vector of efficacy through political discourse. That is not exactly “communicative capitalism,” but something more like the field-manipulations recently described by Johanna Drucker: affective, indirect, non-linear (Drucker 2018). Irony happens to be the tool that is not instrumental, a non-linear weapon, a kind of material-rhetorical wave one can ride but not control. As Quinn Slobodian has been arguing, we have in no way left the neoliberal era in economics. But perhaps we have left its speech regime behind. If so, that is a matter of strategic urgency for the Left.

    iv. Hegelian Media Theory

    The new Right is years ahead on this score, in practice but also in analysis. In one of the first pieces in what has become a truly staggering wave of coverage of the NRx movement, Rosie Gray interviewed Kantbot extensively (Gray 2017). Gray’s main target was the troll Mencius Moldbug (Curtis Yarvin) whose political philosophy blends the Enlightenment absolutism of Frederick the Great with a kind of avant-garde corporatism in which the state is run not on the model of a corporation but as a corporation. On the Alt Right, the German Enlightenment is unavoidable.

    In his prose, Kantbot can be quite serious, even theoretical. He responded to Gray’s article in a Medium post with a long quotation from Schiller’s 1784 “The Theater as Moral Institution” as its epigraph (Kantbot 2017b). For Schiller, one had to imitate the literary classics to become inimitable. And he thought the best means of transmission would be the theater, with its live audience and electric atmosphere. The Enlightenment theater, as Kantbot writes, “was not only a source of entertainment, but also one of radical political education.”

    Schiller argued that the stage educated more deeply than secular law or morality, that its horizon extended farther into the true vocation of the human. Culture educates where the law cannot. Schiller, it turns out, also thought that politics is downstream from culture. Kantbot finds, in other words, a source in Enlightenment literary theory for Breitbart’s signature claim. That means that narrative is crucial to political control. But Kantbot extends the point from narrative to the medium in which narrative is told.

    Schiller gives us reason to think that the arrangement of the medium—its physical layout, the possibilities but also the limits of its mechanisms of transmission—is also crucial to cultural politics (this is why it makes sense to him to replace a follower’s reference to Derrida with “*schiller”). He writes that “The theater is the common channel through which the light of wisdom streams down from the thoughtful, better part of society, spreading thence in mild beams throughout the entire state.” Story needs to be embedded in a politically effective channel, and politically-minded content-producers should pay attention to the way that channel works, what it can do that another means of communication—say, the novel—can’t.

    Kantbot argues that social media is the new Enlightenment Stage. When Schiller writes that the stage is the “common channel” for light and wisdom, he’s using what would later become Shannon’s term—in German, der Kanal. Schiller thought the channel of the stage was suited to tempering barbarisms (both unenlightened “savagery” and post-enlightened Terrors like Robespierre’s). For him, story in the proper medium could carry information and shape habits and tendencies, influencing politics indirectly, eventually creating an “aesthetic state.” That is the role that social media have today, according to Kantbot. In other words, the constraints of a putatively biological gender or race are secondary to their articulation through the utterly complex web of irony-saturated social media. Those media allow the categories in the first place, but are so complex as to impose their own constraint on freedom. For those on the Alt Right, accepting and overcoming that constraint is the task of the individual—even if it is often assigned mostly to non-white or non-male individuals, while white males achieve freedom through complaint. Consistency aside, however, the notion that media form their own constraint on freedom, and the tool for accepting and overcoming that constraint is irony, runs deep.

    Kantbot goes on to use Schiller to critique Gray’s actual article about NRx: “Though the Altright [sic] is viewed primarily as a political movement, a concrete ideology organizing an array of extreme political positions on the issues of our time, I believe that understanding it is a cultural phenomena [sic], rather than a purely political one, can be an equally valuable way of conceptualizing it. It is here that the journos stumble, as this goes directly to what newspapers and magazines have struggled to grasp in the 21st century: the role of social media in the future of mass communication.” It is Trump’s retrofitting of social media—and now the mass media as well—to his own ends that demonstrates, and therefore completes, the system of German Idealism. Content production on social media is political because it is the locus of the interface between irony and ontology, where meme magic also resides. This allows the Alt Right to sync what we have long taken to be a liberal form of speech (irony) with extremist political commitments that seem to conflict with the very rhetorical gesture. Misogyny and racism have re-entered the public sphere. They’ve done so not in spite of but with the explicit help of ironic manipulations of media.

    The trolls sync this transformation of the media with misogynist ontology. Both are construed as constraints in the forward march of Trump, Kek, and culture in general. One disturbing version of the essentialist suggestion for understanding how Trump will complete the system of German Idealism comes from one “Jef Costello” (a troll named for Alain Delon’s character in Jean-Pierre Melville’s 1967 film, Le Samouraï):

    Ironically, Hegel himself gave us the formula for understanding exactly what must occur in the next stage of history. In his Philosophy of Right, Hegel spoke of freedom as “willing our determination.” That means affirming the social conditions that make the array of options we have to choose from in life possible. We don’t choose that array, indeed we are determined by those social conditions. But within those conditions we are free to choose among certain options. Really, it can’t be any other way. Hegel, however, only spoke of willing our determination by social conditions. Let us enlarge this to include biological conditions, and other sorts of factors. As Collin Cleary has written: Thus, for example, the cure for the West’s radical feminism is for the feminist to recognize that the biological conditions that make her a woman—with a woman’s mind, emotions, and drives—cannot be denied and are not an oppressive “other.” They are the parameters within which she can realize who she is and seek satisfaction in life. No one can be free of some set of parameters or other; life is about realizing ourselves and our potentials within those parameters.

    As Hegel correctly saw, we are the only beings in the universe who seek self-awareness, and our history is the history of our self-realization through increased self-understanding. The next phase of history will be one in which we reject liberalism’s chimerical notion of freedom as infinite, unlimited self-determination, and seek self-realization through embracing our finitude. Like it or not, this next phase in human history is now being shepherded by Donald Trump—as unlikely a World-Historical Individual as there ever was. But there you have it. Yes! Donald Trump will complete the system of German Idealism. (Costello 2017)

    Note the regular features of this interpretation: it is a nature-forward argument about social categories, universalist in application, misogynist in structure, and ultra-intellectual. Constraint is shifted not only from the social into the natural, but also back into the social again. The poststructuralist phrase “embracing our finitude” (put into the emphatic italics of Theory) underscores the reversal from semiotics to ontology by way of German Idealism. Trump, it seems, will help us realize our natural places in an old-world order even while pushing the vanguard trolls forward into the utopian future. In contrast to Kantbot’s own content, this reading lacks irony. That is not to say that the anti-Gender Studies and generally viciously misogynist agenda of the Alt Right is not being amplified throughout the globe, as we increasingly hear. But this dry analysis lacks the manipulative capacity that understanding social media in German Idealist terms brings with it. It does not resignify.

    Costello’s understanding is crude compared with that of Kantbot himself. The constraints, for Kantbot, are not primarily those of a naturalized gender, but instead the semiotic or rhetorical structure of the media through which any naturalization flows. The media are not likely, in this vision, to end any gender regimes—but recognizing that such regimes are contingent on representation and the manipulation of signs has never been the sole property of the Left. That manipulation implies a constrained, rather than an absolute, understanding of freedom. This constraint is an important theoretical element of the Alt Right, and in some sense they are correct to call on Hegel for it. Their thinking wavers—again, ironically—between essentialism about things like gender and race, and an understanding of constraint as primarily constituted by the media.

    Kantbot mixes his andrism and his media critique seamlessly. The trolls have some of their deepest roots in internet misogyny, including so-called Men’s Rights Activism and the hashtag #redpill. The red pill that Neo takes in The Matrix to exit the collective illusion is here compared to “waking up” from the “culturally Marxist” feminism that inflects the putative communism that pervades contemporary US culture. Here is Kantbot’s version:

    The tweet elides any difference between corporate diversity culture and the Left feminism that would also critique it, but that is precisely the point. Irony does not undermine (it rather bolsters) serious misogyny. When Angela Nagle’s book, Kill All Normies: Online Culture Wars from 4Chan and Tumblr to Trump and the Alt-Right, touched off a seemingly endless Left-on-Left hot-take war, Kantbot responded with his own review of the book (since taken down). This review contains a plea for a “nuanced” understanding of Elliot Rodger, who killed six people in Southern California in 2014 as “retribution” for women rejecting him sexually.[xii] We can’t allow (justified) disgust at this kind of content to blind us to the ongoing irony—not jokes, not wit, not snark—that enables this vile ideology. In many ways, the irony that persists in the heart of this darkness allows Kantbot and his ilk to take the Left more seriously than the Left takes the Right. Gender is a crucial, but hardly the only, arena in which the Alt Right’s combination of essentialist ontology and media irony is fighting the intellectual Left.

    In the sub-subculture known as Men Going Their Own Way, or MGTOW, the term “volcel” came to prominence in recent years. “Volcel” means “voluntarily celibate,” or entirely ridding one’s existence of the need for or reliance on women. The trolls responded to this term with the notion of an “incel,” someone “involuntarily celibate,” in a characteristically self-deprecating move. Again, this is irony: none of the trolls actually want to be celibate, but they claim a kind of joy in signs by recoding the ridiculous bitterness of the Volcel.

    Literalizing the irony already partly present in this discourse, sometime in the fall of 2016 the trolls started calling the Left—in particular the members of the podcast team Chapo Trap House and the journalist and cultural theorist Sam Kriss (since accused of sexual harassment)—“ironycels.” The precise definition wavers, but seems to be that the Leftists are failures at irony, “irony-celibate,” even “involuntarily incapable of irony.”

    Because the original phrase is split between voluntary and involuntary, this has given rise to reappropriations, for example Kriss’s, in which “doing too much irony” earns you literal celibacy.

    Kantbot has commented extensively, both in articles and on podcasts, on this controversy. He and Kriss have even gone head-to-head.[xiii]

    In the ironycel debate, it has become clear that Kantbot thinks that socialism has kneecapped the Left, but only sentimentally. The same goes for actual conservatism, which has prevented the Right from embracing its new counterculture. Leaving behind old ideologies is a symptom of standing at the vanguard of a civilizational shift. It is that shift that makes sense of the phrase “Trump will Complete the System of German Idealism.”

    The Left, LogoDaedalus intoned on a podcast, is “metaphysically stuck in the Bush era.” I take this to mean that the Left is caught in an endless cycle of recriminations about the neoliberal model of politics, even as that model has begun to become outdated. Kantbot writes, in an article called “Chapo Traphouse Will Never Be Edgy”:

    Capturing the counterculture changes nothing, it is only by the diligent and careful application of it that anything can be changed. Not politics though. When political ends are selected for aesthetic means, the mismatch spells stagnation. Counterculture, as part of culture, can only change culture, nothing outside of that realm, and the truth of culture which is to be restored and regained is not a political truth, but an aesthetic one involving the ultimate truth value of the narratives which pervade our lived social reality. Politics are always downstream. (Kantbot 2017a)

    Citing Breitbart’s motto, Kantbot argues that continents of theory separate him and LogoDaedalus from the Left. That politics is downstream from culture is precisely what Marx—and by extension, the contemporary Left—could not understand. On several recent podcasts, Kantbot has made just this argument, that the German Enlightenment struck a balance between the “vitality of aesthetics” and political engagement that the Left lost in the generation after Hegel.

    Kantbot has decided, against virtually every Hegel reader since Hegel and even against Hegel himself, that the system of German Idealism is ironic in its deep structure. It’s not a move we can afford to take lightly. This irony, generalized as Schlegel would have it, manipulates the formal and meta settings of communicative situations and thus is at the incipient point of any solidarity. It gathers community through mediation even as it rejects those not in the know. It sits at the membrane of the filter bubble, and—correctly used—has the potential to break or reform the bubble. To be clear, I am not saying that Kantbot has done this work. It is primarily Donald Trump, according to Kantbot’s own argument, who has done this work. But this is exactly what it means to play Hegel to Trump’s Napoleon: to provide the metaphysics for the historical moment, which happens to be the moment where social media and politics combine. Philosophy begins only after an early-morning sleepless tweetstorm once again determines a news cycle. Irony takes its proper place, as Schlegel had suggested, in human history, becoming a political weapon meant to manipulate communication.

    Kantbot was the media theorist of Trump’s ironic moment. The channeling of affect is irreducible, but not unchangeable: this is both the result of some steps we can only wish we’d taken in theory and used in politics before the Alt Right got there, and the actual core of what we might call Alt Right Media Theory. When they say “the Left can’t meme,” in other words, they’re accusing the socialist Left of being anti-intellectual about the way we communicate now, about the conditions and possibilities of social media’s amplifications of the capacity called irony that is baked into cognition and speech so deeply that we can barely define it even partially. That would match the sense of medium we get from looking at Shannon again, and the raw material possibility with which Schlegel infused the notion of irony.

    This insight, along with its political activation, might have been the preserve of Western Marxism or the other critical theories that succeeded it. Why have we allowed the Alt Right to pick up our tools?

    Kantbot takes obvious pleasure in the irony of using poststructuralist tools, and claiming in a contrarian way that they really derive from a broadly construed German Enlightenment that includes Romanticism and Idealism. Irony constitutes both that Enlightenment itself, on this reading, and the attitude towards it on the part of the content-producers, the German Idealist Trolls. It doesn’t matter if Breitbart was right about the Frankfurt School, or if the Neoreactionaries are right about capitalism. They are not practicing what Hegel called “representational thinking,” in which the goal is to capture a picture of the world that is adequate to it. They are practicing a form of conceptual thinking, which in Hegel’s terms is that thought that is embedded in, constituted by, and substantially active within the causal chain of substance, expression, and history.[xiv] That is the irony of Hegel’s reincarnation after the end of history.

    In media analysis and rhetorical analysis, we often hear the word “materiality” used as a substitute for durability, something that is not easy to manipulate. What is material, it is implied, is a stabilizing factor that allows us to understand the field of play in which signification occurs. Dean’s analysis of the Iraq War does just this, showing the relationship of signs and politics that undermines the aspirational content of political speech in neoliberalism. It is a crucial move, and Dean’s analysis remains deeply informative. But its type—and even the word “material,” used in this sense—is, not to put too fine a point on it, neo-Kantian: it seeks conditions and forms that undergird spectra of possibility. To this the Alt Right has lodged a Hegelian eppur si muove, borrowing techniques that were developed by Marxists and poststructuralists and German Idealists, and remaking the world of mediated discourse. That is a political emergency in which the humanities have a special role to play—but only if we can dispense with political and academic in-fighting and turn our focus to our opponents. What Mark Fisher once called the “Vampire Castle” of the Left on social media is its own kind of constraint on our progress (Fisher 2013). One solvent for it is irony in the expanded field of social media—not jokes, not snark, but dedicated theoretical investigation and exploitation of the rhetorical features of our systems of communication. The situation of mediated communication is part of the objective conjuncture of the present, one that the humanities and the Left cannot afford to ignore, and cannot avoid by claiming not to participate. The alternative to engagement is to cede the understanding, and quite possibly the curve, of civilization to the global Alt Right.

    _____

    Leif Weatherby is Associate Professor of German and founder of the Digital Theory Lab at NYU. He is working on a book about cybernetics and German Idealism.


    _____

    Notes
    [i] Video here. The comment thread on the video generated a series of unlikely slogans for 2020: “MAKE TRANSCENDENTAL IDENTITY GREAT AGAIN,” “Make German Idealism real again,” and the ideological non sequitur “Make dialectical materialism great again.”

    [ii] Neiwert (2017) tracks the rise of extreme Right violence and media dissemination from the 1990s to the present, and is particularly good on the ways in which these movements engage in complex “double-talk” and meta-signaling techniques, including irony in the case of the Pepe meme.

    [iii] I’m going to use this term throughout, and refer readers to Chip Berlet’s useful resource. I’m hoping this article builds on a kind of loose consensus that the Alt Right “talks out of both sides of its mouth,” perhaps best crystallized in the term “dog whistle.” Since 2016, we’ve seen a lot of regular whistling, bigotry without disguise, alongside the rise of the type of irony I’m analyzing here.

    [iv] There is, in this wing of the Online Right, a self-styled “autism” that stands for being misunderstood and isolated.

    [v] Thanks to Moira Weigel for a productive exchange on this point.

    [vi] See the excellent critique of object-oriented ontologies on the basis of their similarities with object-oriented programming languages in Galloway 2013. Irony is precisely the condition that does not reproduce code representationally, but instead shares a crucial condition with it.

    [vii] The paper is a point of inspiration and constant return for Friedrich Kittler, who uses this diagram to demonstrate the dependence of culture on media, which, as his famous quip goes, “determine our situation.” Kittler 1999, xxxix.

    [viii] This kind of redundancy is conceptually separate from signal redundancy, like the strengthening or reduplicating of electrical impulses in telegraph wires. The latter redundancy is likely the first that comes to mind, but it is not the only kind Shannon theorized.

    [ix] This is because Shannon adopts Ludwig Boltzmann’s probabilistic formula for entropy. The formula certainly suggests the slow simplification of material structure, but this is irrelevant to the communications engineering problem, which exists only so long as there are the very complex structures called humans and their languages and communications technologies.

    [x] Shannon presented these findings at one of the later Macy Conferences, the symposia that founded the movement called “cybernetics.” For an excellent account of what Shannon called “Printed English,” see Liu 2010, 39-99.

    [xi] The disavowal follows Freud’s famous “kettle logic” fairly precisely. In describing disavowal of unconscious drives unacceptable to the ego and its censor, Freud used the example of a friend who returns a borrowed kettle broken, and goes on to claim that 1) it was undamaged when he returned it, 2) it was already damaged when he borrowed it, and 3) he never borrowed it in the first place. Zizek often uses this logic to analyze political events, as in Zizek 2005. Its ironic structure usually goes unremarked.

    [xii] Kantbot, “Angela Nagle’s Wild Ride,” http://thermidormag.com/angela-nagles-wild-ride/, visited August 15, 2017—link currently broken.

    [xiii] Kantbot does in fact write fiction, almost all of which is science-fiction-adjacent retoolings of narrative from German Classicism and Romanticism. The best example is his reworking of E.T.A. Hoffmann’s “A New Year’s Eve Adventure,” “Chic Necromancy,” Kantbot 2017c.

    [xiv] I have not yet seen a use of Louis Althusser’s distinction between representation and “theory” (which relies on Hegel’s distinction) on the Alt Right, but it matches their practice quite precisely.

    _____

    Works Cited

    • Beckett, Andy. 2017. “Accelerationism: How a Fringe Philosophy Predicted the Future We Live In.” The Guardian (May 11).
    • Behler, Ernst. 1990. Irony and the Discourse of Modernity. Seattle: University of Washington.
    • Berkowitz, Bill. 2003. “ ‘Cultural Marxism’ Catching On.” Southern Poverty Law Center.
    • Breitbart, Andrew. 2011. Righteous Indignation: Excuse Me While I Save the World! New York: Hachette.
    • Burton, Tara. 2016. “Apocalypse Whatever: The Making of a Racist, Sexist Religion of Nihilism on 4chan.” Real Life Mag (Dec 13).
    • Costello, Jef. 2017. “Trump Will Complete the System of German Idealism!” Counter-Currents Publishing (Mar 10).
    • de Man, Paul. 1996. “The Concept of Irony.” In de Man, Aesthetic Ideology. Minneapolis: University of Minnesota. 163-185.
    • Dean, Jodi. 2005. “Communicative Capitalism: Circulation and the Foreclosure of Politics.” Cultural Politics 1:1. 51-74.
    • Drucker, Johanna. 2018. The General Theory of Social Relativity. Vancouver: The Elephants.
    • Feldman, Brian. 2017. “The ‘Ironic’ Nazi is Coming to an End.” New York Magazine.
    • Fisher, Mark. 2009. Capitalist Realism: Is There No Alternative? London: Zer0.
    • Fisher, Mark. 2013. “Exiting the Vampire Castle.” Open Democracy (Nov 24).
    • Floridi, Luciano. 2010. Information: A Very Short Introduction. Oxford: Oxford.
    • Galloway, Alexander. 2013. “The Poverty of Philosophy: Realism and Post-Fordism.” Critical Inquiry 39:2. 347-66.
    • Goerzen, Matt. 2017. “Notes Towards the Memes of Production.” texte zur kunst (Jun).
    • Gray, Rosie. 2017. “Behind the Internet’s Dark Anti-Democracy Movement.” The Atlantic (Feb 10).
    • Haider, Shuja. 2017. “The Darkness at the End of the Tunnel: Artificial Intelligence and Neoreaction.” Viewpoint Magazine.
    • Hayles, N. Katherine. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.
    • Higgins, Richard. 2017. “POTUS and Political Warfare.” National Security Council Memo.
    • Huyssen, Andreas. 2017. “Breitbart, Bannon, Trump, and the Frankfurt School.” Public Seminar (Sep 28).
    • Jay, Martin. 2011. “Dialectic of Counter-Enlightenment: The Frankfurt School as Scapegoat of the Lunatic Fringe.” Salmagundi 168/169 (Fall 2010-Winter 2011). 30-40. Excerpt at Canisa.Org.
    • Kantbot (as Edward Waverly). 2017a. “Chapo Traphouse Will Never Be Edgy.”
    • Kantbot. 2017b. “All the Techcomm Blogger’s Men.” Medium.
    • Kantbot. 2017c. “Chic Necromancy.” Medium.
    • Kittler, Friedrich. 1999. Gramophone, Film, Typewriter. Translated by Geoffrey Winthrop-Young and Michael Wutz. Stanford: Stanford University Press.
    • Liu, Alan. 2004. “Transcendental Data: Toward a Cultural History and Aesthetics of the New Encoded Discourse.” Critical Inquiry 31:1. 49-84.
    • Liu, Lydia. 2010. The Freudian Robot: Digital Media and the Future of the Unconscious. Chicago: University of Chicago Press.
    • Marwick, Alice and Rebecca Lewis. 2017. “Media Manipulation and Disinformation Online.” Data & Society.
    • Milner, Ryan. 2016. The World Made Meme: Public Conversations and Participatory Media. Cambridge: MIT.
    • Neiwert, David. 2017. Alt-America: The Rise of the Radical Right in the Age of Trump. New York: Verso.
    • Noys, Benjamin. 2014. Malign Velocities: Accelerationism and Capitalism. London: Zer0.
    • Phillips, Whitney and Ryan M. Milner. 2017. The Ambivalent Internet: Mischief, Oddity, and Antagonism Online. Cambridge: Polity.
    • Phillips, Whitney. 2016. This is Why We Can’t Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture. Cambridge: The MIT Press.
    • Quintilian. 1920. Institutio Oratoria, Book VIII, section 6, 53-55.
    • Schlegel, Friedrich. 1958–. Kritische Friedrich-Schlegel-Ausgabe. Vol. II. Edited by Ernst Behler, Jean Jacques Anstett, and Hans Eichner. Munich: Schöningh.
    • Shannon, Claude, and Warren Weaver. 1964. The Mathematical Theory of Communication. Urbana: University of Illinois Press.
    • Stone, Biz. 2009. “Retweet Limited Rollout.” Press release. Twitter (Nov 6).
    • Walsh, Michael. 2017. The Devil’s Pleasure Palace: The Cult of Critical Theory and the Subversion of the West. New York: Encounter Books.
    • Winter, Jana and Elias Groll. 2017. “Here’s the Memo that Blew Up the NSC.” Foreign Policy (Aug 10).
    • Žižek, Slavoj. 1993. Tarrying with the Negative: Kant, Hegel and the Critique of Ideology. Durham: Duke University Press.
    • Žižek, Slavoj. 2005. Iraq: The Borrowed Kettle. New York: Verso.

     

  • Michelle Moravec — The Endless Night of Wikipedia’s Notable Woman Problem

    Michelle Moravec — The Endless Night of Wikipedia’s Notable Woman Problem

    Michelle Moravec

    Millions of the sex whose names were never known beyond the circles of their own home influences have been as worthy of commendation as those here commemorated. Stars are never seen either through the dense cloud or bright sunshine; but when daylight is withdrawn from a clear sky they tremble forth. (Hale 1853, ix)

    As this poetic quote by Sarah Josepha Hale, nineteenth-century author and influential editor, reminds us, context is everything.   The challenge, if we wish to write women back into history via Wikipedia, is to figure out how to shift the frame of reference so that our stars can shine, since the problem of who precisely is “worthy of commemoration” so often seems to exclude women.  This essay takes on one of the “tests” used to determine whether content is worthy of inclusion in Wikipedia, notability, to explore how the purportedly neutral concept works against efforts to create entries about female historical figures.

    According to Wikipedia’s notability guideline, a subject is considered notable if it “has received significant coverage in reliable sources that are independent of the subject” (“Wikipedia:Notability” 2017). To a historian of women, the gender biases implicit in these criteria are immediately recognizable; for most of written history, women were de facto considered unworthy of consideration (Smith 2000). Unsurprisingly, studies have pointed to varying degrees of bias in coverage of female figures in Wikipedia compared to male figures. One study of Encyclopedia Britannica and Wikipedia concluded,

    Overall, we find evidence of gender bias in Wikipedia coverage of biographies. While Wikipedia’s massive reach in coverage means one is more likely to find a biography of a woman there than in Britannica, evidence of gender bias surfaces from a deeper analysis of those articles each reference work misses. (Reagle and Rhue 2011)

    Five years later, another study found that this bias persisted: women constituted only 15.5 percent of the biographical entries on the English Wikipedia, and for women born prior to the 20th century the problem of exclusion was wildly exacerbated by “sourcing and notability issues” (“Gender Bias on Wikipedia” 2017).

    One potential source for buttressing the case of notable women has been identified by literary scholar Alison Booth.  Booth identified more than 900 volumes of prosopography published during what might be termed the heyday of the genre, 1830-1940, when the rise of the middle class and increased literacy combined with relatively cheap production of books to make such volumes both practicable and popular (Booth 2004). Booth also points out that, lest we consign the genre to the realm of mere curiosity, the volumes were “indispensable aids in the formation of nationhood” (Booth 2004, 3).

    To reveal the historical contingency of the purportedly neutral criterion of notability, I utilized longitudinal data compiled by Booth, which reveals that notability has never been the stable concept Wikipedia’s standards take it to be. Since notability alone cannot explain which women make it into Wikipedia, I then turn to a methodology first put forth by historian Mary Ritter Beard in her critique of the Encyclopedia Britannica to identify missing entries (Beard 1977). Utilizing Notable American Women as a reference corpus, I calculated the inclusion of individual women from those volumes in Wikipedia (Boyer and James 1971). In this essay I extend that analysis to consider the difference between notability and notoriety from a historical perspective. One might be well known while remaining relatively unimportant historically. Such distinctions are collapsed in Wikipedia, which assumes that a body of writing about a historical subject stands as prima facie evidence of notability.

    While inclusion in Notable American Women does not necessarily translate into presence in Wikipedia, looking at the categories of women that have higher rates of inclusion offers insights into how female historical figures do succeed in Wikipedia. My analysis suggests that the criterion of notability restricts the women who succeed in obtaining pages in Wikipedia to those who mirror the “Great Man Theory” of history (Mattern 2015) or are “notorious” (Lerner 1975).

    Alison Booth has compiled a list of the most frequently mentioned women in a subset of female prosopographical volumes and tracked their frequency over time (2004, 394–396). She made this data available on the web, allowing for the creation of Figure 1, which focuses on the inclusion of US historical figures in volumes published from 1850 to 1930.

    Figure 1. US women by publication date of books that included them (image source: author)

    This chart clarifies what historians already know: notability is historically specific and contingent. For example, Mary Washington, mother of the first president, is notable in the nineteenth century but not in the twentieth. She drops off because over time, motherhood alone ceases to be seen as a significant contribution to history.  Wives of presidents remain quite popular, perhaps because they were at times understood as playing an important political role, so Mary Washington’s daughter-in-law Martha still appears in some volumes in the latter period. A similar pattern may be observed for foreign missionary Anne Hasseltine Judson in the twentieth century.  The novelty of female foreign missionaries like Judson faded as more women entered the field.  Other figures, like Laura Bridgman, “the first deaf-blind American child to gain a significant education in the English language,” were supplanted by later figures in what might be described as the “one and done” syndrome, where only a single spot is allotted for a specific kind of notable woman (“Laura Bridgman” 2017). In this case, Bridgman likely fell out of favor as Helen Keller’s fame rose.

    Although their notability changed over time, all the women depicted in Figure 1 have Wikipedia pages; this is unsurprising, as they were among the most mentioned women in the sort of volumes Wikipedia considers “reliable sources.” But what about more contemporary examples? Does inclusion in a relatively recent work that declares women notable mean that these women would meet Wikipedia’s notability standards? To answer this question, I relied on a methodology of calculating missing biographies in Wikipedia, utilizing a reference corpus to identify women who might reasonably be expected to appear in Wikipedia and to calculate the percentage that do not. Working with the digitized copy of Notable American Women in the Women and Social Movements database, I compiled a missing biographies quotient for individuals in selected sections of the “classified list of biographies” that appears at the end of the third volume of Notable American Women. The eleven categories with no missing entries offer some insights into how women do succeed in Wikipedia (Table 1); a sketch of the calculation follows the table.

    Classification % missing
    Astronomers 0
    Biologists 0
    Chemists & Physicists 0
    Heroines 0
    Illustrators 0
    Indian Captives 0
    Naturalists 0
    Psychologists 0
    Sculptors 0
    Wives of Presidents 0

    Table 1. Classifications from Notable American Women with no missing biographies in Wikipedia
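
    The quotient calculation itself is simple enough to sketch. The inputs here are hypothetical stand-ins: the study worked from the digitized classified list in Notable American Women and checked each name against Wikipedia.

        # Sketch of the missing-biographies quotient. Hypothetical inputs only.
        def missing_quotient(names_by_category, has_wikipedia_page):
            """Percentage of names in each category with no Wikipedia page."""
            quotients = {}
            for category, names in names_by_category.items():
                missing = [name for name in names if name not in has_wikipedia_page]
                quotients[category] = round(100 * len(missing) / len(names), 1)
            return quotients

        # Illustrative data only (three of the social workers from Table 2 below):
        corpus = {
            "Social workers": [
                "Richmond, Mary Ellen",
                "Breckinridge, Sophonisba Preston",
                "Dinwiddie, Emily Wayland",
            ]
        }
        pages = {"Richmond, Mary Ellen", "Breckinridge, Sophonisba Preston"}

        print(missing_quotient(corpus, pages))  # {'Social workers': 33.3}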

    Characteristics that are highly predictive of success in Wikipedia for women include association with a powerful man, as in the wives of presidents, and recognition in a male-dominated field of science, social science, or art. Additionally, extraordinary women, such as heroines, and those who are quite rare, such as Indian captives, have a greater chance of success in Wikipedia.[1]

    Further analysis of the classifications with greater proportions of missing women reflects Gerda Lerner’s complaint that the history of notable women is the story of exceptional or deviant women (Lerner 1975).  “Social worker,” which has the highest percentage of missing biographies at 67%, illustrates that individuals associated with female-dominated endeavors are less likely to be considered notable unless they rise to a level of exceptionalism (Table 2).

    Name | Included?
    Dinwiddie, Emily Wayland | no
    Glenn, Mary Willcox Brown | no
    Kingsbury, Susan Myra | no
    Lothrop, Alice Louise Higgins | no
    Pratt, Anna Beach | no
    Regan, Agnes Gertrude | no
    Breckinridge, Sophonisba Preston | page
    Richmond, Mary Ellen | page
    Smith, Zilpha Drew | stub

    Table 2. Social Workers from Notable American Women by inclusion in Wikipedia

    Sophonisba Preston Breckinridge’s Wikipedia entry describes her as “an American activist, Progressive Era social reformer, social scientist and innovator in higher education” who was also “the first woman to earn a Ph.D. in political science and economics then the J.D. at the University of Chicago, and she was the first woman to pass the Kentucky bar” (“Sophonisba Breckinridge” 2017). While the page points out that “She led the process of creating the academic professional discipline and degree for social work,” her page is not linked to the category of American social workers (“Category:American Social Workers” 2015). If a female historical figure isn’t as exceptional as Breckinridge, she needs to be a “first,” like Mary Ellen Richmond, who makes it into Wikipedia as the “social work pioneer” (“Mary Richmond” 2017).

    This conclusion that being a “first” facilitates success in Wikipedia is supported by analysis of the classification of nurses. Of the ten nurses who have Wikipedia entries, 80% are credited with some sort of temporally marked achievement, generally a first or pioneering role (Table 3).

    Individual | Was she a first? | Was she a participant in a male-dominated historical event? | Was she a founder?
    Delano, Jane Arminda | leading pioneer | World War I | founder of the American Red Cross Nursing Service
    Fedde, Sister Elizabeth* | | | established the Norwegian Relief Society
    Maxwell, Anna Caroline | pioneering activities | Spanish-American War |
    Nutting, Mary Adelaide | world’s first professor of nursing | World War I | founded the American Society of Superintendents of Training Schools for Nurses
    Richards, Linda | first professionally trained American nurse, pioneering modern nursing in the United States | No | pioneered the founding and superintending of nursing training schools across the nation
    Robb, Isabel Adams Hampton | early leader (held many “first” positions) | No | helped to found … the National League for Nursing, the International Council of Nurses, and the American Nurses Association
    Stimson, Julia Catherine | first woman to attain the rank of Major | World War I |
    Wald, Lillian D. | coined the term “public health nurse” & the founder of American community nursing | No | founded Henry Street Settlement
    Mahoney, Mary Eliza | first African American to study and work as a professionally trained nurse in the US | No | co-founded the National Association of Colored Graduate Nurses
    Thoms, Adah B. Samuels | | World War I | co-founded the National Association of Colored Graduate Nurses

    * Fedde appears in Wikipedia primarily as a Norwegian Lutheran Deaconess. The word “nurse” does not appear on her page.

    Table 3. Nurses from Notable American Women with Wikipedia entries, by type of achievement

    As the entries for nurses reveal, several factors beyond being first work in a female subject’s favor in achieving success in Wikipedia. Nurses who founded an institution or organization, or who participated in a male-dominated event already recognized as historically significant, such as war, were more successful than those who did not.

    If distinguishing oneself, by being “first” or founding something, as part of a male-dominated event facilitates higher levels of inclusion in Wikipedia for women in female-dominated fields, do these factors also explain how women from classifications that are not female-dominated succeed? Looking at labor leaders, it appears these factors can offer only a partial explanation (Table 4).

    Individual | Was she a first? | Was she a participant in a male-dominated historical event? | Was she a founder? | Description from Wikipedia
    --- | --- | --- | --- | ---
    Bagley, Sarah G. | “probably the first” | No | formed the Lowell Female Labor Reform Association | headed up the female department of a newspaper until fired because “a female department … would conflict with the opinions of the mushroom aristocracy … and beside it would not be dignified”
    Barry, Leonora Marie Kearney | “only woman,” “first woman” | Knights of Labor | — | “difficulties faced by a woman attempting to organize men in a male-dominated society. Employers also refused to allow her to investigate their factories.”
    Bellanca, Dorothy Jacobs | “first full-time female organizer” | No | organized the Baltimore buttonhole makers into Local 170 of the United Garment Workers of America; one of four women who attended the founding convention of the Amalgamated Clothing Workers of America | “men resented” her
    Haley, Margaret Angela | “pioneer leader” | No | No | dubbed the “lady labor slugger”
    Jones, Mary Harris | No | Knights of Labor, IWW | — | “most dangerous woman in America”
    Nestor, Agnes | No | Women’s Trade Union League | founded the International Glove Workers Union | —
    O’Reilly, Leonora | No | Women’s Trade Union League | founded the Wage Earners Suffrage League | “O’Reilly as a public speaker was thought to be out of place for women at this time in New York’s history.”
    O’Sullivan, Mary Kenney | the first woman the AFL employed | Women’s Trade Union League | founder of the Women’s Trade Union League | —
    Stevens, Alzina Parsons | first probation officer | Knights of Labor | — | —

    Table 4. Classifications from Notable American Women with no missing biographies in Wikipedia

    In addition to being a “first” or founding something, two other variables emerge from the analysis of labor leaders that predict success in Wikipedia.  One is quite heartening: affiliation with the Women’s Trade Union League (WTUL), a significant female-dominated historical organization, seems to translate into greater recognition as historically notable.  Less optimistically, it also appears that what Lerner labeled as “notorious” behavior predicts success: six of the nine women were included for a wide range of reasons, from speaking out publicly to advocating resistance.

    The conclusions here can be spun two ways. If we want to get women into Wikipedia, to surmount the obstacle of notability, we should write about women who fit well within the great man school of history. This could be reinforced within the architecture of Wikipedia by creating links within a woman’s entry to men and significant historical events, while also making sure that the entry emphasizes a woman’s “firsts” and her institutional ties. Following these practices will make an entry more likely to overcome challenges and provide a defense against proposed deletion.  On the other hand, these are narrow criteria for meeting notability that will likely not encompass a wide range of female figures from the past.

    The larger question remains: should we bother to work in Wikipedia at all? (Raval 2014). Wikipedia’s content is biased not only by gender, but also by race and region (“Racial Bias on Wikipedia” 2017). A concrete example of this intersectional bias can be seen in the fact that “only nine of Haiti’s 37 first ladies have Wikipedia articles, whereas all 45 first ladies of the United States have entries” (Frisella 2017). Critics have also pointed to the devaluation of Indigenous forms of knowledge within Wikipedia (Senier 2014; Gallart and van der Velden 2015).

    Wikipedia, billed as “the encyclopedia anyone can edit” and purporting to offer “the sum of all human knowledge,” is notorious for achieving neither goal. Wikipedia’s content suffers from systemic bias related to the unbalanced demographics of its contributor base (Wikipedia, 2004, 2009c). I have highlighted here disparities in gendered content, which parallel the well-documented gender biases against female contributors (“Wikipedia:WikiProject Countering Systemic Bias” 2017). The average editor of Wikipedia is white, from Western Europe or the United States, between 30 and 40 years old, and overwhelmingly male. Furthermore, “super users” contribute most of Wikipedia’s content. A 2014 analysis revealed that “the top 5,000 article creators on English Wikipedia have created 60% of all articles on the project. The top 1,000 article creators account for 42% of all Wikipedia articles alone.” A study of a small sample of these super users revealed that they are not writing about women: “The amount of these super page creators only exacerbates the [gender] problem, as it means that the users who are mass-creating pages are probably not doing neglected topics, and this tilts our coverage disproportionately towards male-oriented topics” (Hale 2014). For example, the “List of Pornographic Actresses” on Wikipedia is lengthier and more actively edited than the “List of Female Poets” (Kleeman 2015).

    The hostility within Wikipedia against female contributors remains a significant barrier to altering its content since the major mechanism for rectifying the lack of entries about women is to encourage women to contribute them (New York Times 2011; Peake 2015; Paling 2015).   Despite years of concerted efforts to make Wikipedia more hospitable toward women, to organize editathons, and place Wikipedians in residencies specifically designed to add women to the online encyclopedia, the results have been disappointing (MacAulay and Visser 2016; Khan 2016). Authors of a recent study of  “Wikipedia’s infrastructure and the gender gap” point to “foundational epistemologies that exclude women, in addition to other groups of knowers whose knowledge does not accord with the standards and models established through this infrastructure” which includes “hidden layers of gendering at the levels of code, policy and logics” (Wajcman and Ford 2017).

    Among these policies is the way notability is implemented to determine whether content is worthy of inclusion. The issues I raise here are not new; Adrianne Wadewitz, an early and influential feminist Wikipedian, noted in 2013 that “A lack of diversity amongst editors means that, for example, topics typically associated with femininity are underrepresented and often actively deleted” (Wadewitz 2013). Wadewitz pointed to efforts to delete articles about Kate Middleton’s wedding gown, as well as the speedy nomination for deletion of an entry for reproductive rights activist Sandra Fluke. Both pages survived, Wadewitz emphasized, reflecting the way in which Wikipedia guidelines develop through practice, despite their ostensible stability.

    This is important to remember – Wikipedia’s policies, like everything on the site, evolves and changes as the community changes. … There is nothing more essential than seeing that these policies on Wikipedia are evolving and that if we as feminists and academics want them to evolve in ways we feel reflect the progressive politics important to us, we must participate in the conversation. Wikipedia is a community and we have to join it. (Wadewitz 2013)

    While I have offered some pragmatic suggestions here about how to surmount the notability criteria in Wikipedia, I want to close by echoing Wadewitz’s sentiment that the greater challenge must be to question how notability is implemented in Wikipedia praxis.

    _____

    Michelle Moravec is an associate professor of history at Rosemont College.

    Back to the essay

    _____

    Notes

    [1] Seven of the eleven categories in my study with fewer than ten individuals have no missing individuals.

    _____

    Works Cited

  • Joseph Erb, Joanna Hearne, and Mark Palmer with Durbin Feeling — Origin Stories in the Genealogy of Cherokee Language Technology

    Joseph Erb, Joanna Hearne, and Mark Palmer with Durbin Feeling — Origin Stories in the Genealogy of Cherokee Language Technology

    Joseph Erb, Joanna Hearne, and Mark Palmer with Durbin Feeling [*]

    The intersection of digital studies and Indigenous studies encompasses both the history of Indigenous representation on various screens, and the broader rhetorics of Indigeneity, Indigenous practices, and Indigenous activism in relation to digital technologies in general. Yet the surge of critical work in digital technology and new media studies has rarely acknowledged the centrality of Indigeneity to our understanding of systems such as mobile technologies, major programs such as Geographic Information Systems (GIS), digital aesthetic forms such as animation, or structural and infrastructural elements of hardware, circuitry, and code. This essay on digital Indigenous studies reflects on the social, historical, and cultural mediations involved in Indigenous production and uses of digital media by exploring moments in the integration of the Cherokee syllabary onto digital platforms. We focus on negotiations between the Cherokee Nation’s goal to extend their language and writing system, on the one hand, and the systems of standardization upon which digital technologies depend, such as Unicode, on the other.  The Cherokee syllabary is currently one of the most widely available North American Indigenous language writing systems on digital devices. As the language has become increasingly endangered, the Cherokee Nation’s revitalization efforts have expanded to include the embedding of the Cherokee syllabary in the Windows Operating System, Google search engine, Gmail, Wikipedia, Android, iPhone and Facebook.

    Figure 1. Wikipedia in Cherokee

    With the successful integration of the syllabary onto multiple platforms, the digital practices of Cherokees suggest the advantages and limitations of digital technology for Indigenous cultural and political survivance (Vizenor 2000).

    Our collaboration has resulted in a multi-voiced analysis across several essay sections. Hearne describes the ways that engaging with specific problems and solutions around “glitches” at the intersection of Indigenous and technological protocols opens up issues in the larger digital turn in Indigenous studies. Joseph Erb (Cherokee) narrates critical moments in the adoption of the Cherokee syllabary onto digital devices, drawn from his experience leading this effort at the Cherokee Nation language technology department. Connecting our conceptual work with community history, we include excerpts from an interview with Cherokee linguist Durbin Feeling—author of the Cherokee-English Dictionary and Erb’s close collaborator—about the history, challenges, and possibilities of Cherokee language technology use and experience. In the final section, Mark Palmer (Kiowa) presents an “indigital” framework to describe a range of possibilities in the amalgamations of Indigenous and technological knowledge systems (2009, 2012). Fragmentary, contradictory, and full of uncertainties, indigital constructs are hybrid and fundamentally reciprocal in orientation, both ubiquitous and at the same time very distant from the reality of Indigenous groups encountering the digital divide.

    Native to the Device

    Indigenous people have always been engaged with technological change. Indigenous metaphors for digital and networked space—such as the web, the rhizome, and the river—describe longstanding practices of mnemonic retrieval and communicative innovation using sign systems and nonlinear design (Hearne 2017). Jason Lewis describes the “networked territory” and “shared space” of digital media as something that has “always existed for Aboriginal people as the repository of our collected and shared memory. That hardware technology has made it accessible through a tactile regime in no way diminishes its power as a spiritual, cosmological, and mythical ‘realm’” (175). Cherokee scholar (and former programmer) Brian Hudson includes Sequoyah in a genealogy of Indigenous futurism as a representative of “Cherokee cyberpunk.” While retaining these scholars’ understanding of the technological sophistication and adaptability of Indigenous peoples historically and in the present, taking up a heuristic that recognizes the problems and disjunction between Indigenous knowledge and digital development also enables us to understand the challenges faced by communities encountering unequal access to computational infrastructures such as broadband, hardware, and software design. Tracing encounters between the medium specificity of digital devices and the specificity of Indigenous epistemologies returns us to the incommensurate purposes of the digital as both a tool for Indigenous revitalization and as a sociopolitical framework that makes users do things according to a generic pattern.

    The case of the localization of Cherokee on digital devices offers insights into the paradox around the idea of the “digital turn” explored in this b2o: An Online Journal special issue—that on the one hand, the digital turn “suggests that the objects of our world are becoming better versions of themselves. On the other hand, it suggests that these objects are being transformed so completely that they are no longer the things they were to begin with.” While the former assertion is reflected in the techno-positive orientation of much news coverage of the Cherokee adoption on the iPhone (Evans 2011) as well as other Indigenous initiatives such as video game production (Lewis 2014), the latter description of transformation beyond recognizable identity resembles the goals of various historical programs of assimilation, one of the primary “logics of elimination” that Patrick Wolfe identifies in his seminal essay on settler colonialism.

    The material, representational, and participatory elements of digital studies have particular resonance in Indigenous studies around issues of land, language, political sovereignty, and cultural practice. In some cases the digital realm hosts or amplifies the imperial imaginaries pre-existing in the mediascape, as Jodi Byrd demonstrates in her analyses of colonial narratives—narratives of frontier violence in particular—normalized and embedded in the forms and forums of video games (2015). Indigeneity is also central to the materialities of global digitality in the production and dispensation of the machines themselves. Internationally, Indigenous lands are mined for minerals to make hardware and targeted as sites for dumping used electronics. Domestically in the United States, Indigenous communities have provided the labor to produce delicate circuitry (Nakamura 2014), even as rural, remote Indigenous communities and reservations have been sites of scarcity for digital infrastructure access (Ginsburg 2008). Indigenous communities such as those in the Cherokee Nation are rightly on guard against further colonial incursions, including those that come with digital environments. Communities have concerns about language localization projects: how are we going to use this for our own benefit? If it’s not for our benefit, then why not compute in the colonial language? Are they going to steal our medicine? Is this a further erosion of what we have left?

    Lisa Nakamura (2013) has taken up the concept of the glitch as a way of understanding online racism, first as it is understood by some critics as a form of communicative failure or “glitch racism,” and second as the opposite, “not as a glitch but as part of the signal,” an “effect of internet on a technical level” that comprises “a discursive act in itself, not an obstruction to that act.”  In this article we offer another way of understanding the glitch as a window onto the obstacles, refusals, and accommodations that take place at an infrastructural level in Indigenous negotiations of the digital. Olga Goriunova and Alexei Shulgin define “glitch” as “an unpredictable change in the system’s behavior, when something obviously goes wrong” (2008, 110).

    A glitch is a singular dysfunctional event that allows insight beyond the customary, omnipresent, and alien computer aesthetics. A glitch is a mess that is a moment, a possibility to glance at software’s inner structure, whether it is a mechanism of data compression or HTML code. Although a glitch does not reveal the true functionality of the computer, it shows the ghostly conventionality of the forms by which digital spaces are organized. (114)

    Attending to the challenges that arise in Indigenous-settler negotiations of structural obstacles—the work-arounds, problem-solving, false starts, failures of adoption—reveals both the adaptations summoned forth by the standardization built into digital platforms and the ways that Indigenous digital activists have intervened in digital homogeneity. By making visible the glitches—ruptures and mediations of rupture—in the granular work of localizing Cherokee, we arrive again and again at the cultural and political crossroads where Indigenous boundaries become visible within infrastructures of settler protocol (Ginsburg 1991). What has to be done, what has to be addressed, before Cherokee speakers can use digital devices in their own language and their own writing system, and what do those obstacles reveal about the larger orientation of digital environments? In particular, new digital platforms channel adaptations towards the bureaucratization of language, dictating the direction of language change through conventions like abbreviations, sorting requirements, parental controls and autocorrect features.

    Within the framework of computational standardization, Indigenous distinctiveness—Indigenous sovereignty itself—becomes a glitch. We can see instantiations of such glitches arising from moments of politicized refusal, as defined by Mohawk scholar Audra Simpson’s insight that “a good is not a good for everyone” (1). Yet we can also see moments when Indigenous refusals “to stop being themselves” (2) lead to strategies of negotiation and adoption, and even, paradoxically, to a politics of accommodation (itself a form of agency) in the uptake of digital technologies. Michelle Raheja takes up the intellectual and aesthetic iterations of sovereignty to theorize Indigenous media production in terms of “visual sovereignty,” which she defines as “the space between resistance and compliance” within which Indigenous media-makers “revisit, contribute to, borrow from, critique, and reconfigure” film conventions, while still “operating within and stretching the boundaries of those same conventions” (1161). We suggest that like Indigenous self-representation on screen, Indigenous computational production occupies a “space between resistance and compliance,” a space which is both sovereigntist and, in its lived reality at the intersection of software standardization and Indigenous language precarity, glitchy.

    Our methodology, in the case study of Cherokee language technology development that follows, might be called “glitch retrieval.”  We focus on pulse points, moments, stories and small landmarks of adaptation, accommodation, and refusal in the adoption of Sequoyah’s Cherokee syllabary to mobile digital devices. In the face of the wave of publicity around digital apps (“there’s an app for that!”), the story of the Cherokee adoption is not one of appendage in the form of downloadable apps but rather the localization of the language as “native to the device.” Far from being a straightforward development, the process moved in fits and starts, beset with setbacks and surprises, delineating unique minority and endangered Indigenous language practices within majoritarian protocols. To return to Goriunova and Shulgin’s definition, we explore each glitch as an instance of “a mess” that is also “a moment, a possibility,” one that “allows insight” (2008). Each of the brief moments narrated below retrieves an intersection of problem and solution that reveals Indigenous presence as well as “the ghostly conventionality of the forms by which digital spaces are organized” (114). Retrieving the origin stories of Cherokee language technology—the stories of the glitches—gives us new ways to see both the limits of digital technology as it has been imagined and built within structures of settler colonialism, and the action and shape of Indigenous persistence through digital practices.

    Cherokee Language Technology and Mobile Devices

    Each generation is crucial to the survival of Indigenous languages. Adaptation, and especially adaptation to new technologies, is an important factor in Indigenous language persistence (Hermes et al 2016). The Cherokee, one of the largest of the Southeast tribes, were early adopters of language technologies, beginning with the syllabary writing system developed by Sequoyah between 1809 and 1820 and presented to the Cherokee Council in 1821. The circumstances of the development of the Cherokee syllabary are nearly unique in that 1) the writing system originated from the work of one man, in the space of a single decade; and 2) it was initiated and ultimately widely adopted from within the Indigenous community itself rather than being developed and introduced by non-Native missionaries, linguists, or other outsiders.

    Unlike alphabetic writing based on individual phonemes, a syllabary consists of written symbols indicating whole syllables, which can be more easily developed and learned than alphabetic systems due to the stability of each syllable sound. The Cherokee Syllabary system uses written characters that represent consonant and vowel sounds, such as “Ꮉ”, which is the sound of “ma,” and Ꮀ, for the sound “ho.” The original writing of Sequoyah was done with a quill and pen, an inking process that involved cursive characters, but this handwritten orthography gave way to a block print character set for the Cherokee printing press (Cushman 2011). The Cherokee Phoenix was the first Native American newspaper in the Americas, published in Cherokee and English beginning in 1828. Since then, Cherokee people have adapted their language and writing system early and often to new technologies, from typewriters to dot matrix printers. This historical adaptation includes a millennial transformation from technologies that required training to access machines like specially-designed typewriters with Cherokee characters, to the embedding of the syllabary as a standard feature on all platforms for commercially available computers and mobile devices. Very few Indigenous languages have this level of computational integration—in part because very few Indigenous languages have their own writing systems—and the historical moments we present here in the technologization of the Cherokee language illustrate both problems and possibilities of language diversity in standardization-dependent platforms. In the following section, we offer a community-based history of Cherokee language technology in stories of the transmission of knowledge between two generations—Cherokee linguist Durbin Feeling, who began teaching and adapting the language in the 1960s, and Joseph Erb, who worked on digital language projects starting in the early 2000s—focusing on shifts in the uptake of language technology.

    In the early and mid-twentieth century, churches in the Cherokee Nation were among the sites for teaching and learning Cherokee literacy. Durbin Feeling grew up speaking Cherokee at home, and learned to read the language as a boy by following along as his father read from the Cherokee New Testament. He became fluent in writing the language while serving in the US military in Vietnam, when he would read the Book of Psalms in Cherokee. His curiosity about the language grew as he continued to notice the differences between the written Cherokee usage of the 1800s—codified in texts like the New Testament—and the Cherokee spoken by his community in the 1960s. Beginning with the bilingual program at Northeastern University (translating syllabic writing into phonetic writing), Feeling worked on Cherokee language lessons and a Cherokee dictionary, for which he translated words from a Webster’s dictionary, on handwritten index cards, to a recorder. Feeling recalls that in the early 1970s,

    Back then they had reel to reel recorders and so I asked for one of those and talked to anybody and everybody and mixed groups, men and women, men with men, women with women. Wherever there were Cherokees, I would just walk up and say do you mind if I just kind of record while you were talking, and they didn’t have a problem with that. I filled up those reel to reel tapes, five of them….I would run it back and forth every word, and run it forward and back again as many times as I had to, and then I would hand write it on a bigger card.

    So I filled, I think, maybe about five of those in a shoe box and so all I did was take the word, recorded it, take the next word, recorded it, and then through the whole thing…

    There was times the churches used to gather and cook some hog meat, you know. It would attract the people and they would just stand around and joke and talk Cherokee. Women would meet and sew quilts and they’d have some conversations going, some real funny ones. Just like that, you know? Whoever I could talk with. So when I got done with that I went back through and noticed the different kinds of sounds…the sing song kind of words we had when we pronounced something (Erb and Feeling 2016).

    The project began with handwriting in syllabary, but the dictionary used phonetics with tonal markers, so Feeling went through each of five boxes of index cards again, labeling them with numbers to indicate the height of sounds and pitches.

    Feeling and his team experimented with various machines, including manual typewriters with syllabary keys (manufactured by the well-known Hermes typewriter company), new fonts using a dot matrix printer, and electric typewriters with Cherokee syllabary in the ball key—the typist had to memorize the location of all 85 keys. Early attempts to build computer programs allowing users to type in Cherokee resulted in documents that were confined to one computer and could not be easily shared except through printing documents.

    Figure 2. Typewriter keyboard in Cherokee (image source: authors)

    Beginning around 1990, a number of linguists and programmers with interests in Indigenous languages began working with the Cherokee, including Al Webster, who used Mac computers to create a program that, as Feeling described it, “introduced what you could do with fonts with a fontographer—he’s the one who made those fonts that were just like the old print, you know way back in the eighteen hundreds.” Then in the mid-1990s Michael Everson began working with Feeling and others to integrate Cherokee glyphs into Unicode, the primary system for software internationalization. Arising from discussions between engineers at Apple and Xerox, Unicode began in late 1987 as a project to standardize languages for computation. Although the original goal of Unicode was to encode all world writing systems, major languages came first. Michael Everson’s company Evertype has been critical to broader language inclusion, encoding minority and Indigenous languages such as Cherokee, which was added to the Unicode Standard in 1999 with the release of version 3.0.

    Having begun language work with handwritten index cards in the 1960s, and later typewriters available to only one or two people with specialized skills, Feeling saw Cherokee adopted into Unicode in 1999, and integrated into Apple computer operating systems in 2003. When Apple and the Cherokee Nation publicized the new localization of Cherokee on the 4.1 iPhone in December 2010, the story was picked up internationally, as well as locally among Cherokee communities. By 2013, users could text, email, and search Google in the syllabary on smartphones and laptops, devices that came with the language already embedded as a standardized feature and that were available at chain stores like Walmart. This development involved different efforts at multiple locations, sometimes simultaneously, and over time. While Apple added Unicode-compliant Cherokee glyphs to the Macintosh in 2003, the Cherokee Nation, as a government entity, used PC computers rather than Macs. PCs had yet to implement Unicode-compliant Cherokee fonts, so there was little access to the writing system on their computers and no known community adoption. At the time, the Cherokee Nation was already using an adapted English font that displayed Cherokee characters but was not Unicode compliant.

    One of the first attempts to introduce a Unicode-compliant Cherokee font and keyboard came with the Indigenous Language Institute conference at Northeastern State University in Oklahoma in 2006, where the Institute made the font available on flash drives and provided training to language technologists at the Cherokee Nation. However, the program was not widely adopted due to anticipated wait times in getting the software installed on Cherokee Nation computers. Further, the majority of users did not understand the difference between the new Unicode-compliant fonts and the non-Unicode fonts they were already using. The non-Unicode Cherokee font and keyboard used the same keystrokes, and looked the same on screen as the Unicode-compliant system, but certain keys (especially those for punctuation) produced glyphs that would not transfer between computers, so files could not be sent and re-opened on another computer without requiring extensive corrections. The value of Unicode compliance lies in the interoperability it provides across systems, the crucial first step towards integration with mobile devices, which are more useful in remote communities than desktop computers. Addition to Unicode is the first of five steps—including development of CLDR, an open source font, a keyboard layout design, and a word frequency list—before companies can encode a new language into their platforms for computer operating systems. These five steps act as a space of exchange between Indigenous writing systems and digital platforms, within which differences are negotiated.
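    To make the distinction concrete, the following minimal Python sketch (ours, not the Cherokee Nation’s own tooling) inspects a string’s code points: Unicode-compliant Cherokee text is made of characters from the Cherokee block of the Unicode Standard, whereas a legacy “font hack” document stores ordinary Latin code points that only look like syllabary when a special font is installed, which is why such files fall apart when moved to another machine.

        import unicodedata

        def describe(text):
            """Print each character's code point and Unicode character name."""
            for ch in text:
                print(f"U+{ord(ch):04X}  {unicodedata.name(ch, 'UNNAMED')}")

        def is_unicode_cherokee(text):
            """True if every non-space character sits in the Cherokee block (U+13A0-U+13FF)."""
            return all(ch.isspace() or 0x13A0 <= ord(ch) <= 0x13FF for ch in text)

        describe("ᏣᎳᎩ")                     # CHEROKEE LETTER TSA, LA, GI
        print(is_unicode_cherokee("ᏣᎳᎩ"))   # True: genuine Cherokee code points
        print(is_unicode_cherokee("abc"))    # False: Latin code points dressed up by a hacked font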

    CLDR

    The Common Locale Data Repository (CLDR) is a set of key terms for localization, including months, days, years, countries, and currencies, as well as their abbreviations. This core information is localized on the iPhone and becomes the base from which calendars and other native and external apps draw on the device. Many Indigenous languages, including Cherokee, don’t have bureaucratic language, such as abbreviations for days of the week, and need to create them—the Cherokee Nation’s Translation Department and Language Technology Department worked together to create new Cherokee abbreviations for calendrical terms.

    Figure 3. Weather in Cherokee (image source: authors)
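    The Cherokee CLDR data that resulted from this work now ships with common internationalization libraries. As a minimal sketch, the Python library Babel bundles a CLDR release and, assuming that release includes the Cherokee (“chr”) locale, the day and month names agreed on by the translators can be read back directly:

        from datetime import date
        from babel import Locale
        from babel.dates import format_date

        # Babel bundles CLDR locale data; "chr" is the Cherokee locale code.
        chr_locale = Locale.parse("chr")
        print(chr_locale.days["format"]["abbreviated"])      # abbreviated weekday names
        print(chr_locale.months["format"]["wide"])           # full month names
        print(format_date(date(2010, 12, 1), locale="chr"))  # a date rendered from CLDR patterns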

    Open Source Font

    Small communities don’t have budgets to purchase fonts for their languages, and such fonts also aren’t financially viable for commercial companies to develop, so the challenge for minority language activists is to find sponsorship for the creation of an open source font that will work across systems and is available for anyone to adopt into any computer or device system. Working with Feeling, Michael Everson developed the open source font for Cherokee. Plantagenet Cherokee (designed by Ross Mills) was the first font to bring Cherokee to Windows (Vista) and Mac (Panther). If there is no font on a Unicode-compliant device—that is, the device does not have the language glyphs embedded—then users will see a string of boxes, the default filler for Unicode points that are not showing up in the system.
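    The “string of boxes” problem can be anticipated by checking whether a given font file actually maps the Cherokee code points. A minimal sketch with the fontTools library follows; the font path is a placeholder, not a reference to any particular released font.

        from fontTools.ttLib import TTFont

        # Check whether a font file covers the 85 original Cherokee code points
        # (U+13A0-U+13F4); uncovered points render as fallback boxes ("tofu").
        font = TTFont("SomeCherokeeFont.ttf")       # placeholder path
        cmap = font["cmap"].getBestCmap()           # maps code point -> glyph name
        block = range(0x13A0, 0x13F5)
        missing = [cp for cp in block if cp not in cmap]
        print(f"{len(missing)} of {len(block)} Cherokee code points are unmapped")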

    Keyboard Layout

    New languages need an input method, and companies generally want the most widely used versions made available in open source. Cherokee has both a QWERTY keyboard, which is a phonetically-based Cherokee language keyboard, and a “Cherokee Nation” layout using the syllabary. Digital keyboards for mobile technologies are more complicated to create than physical keyboards and involve intricate collaboration between language specialists and developers. When developing the Cherokee digital keyboard for the iPhone, Apple worked in conjunction with the Translation Department and Language Technology Department at the Cherokee Nation, experimenting with several versions to accommodate the 85 Cherokee characters in the syllabary without creating too many alternate keyboards (the Cherokee Nation’s original involved 13 keyboards, whereas English has 3). Apple ultimately adapted a keyboard that involved two different ways of typing on the same keyboard, combining pop-up keys and an autocomplete system.

    Figure 4. Mobile device keyboard in Cherokee (image source: authors)

    Word Frequency List

    The word frequency list is a standard requirement for most operating systems to support autocorrect spelling and other tasks on digital devices. Programmers need a word database, in Unicode, large enough to adequately source programs such as autocomplete. In order to generate the many thousands of words needed to seed the database, the Cherokee Nation had to provide Cherokee documents typed in the Unicode version of the language. But as with other languages, there were many older attempts to embed Cherokee in typewriters and computers that predate Unicode, leading to a kind of catch-22: the Cherokee Nation needed to already have documents produced in Unicode in order to get the language into computers and operating systems and adopted for mobile technologies, but they didn’t have many documents in Unicode because the language hadn’t yet been integrated into those Unicode-compliant systems. In the end the CN employed Cherokee speakers to create new documents in Unicode—re-typing the Cherokee Bible and other documents—to create enough words for a database. Their efforts were complicated by the existence of multiple versions of the language and spelling, and previous iterations of language technology and infrastructure.
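    A minimal sketch of how such a seed list can be generated once Unicode documents exist follows; the folder name is a placeholder, and splitting on whitespace is a deliberate simplification of the tokenization and spelling normalization a real pipeline would need.

        from collections import Counter
        from pathlib import Path

        # Count word frequencies across a folder of Unicode Cherokee text files.
        counts = Counter()
        for path in Path("cherokee_texts").glob("*.txt"):    # placeholder folder
            counts.update(path.read_text(encoding="utf-8").split())

        # Write the list most-frequent-first, the usual shape of an autocomplete seed.
        with open("chr_frequency_list.txt", "w", encoding="utf-8") as out:
            for word, n in counts.most_common():
                out.write(f"{word}\t{n}\n")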

    Translation

    Many of the English language words and phrases that are important to computational concepts, such as “security,” don’t have obvious equivalents in Cherokee (or as Feeling said, “we don’t have that”). How does one say “error message” in Cherokee? The CN Translation Department invented words—striving for both clarity and agreement—in order to address coding concepts for operating systems, error messages, and other phrases (which are often confusing even in English) as well as more general language such as the abbreviations discussed above. Feeling and Erb worked together with elders, CN staff, and professional Cherokee translators to invent descriptive Cherokee words for new concepts and technologies, such as ᎤᎦᏎᏍᏗ (u-ga-ha-s-di) or “to watch over something” for security; ᎦᎵᏓᏍᏔᏅ ᏓᎦᏃᏣᎳᎬᎯ (ga-li-da-s-ta-nv da-ga-no-tsa-la-gv-hi) or “something is wrong” for error message; ᎠᎾᎦᎵᏍᎩ ᎪᏪᎵ (a-na-ga-li-s-gi go-we-li) or “lightning paper” for email; and ᎠᎦᏙᎥᎯᏍᏗ ᎠᏍᏆᏂᎪᏗᏍᎩ (a-ga-no-v-hi-s-di a-s-qua-ni-go-di-s-gi) or “knowledge keeper” for computers. For English words like “luck” (as in “I’m feeling lucky,” a concept which doesn’t exist in Cherokee), they created new idioms, such as “ᎡᎵᏊ ᎢᎬᏱᏊ ᎠᏆᏁᎵᏔᏅ ᏯᏂᎦᏛᎦ” (e-li-quu i-gv-yi-quu a-qua-ne-li-ta-na ya-ni-ga-dv-ga) or “I think I’ll find it on the first try.”

    Sorting

    When the Unicode-compliant Plantagenet Cherokee font was first introduced in Microsoft Windows OS in Vista (2006), the company didn’t add Cherokee to the sorting function (the ability to sort files by numeric or alphabetic order) in its system. When Cherokee speakers named files in the language, they arrived at the limits of the language technology. These limits determine parameters in a user’s personal computing, the point at which naming files in Cherokee or keeping a computer calendar in Cherokee become forms of language activism that reveal the underlying dominance of English in the deeper infrastructure of computational systems. When a user sent a file with Cherokee characters, such as “ᏌᏊ” (sa-quu, or “one”) and “ᏔᎵ” (ta-li or “two”), receiving computers could not put the file into one place or another because the core operating system had no sorting order for the Unicode points of Cherokee, and the computer would crash. Sorting orders in Cherokee were not added to Microsoft until Windows 8.
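    In principle the fix is to give Cherokee strings an explicit collation, so that any two file names compare in a defined way. The sketch below builds a sort key from the code-point order of the Cherokee block; this ordering is our simplifying assumption, not the tailored collation that a real operating system ships.

        # Give Cherokee strings a defined sort order instead of undefined behavior.
        CHEROKEE_START, CHEROKEE_END = 0x13A0, 0x13FF

        def sort_key(text):
            key = []
            for ch in text:
                if CHEROKEE_START <= ord(ch) <= CHEROKEE_END:
                    key.append(ord(ch) - CHEROKEE_START)     # position within the block
                else:
                    key.append(0x10000 + ord(ch))            # sort non-Cherokee characters after
            return key

        filenames = ["ᏔᎵ", "ᏌᏊ"]   # "ta-li" (two) and "sa-quu" (one), as above
        print(sorted(filenames, key=sort_key))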

    Parental Controls

    Part of the protocol for operating systems involves standard protections like parental controls—the ability to enable a program to automatically censor inappropriate language. In order to integrate Cherokee into an OS, the company needed lists of offensive language or “curse words” that could be flagged in parental restrictions settings for their operating system. Meeting the needs of these protocols was difficult linguistically and culturally, because Cherokee does not have the same cultural taboos as English around words for sexual acts or genitals; most Cherokee words are “clean words,” with offensive speech communicated through context rather than the words themselves. Also, because the Cherokee language involves tones, inappropriate meanings can arise from alternate tonal emphases (and the tone is not reflected in the syllabary). Elder Cherokee speakers found it culturally difficult to speak aloud those elements of Cherokee speech that are offensive, while non-Cherokee speaking computer company employees who had worked with other Indigenous languages did not always understand that not all Indigenous languages are alike—“curse words” in one language are not inappropriate in others. Finally, almost all of the potentially offensive Cherokee words that certain technology companies sought not only did not carry the same offensive connotations as their English translations, but also carried dual or multiple meanings, and if blocked would also block a common word that had no inappropriate meaning.
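    The over-blocking worry is visible even in a toy version of a list-based filter. The tokens below are hypothetical placeholders rather than actual Cherokee words; the point is only that a flat blocklist cannot see context, multiple meanings, or the tonal distinctions that carry meaning in speech.

        # Toy blocklist filter illustrating the over-blocking problem described above.
        BLOCKLIST = {"wordA"}   # hypothetical term flagged as offensive in one of its readings

        def is_blocked(message):
            return any(token in BLOCKLIST for token in message.split())

        # If "wordA" also has a common, harmless meaning, every message that uses it
        # in the harmless sense is blocked too.
        print(is_blocked("wordA wordB"))   # True, regardless of the intended meaning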

    Mapping and Place Names

    One of the difficulties for Cherokees working to create Cherokee language country names and territories was the Cherokee Nation’s own exclusion from the lists. Speakers translated the names of even tiny nations into Cherokee for lists and maps in which the Cherokee Nation itself did not appear. Discussions of terminologies for countries and territories were frustrating because the Cherokee themselves were not included, making the colonial erasure of Indigenous nationhood and territories visible to Cherokee speakers as they did the translations. Erb is currently working with Google Maps to revise their digital maps to show federally recognized tribal nations’ territories.

    Passwords and Security

    One of the first attempts to introduce Unicode-compliant Cherokee on computers for the Immersion School, ᏣᎳᎩ ᏧᎾᏕᎶᏆᏍᏗ (tsa-la-gi tsu-na-de-lo-qua-s-di), involved problems and glitches that temporarily set back adoption of Unicode systems. The CN Language Technology Department added the Unicode-compliant font and keyboards to an Immersion School curriculum developer’s computer. However, at the time computers could only accept English passwords. After the curriculum developer had been typing in Cherokee and left their desk, their computer automatically logged off (auto-logoff is standard security for government computers). Temporarily locked out of their computer, they couldn’t switch their keyboard back to English to type the English password. Other teachers and translators heard about this “lockout” and most decided against having the new Unicode-compliant fonts on their computers. Glitches like these slowed the rollout of Unicode-compliant fonts and set back the adoption process in the short term.

    Community Adoption

    When computers began to enter Cherokee communities, Feeling recalls his own hesitation about social media sites like Facebook: “I was afraid to use that.” When a contested election for Chief of the Nation took place in 2011 and social media provided faster updates than traditional media, many community members signed up for Facebook accounts so they could keep abreast of the latest news about the election.

    Figure 5. Facebook in Cherokee (image source: authors)

    Similarly, when Cherokee first became available on the iPhone 4.1, many Cherokee people were reluctant to use it. Feeling says he was “scared that it wouldn’t work, like people would get mad or something.” But older speakers wanted to communicate with family members in Cherokee, and they provided the pressure for others to begin using mobile devices in the language. Feeling’s older brother, also a fluent speaker, bought an iPhone just to text with his brother in Cherokee, because his Android phone wouldn’t properly display the language.

    In 2009, the Cherokee Nation introduced Macintosh computers in a 1:1 computer-to-student ratio for the second and third grades of the Cherokee Immersion school, and gave students air cards to get wireless internet service at home through cell towers (because internet was unavailable in many rural Cherokee homes). Up to this point the students spoke in Cherokee at school, but rarely generalized their Cherokee language outside of school or spoke it at home. With these tools, students could—and did—get on FaceTime and iChat from home and in other settings to talk with classmates in Cherokee. For some parents, it was the first time they had heard their children speaking Cherokee at home. This success convinced many in the community of the worth of Cherokee language technologies for digital devices.

    The ultimate community adoption of Cherokee in digital forms—computers, mobile devices, search engines and social media—came when the technologies were most applicable to community needs. What worked was not clunky modems for desktops but iPhones that could function in communities without internet infrastructure. The story of Cherokee adoption into digital devices illustrates the pull towards English-language structures of standardization for Indigenous and minority language speakers, who are faced with challenges of skill acquisition and adaptation; language development histories that involve versions of orthographies, spellings, neologisms and technologies; and problems of abstraction from community context that accompany codifying practices. Facing the precarity of an eroding language base and the limitations and possibilities of digital devices, the Cherokee and other Indigenous communities have strategically adapted hardware and software for cultural and political survivance. Durbin Feeling describes this adaptation as a Cherokee trait: “It’s the type of people that are curious or are willing to learn. Like we were in the old times, you know? I’m talking about way back, how the Cherokees adapted to the English way….I think it’s those kind of people that have continued in a good way to use and adapt to whatever comes along, be it the printing press, typewriters, computers, things like that. … Nobody can take your language away. You can give it away, yeah, or you can let it die, but nobody can take it away.”

    Indigital Frameworks

    Our case study reveals important processes in the integration of Cherokee knowledge systems with the information and communication technologies that have transformed notions of culture, society and space (Brey 2003). This kind of creative fusion is nothing new—Indigenous peoples have been encountering and exchanging with other peoples from around the world and adopting new materials, technologies, ideas, standards, and languages to meet their own everyday needs for millennia. The emerging concept indigital describes such encounters and collisions between the digital world and Indigenous knowledge systems, as highlighted in The Digital Arts and Humanities (Travis and von Lünen 2016). Indigital describes the hybrid blending or amalgamation of Indigenous knowledge systems including language, storytelling, calendar making, and song and dance, with technologies such as computers, Internet interfaces, video, maps, and GIS (Palmer 2009, 2012, 2013, 2016). Indigital constructs are forms of what Bruno Latour calls technoscience (1987), the merging of science, technology, and society—but while Indigenous peoples are often left out of global conversations regarding technoscience, the indigital framework attempts to bridge such conversations.

    Indigital constructs exist because knowledge systems like language are open, dynamic, and ever-changing; are hybrid as two or more systems mix, producing a third; require the sharing of power and space which can lead to reciprocity; and are simultaneously everywhere and nowhere (Palmer 2012). Palmer associates indigital frameworks with Indigenous North Americans and the mapping of Indigenous lands by or for Indigenous peoples using maps and GIS (2009; 2012; 2016). GIS is a digital mapping and database software used for collecting, manipulating, analyzing, and mapping various spatial phenomena. Indigenous language, place-names, and sacred sites often converge with GIS resulting in indigital geographic information networks. The indigital framework, however, can be applied to any encounter and exchange involving Indigenous peoples, technologies, and cultures.

    First, indigital constructs emerge locally, often when individuals or groups of individuals adopt and experiment with culture and technology within spaces of exchange, as happens in the moments of challenge and success in the integration of Cherokee writing systems to digital devices outlined in this essay. Within spaces of exchange, cultural systems like language and technology do not stand alone as dichotomous entities. Rather, they merge together creating multiplicity, uncertainty, and hybridization. Skilled humans, typewriters, index cards, file cabinets, language orthographies, Christian Bibles, printers, funding sources, transnational corporations, flash drives, computers, and cell-phones all work to stabilize and mobilize the digitization of the Cherokee language. Second, indigital constructs have the potential to flow globally; Indigenous groups and communities tap into power networks constructed by global transnational corporations, like Apple, Google, or IBM. Apple and Google are experts at creating standardized computer designs while connecting with a multitude of users. During negotiations with Indigenous communities, digital technologies are transformative and can be transformed. Finally, indigital constructs introduce different ways that languages can be represented, understood, and used. Differences associated with indigital constructs include variations in language translations, multiple meanings of offensive language, and contested place-names. Members of Indigenous communities have different experiences and reasons for adopting or rejecting the use of indigital constructs in the form of select digital devices like personal computers and cell-phones.

    One hopeful aspect in this process is the fact that Indigenous knowledge systems and digital technologies are combinable. The idea of combinability is based on the convergent nature of digital technologies and the creative intention of the artist-scientist. In fact, electronic technologies enable new forms from such combinations, like Cherokee language keyboards, Kiowa story maps and GIS, or Maori language dictionaries. Digital recordings of community members or elders telling important stories that hold lessons for future generations are becoming more widely available, made using either audio or visual devices or a combination of both formats. Digital prints of maps can be easily carried to roundtables for discussion about the environment (Palmer 2016), with audiovisual images edited on digital devices and uploaded or downloaded to other digital devices and eventually connected to websites. The mapping of place-names, creation of Indigenous language keyboards, and integration of stories into GIS require standardization, yet those standards are often defined by technocrats far removed from Indigenous communities, with a lack of input from community members and elders. Whatever the intention of the elders telling the story or the digital artist creating the construction, this is an opportunity for the knowledge system and its accompanying information to be shared.

    Ultimately, how do local negotiations on technological projects influence final designs and representations? Indigital constructions (and spaces) are hybrid and require mixing at least two things to create a new third construct or third space (Bhabha 2006). Creation of a new Cherokee bureaucratic language to meet the needs of the iPhone CLDR requirements for representing calendar elements, with the negotiations between Cherokee language specialists and computer language specialists, resulted in hybrid space-times: a hybrid calendar shared as a form of Cherokee-constructed technoscience. The same process applied to the development of specialized and now standardized Cherokee fonts and keyboards for the iPhone. A question for future research might be how much Unicode standardization transforms the Cherokee language in terms of meaning and understanding. What elements of Cherokee are altered and how are the new constructs interpreted by community members? How might Cherokee fonts and keyboards contribute to the sustainability of Indigenous culture and put language into practice?

    Survival of indigital constructs requires reciprocity between systems. Indigital constructions are not set up as one-way flows of knowledge and information. Rather, indigital constructions are spaces for negotiation, featuring the ideas and thoughts of the participants. Reciprocity in this sense means cross-cultural exchange on equal footing, since a concentration of power on one side will consume any kind of rights-based approach to building bridges among all participants. One-way flows of knowledge are revealed when Cherokee or other Indigenous informants providing place-names to Apple, Microsoft, or Google realize that their own geographies are not represented. They are erased from the maps. Indigenous geographies are often trivialized as being local, vernacular, and particular to a culture, which goes against the grain of technoscience standardization and universalization. The trick of indigital reciprocity is shared power, networking (Latour 2005), assemblages (Deleuze and Guattari 1988), decentralization, trust, and collective responsibility. If all these relations are in place, rights-based approaches to community problems have a chance of success.

    Indigital constructions are everywhere—Cherokee iPhone language applications or Kiowa stories in GIS are just a few examples, and many more occur in film, video, and other digital media types not discussed in this article. Yet, ironically, indigital constructions are also very distant from the reality of many Indigenous people on a global scale. Indigital constructions are primarily composed in the developed world, especially what is referred to as the global north. There is still a deep digital divide among Indigenous peoples and many Indigenous communities do not have access to digital technologies. How culturally appropriate are digital technologies like video, audio recordings, or digital maps? The indigital is distant in terms of addressing social problems within Indigenous communities. Oftentimes, there is a fear of the unknown in communities like the one described by Durbin Feeling in reference to adoption of social media applications like Facebook. Some Indigenous communities consider carefully the implications of adopting social media or language applications created for community interactions. Adoption may be slow, or not meet the expectations of software developers. Many questions arise in this process. Do creativity and social application go hand in hand? Sometimes we struggle to understand how our work can be applied to everyday problems. What is the potential of indigital constructions being used for rights-based initiatives?

    Conclusion

    English-speakers don’t often pause to consider how their language comes to be typed, displayed, and shared on digital devices. For Indigenous communities, the dominance of majoritarian languages on digital devices has contributed to the erosion of their language. While the isolation of many Indigenous communities in the past helped to protect their languages, that same isolation has required incredible efforts for minority language speakers to assert their presence in the infrastructures of technological systems. The excitement over the turn to digital media in Indian country is an easy story to tell to a techno-positive public, but in fact this turn involves a series of paradoxes: we take materials out of Indigenous lands to make our devices, and then we use them to talk about it; we assert sovereignty within the codification of standardized practices; we engage new technologies to sustain Indigenous cultural practices even as technological systems demand cultural transformation. Such paradoxes get to the heart of deeper questions about culturally-embedded technologies, as the modes and means of our communication shift to the screen. To what extent does digital media re-make the Indigenous world, or can it function just as a tool? Digital media are functionally inescapable and have come to constitute elements of our self-understanding; how might such media change the way Indigenous participants understand the world, even as they note their own absences from the screen? The insights from the technologization of Cherokee writing engage us with these questions along with closer insights into multiple forms of Indigenous information and communications technology and the emergence of indigital creations, inventing the next generation of language technology.

    _____

    Joseph Lewis Erb is a computer animator, film producer, educator, language technologist and artist enrolled in the Cherokee Nation. He earned his MFA from the University of Pennsylvania, where he created the first Cherokee animation in the Cherokee language, “The Beginning They Told.” He has used his artistic skills to teach Muscogee Creek and Cherokee students how to animate traditional stories. Most of this work is created in the Cherokee language, and he has spent many years working on projects that will expand the use of the Cherokee language in technology and the arts. Erb is an assistant professor at the University of Missouri, teaching digital storytelling and animation.

    Joanna Hearne is associate professor in the English Department at the University of Missouri, where she teaches film studies and digital storytelling. She has published a number of articles on Indigenous film and digital media, animation, early cinema, westerns, and documentary, and she edited the 2017 special issue of Studies in American Indian Literatures on “Digital Indigenous Studies: Gender, Genre and New Media.” Her two books are Native Recognition: Indigenous Cinema and the Western (SUNY Press, 2012) and Smoke Signals: Native Cinema Rising (University of Nebraska Press, 2012).

    Mark H. Palmer is associate professor in the Department of Geography at the University of Missouri who has published research on institutional GIS and the mapping of Indigenous territories. Palmer is a member of the Kiowa Tribe of Oklahoma.

    Back to the essay

    _____

    Acknowledgements

    [*] The authors would like to thank Durbin Feeling for sharing his expertise and insights with us, and the University of Missouri Peace Studies Program for funding interviews and transcriptions as part of the “Digital Indigenous Studies” project.

    _____

    Works Cited

    • Bhabha, Homi K. and J. Rutherford. 2006. “Third Space.” Multitudes 3. 95-107.
    • Brey, P. 2003. “Theorizing Modernity and Technology.” In Modernity and Technology, edited by T.J. Misa, P. Brey, and A. Feenberg, 33-71. Cambridge: MIT Press.
    • Byrd, Jodi A. 2015. “’Do They Not Have Rational Souls?’: Consolidation and Sovereignty in Digital New Worlds.” Settler Colonial Studies: 1-15.
    • Cushman, Ellen. 2011. The Cherokee Syllabary: Writing the People’s Perseverance. Norman: University of Oklahoma Press.
    • Deleuze, Gilles, and Félix Guattari. 1988. A Thousand Plateaus: Capitalism and Schizophrenia. New York: Bloomsbury Publishing.
    • Evans, Murray. 2011. “Apple Teams Up to Use iPhone to Save Cherokee Language.” Huffington Post (May 25).
    • Feeling, Durbin. 1975. Cherokee-English Dictionary. Tahlequah: Cherokee Nation of Oklahoma.
    • Feeling, Durbin and Joseph Erb. 2016. Interview with Durbin Feeling, Tahlequah, Oklahoma. 30 July.
    • Ginsburg, Faye. 1991. “Indigenous Media: Faustian Contract or Global Village?” Cultural Anthropology 6:1. 92-112.
    • Ginsburg, Faye. 2008. “Rethinking the Digital Age.” In Global Indigenous Media: Culture, Poetics, and Politics, edited by Pamela Wilson and Michelle Stewart. Durham: Duke University Press. 287-306.
    • Goriunova, Olga and Alexei Shulgin. 2008. “Glitch.” In Software Studies: A Lexicon, edited by Matthew Fuller. Cambridge, MA: MIT Press. 110-18.
    • Hearne, Joanna. 2017. “Native to the Device: Thoughts on Digital Indigenous Studies.” Studies in American Indian Literatures 29:1. 3-26.
    • Hermes, Mary, et al. 2016. “New Domains for Indigenous Language Acquisition and Use in the USA and Canada.” In Indigenous Language Revitalization in the Americas, edited by Teresa L. McCarty and Serafin M. Coronel-Molina. London: Routledge. 269-291.
    • Hudson, Brian. 2016. “If Sequoyah Was a Cyberpunk.” 2nd Annual Symposium on the Future Imaginary, August 5th, University of British Columbia-Okanagan, Kelowna, B.C.
    • Latour, Bruno. 1987. Science in Action: How to Follow Scientists and Engineers through Society. Cambridge, MA: Harvard University Press.
    • Latour, Bruno. 2005. Reassembling the Social: An Introduction to Actor-Network Theory. Oxford: Oxford University Press.
    • Lewis, Jason. 2014. “A Better Dance and Better Prayers: Systems, Structures, and the Future Imaginary in Aboriginal New Media.” In Coded Territories: Tracing Indigenous Pathways in New Media Art, edited by Steven Loft and Kerry Swanson. Calgary: University of Calgary Press. 49-78.
    • Manovich, Lev. 2002. The Language of New Media. Cambridge, MA: MIT Press.
    • Nakamura, Lisa. 2013. “Glitch Racism: Networks as Actors within Vernacular Internet Theory.” Culture Digitally.
    • Nakamura, Lisa. 2014. “Indigenous Circuits: Navajo Women and the Racialization of Early Electronic Manufacture.” American Quarterly 66:4. 919-941.
    • Palmer, Mark. 2016. “Kiowa Storytelling around a Map.” In Travis and von Lünen (2016). 63-73.
    • Palmer, Mark. 2013. “(In)digitizing Cáuigú Historical Geographies: Technoscience as a Postcolonial Discourse.” In History and GIS: Epistemologies, Considerations and Reflections, edited by A. von Lünen and C. Travis. Dordrecht, NLD: Springer Publishing. 39-58.
    • Palmer, Mark. 2012. “Theorizing Indigital Geographic Information Networks.” Cartographica: The International Journal for Geographic Information and Geovisualization 47:2. 80-91.
    • Palmer, Mark. 2009. “Engaging with Indigital Geographic Information Networks.” Futures: The Journal of Policy, Planning and Futures Studies 41. 33-40.
    • Palmer, Mark and Robert Rundstrom. 2013. “GIS, Internal Colonialism, and the U.S. Bureau of Indian Affairs.” Annals of the Association of American Geographers 103:5. 1142-1159.
    • Raheja, Michelle. 2011. Reservation Reelism: Redfacing, Visual Sovereignty, and Representations of Native Americans in Film. Lincoln: University of Nebraska Press.
    • Simpson, Audra. 2014. Mohawk Interruptus: Political Life Across the Borders of Settler States. Durham: Duke University Press.
    • Travis, C. and A. von Lünen. 2016. The Digital Arts and Humanities. Basel, Switzerland: Springer.
    • Vizenor, Gerald. 2000. Fugitive Poses: Native American Indian Scenes of Absence and Presence. Lincoln: University of Nebraska Press.
    • Wolfe, Patrick. 2006. “Settler Colonialism and the Elimination of the Native.” Journal of Genocide Research 8:4. 387-409.