boundary 2

  • Zachary Loeb — Who Moderates the Moderators? On the Facebook Files

    by Zachary Loeb

    ~

    Speculative fiction is littered with fantastical tales warning of the dangers that arise when things get, to put it amusingly, too big. A researcher loses control of their experiment! A giant lizard menaces a city! Massive computer networks decide to wipe out humanity! A horrifying blob metastasizes as it incorporates all that it touches into its gelatinous self!

Such stories generally contain at least a faint hint of the absurd. Nevertheless, silly stories can still contain important lessons, and some of the morals that one can pull from such tales are: that big things keep getting bigger, that big things can be very dangerous, and that sometimes things that get very big very fast wind up doing a fair amount of damage as what appears to be controlled growth is revealed to actually be far from managed. It may not necessarily always be a case of too big, as in size, but speculative fiction features no shortage of tragic characters who accidentally unleash some form of horror upon an unsuspecting populace because things were too big for those sorry individuals to control. The mad scientist has a sad corollary in the figure of the earnest scientist who wails “I meant well” while watching their creation slip free from their grasp.

Granted, if you want to see such a tale of the dangers of things getting too big and the desperate attempts to maintain some sort of control, you don’t need to go looking for speculative fiction.

    You can just look at Facebook.

With its publication of The Facebook Files, The Guardian has pried back the smiling façade of Zuckerberg’s monster to reveal a creature that an overwhelmed staff is desperately trying to contain, with less than clear insight into how best to keep things under control. Drawing on a host of presentations and guidelines given to Facebook’s mysterious legion of content moderators, The Facebook Files provide insight into how the company determines what is and is not permitted on the website. It is a tale littered with details about the desperate attempt to screen things that are being uploaded at a furious rate, with moderators often having only a matter of seconds in which to decide whether or not something is permitted. It is a set of leaks that is definitely worth considering, as they provide an exposé of the guidelines Facebook moderators use when judging whether things truly qualify as revenge porn, child abuse, animal abuse, self-harm, unacceptable violence, and more. At the very least, the Facebook Files are yet another reminder of the continuing validity of Erich Fromm’s wise observation:

    What we use is not ours simply because we use it. (Fromm 2001, 225)

In considering the Facebook Files it is worthwhile to recognize that the moderators are special figures in this story – they are not really the villains. The people working as actual Facebook moderators are likely not the same people who truly developed these guidelines. In truth, they likely weren’t even consulted. Furthermore, the moderators are almost certainly not the high-profile Facebook executives espousing techno-utopian ideologies in front of packed auditoriums. To put it plainly, Mark Zuckerberg is not checking to see if the thousands of photos being uploaded every second fit within the guidelines. In other words, having a measure of sympathy for the Facebook moderators who spend their days judging a mountain of (often disturbing) content is not the same thing as having any sympathy for Facebook (the company) or for its figureheads. Moreover, Facebook has already automated a fair amount of the moderating process, and it is more than likely that Facebook would love to be able to ditch all of its human moderators in favor of an algorithm. Given the rate at which it expects them to work, it seems that Facebook already thinks of its moderators as being little more than cogs in its vast apparatus.

That last part helps point to one of the reasons why the Facebook Files are so interesting – because they provide a very revealing glimpse of the type of morality that a machine might be programmed to follow. The Facebook Files – indeed, the very idea of Facebook moderators – are a massive hammer that smashes to bits the idea that technological systems are somehow neutral, for they put into clear relief the ways in which people are involved in shaping the moral guidelines to which the technological system adheres. The case of what is and is not allowed on Facebook is a story, playing out in real time, of a company (staffed by real live humans) trying to structure the morality of a technological space. Even once all of this moderating is turned over to an algorithm, these Files will serve as a reminder that the system is acting in accordance with a set of values and views that were programmed into it by people. And this whole tale of Facebook’s attempts to moderate sensitive/disturbing content points to the fact that morality can often be quite tricky. As many a trained ethicist will attest, moral matters are often rather complex – which is a challenge for Facebook, as algorithms tend to do better with “yes” and “no” than with matters that devolve into a lot of complex philosophical argumentation.

Thus, while a blanket “zero nudity” policy might be crude, prudish, and simplistic, it still represents a fairly easy way to separate allowed content from forbidden content. Similarly, a “zero violence” policy runs the risk of hiding the consequences of violence, masking the gruesome realities of war, and covering up a lot of important history – but it makes it easy to say “no videos of killings or self-harm are allowed at all.” Likewise, a strong “absolutely no threats of any sort” policy would mean that “someone shoot [specific political figure]” and “let’s beat up people with fedoras” would both be banned. By trying to parse these things Facebook has placed its moderators in tricky territory – and the guidelines it provides them with are not necessarily the clearest. Had Facebook maintained a strict “black and white” version of what is and is not permitted, it could have avoided the swamp through which it is now trudging with mixed results. Again, it is fair to have some measure of sympathy for the moderators here – they did not set the rules, but they will certainly be blamed, shamed, and likely fired for any failures to adhere to the letter of Facebook’s confusing law.

Part of the problem that Facebook has to contend with is clearly the matter of free speech. There are certainly some who will cry foul at any attempt by Facebook to moderate content, denouncing such things as censorship. Still others will scoff at the idea of free speech as applied to Facebook, seeing as it is a corporate platform and therefore all speech that takes place on the site already exists in a controlled space. A person may live in a country where they have a government-protected right to free speech – but Facebook has no such obligation to its users. There is nothing preventing Facebook from radically changing its policies about what is permissible. If Facebook decided tomorrow that no content related to, for example, cookies was to be permitted, it could make and enforce that decision. And the company could make that decision regarding things much less absurd than cookies – if Facebook wanted to ban any content related to a given protest movement it would be within its rights to do so (which is not to say that doing so would be good, but that it would be possible). In short, if you use Facebook you use it in accordance with its rules; the company does not particularly care what you think. And if you run afoul of one of its moderators you may well find your account suspended – you can cry “free speech,” but Facebook will retort “you agreed to our terms of use; Facebook is a private online space.” Here, though, a person may try to fire back at Facebook that in the 21st century, to a large extent, social media platforms like Facebook have become a sort of new public square.

    And, yet again, that is part of the reason why this is all so tricky.

Facebook clearly wants to be the new “public square” – it wants to be the space where people debate politics, where candidates hold forums, and where activists organize. Yet it wants all of these “public” affairs to take place within its own enclosed “private” space. There is no real democratic control of Facebook: the company may try to train its moderators to respect various local norms, but the people from those localities don’t get to have a voice in determining what is and isn’t acceptable. Facebook is trying desperately to have it all ways – it wants to be the key space of the public sphere while simultaneously pushing back against any attempts to regulate it or subject it to increased public oversight. As lackluster and problematic as the guidelines revealed by the Facebook Files are, they still demonstrate that Facebook is trying (with mixed results) to regulate itself so that it can avoid being subject to further regulation. Thus, free speech is both a sword and a shield for Facebook: the shield of “free speech” allows the company to hide from accusations that the site is filled with misogyny and xenophobia, even as it can pull out its massive terms of service agreement (updated frequently) to slash users with the blade that on the social network there is no free speech, only Facebook speech. The speech that Facebook is most concerned with is its own, and it will say and do what it needs to say and do to protect itself from constraints.

Yet, to bring it back to the points with which this piece began, many of the issues that the Facebook Files reveal have a lot to do with scale. Sorting out the nuance of an image or a video can take longer than the paltry few seconds most moderators are able to allot to each one. And some of the judgments that Facebook asks its moderators to make have less to do with morality or policy than with the huge question of how a moderator can possibly know whether something is in accordance with the policies at all. How does a moderator not based in a community really know if something is up to that community’s standard? Facebook is hardly some niche site with a small user base and a devoted cadre of moderators committed to keeping the peace – its moderators are overworked members of the cybertariat (a term borrowed from Ursula Huws), and the community they serve is Facebook, not the places from which its users hail. Furthermore, some of the more permissive policies – such as allowing images of animal abuse on the premise that they may help to alert the authorities – seem like more of an excuse than an admission of responsibility. Facebook has grown quite large, and it continues to grow. What it is experiencing is not so much a case of “growing pains” as a case of the pains that are inflicted on a society when something is allowed to grow out of control. Every week Facebook becomes more and more of a monopoly – but there seems to be little chance that it will be broken up (and it is unclear what that would mean or look like).

Facebook is the researcher’s science project that is always about to get too big and slip out of control, and the Facebook Files reveal the company’s frantic attempt to keep the beast from throwing off its shackles altogether. And the danger there, from Facebook’s standpoint, is that – as in all works where something gets too big and gets out of control – the point when it loses control is the point where governments step in to try to restore order. What that would look like in this case is quite unclear, and while the point is not to romanticize regulation, the Facebook Files help raise the question of who is currently doing the regulating and how they are doing it. That Facebook is having such a hard time moderating content on the site is actually a pretty convincing argument that when a site gets too big, the task of carefully moderating things becomes nearly impossible.

To deny that Facebook has significant power and influence is to deny reality. While it’s true that Facebook can only set the policy for the fiefdoms it controls, it is worth recognizing that many people spend a heck of a lot of time ensconced within those fiefdoms. The Facebook Files are not exactly a shocking revelation showing a company that desperately needs some serious societal oversight – rather, what is shocking about them is that they reveal that Facebook has been allowed to become so big and so powerful without any serious societal oversight. The Guardian’s article leading into the Facebook Files quotes Monika Bickert, Facebook’s head of global policy management, as saying that Facebook is:

    “not a traditional technology company. It’s not a traditional media company. We build technology, and we feel responsible for how it’s used.”

    But a question lingers as to whether or not these policies are really reflective of responsibility in any meaningful sense. Facebook may not be a “traditional” company in many respects, but one area in which it remains quite hitched to tradition is in holding to a value system where what matters most is the preservation of the corporate brand. To put it slightly differently, there are few things more “traditional” than the monopolistic vision of total technological control reified in Facebook’s every move. In his classic work on the politics of technology, The Whale and the Reactor, Langdon Winner emphasized the need to seriously consider the type of world that technological systems were helping to construct. As he put it:

    We should try to imagine and seek to build technical regimes compatible with freedom, social justice, and other key political ends…If it is clear that the social contract implicitly created by implementing a particular generic variety of technology is incompatible with the kind of society we deliberately choose—that is, if we are confronted with an inherently political technology of an unfriendly sort—then that kind of device or system ought to be excluded from society altogether. (Winner 1989, 55)

The Facebook Files reveal the type of world that Facebook is working tirelessly to build. It is a world where Facebook is even larger and even more powerful – a world in which Facebook sets the rules and regulations, in which Facebook says “trust us” and people are expected to obediently go along.

Yes, Facebook needs content moderators, but it also seems long past due for there to be people who moderate Facebook. And those people should not be cogs in the Facebook machine.

    _____

Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin and an MA from the Media, Culture, and Communications department at NYU, and is currently working towards a PhD in the History and Sociology of Science department at the University of Pennsylvania. His research areas include media refusal and resistance to technology, ideologies that develop in response to technological change, and the ways in which technology factors into ethical philosophy – particularly in regard to the ways in which Jewish philosophers have written about ethics and technology. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck, where an earlier version of this post first appeared, and is a frequent contributor to The b2 Review Digital Studies section.

    _____

    Works Cited

    • Fromm, Erich. 2001. The Fear of Freedom. London: Routledge Classics.
    • Winner, Langdon. 1989. The Whale and the Reactor. Chicago: The University of Chicago Press.
  • Gavin Mueller – Civil Disobedience in the Age of Cyberwar

    a review of Molly Sauter, The Coming Swarm: DDoS Actions, Hacktivism, and Civil Disobedience on the Internet (Bloomsbury Academic, 2014)

    by Gavin Mueller

    ~

    Molly Sauter’s The Coming Swarm begins in an odd way. Ethan Zuckerman, director of MIT’s Center for Civic Media, confesses in the book’s foreword that he disagrees with the book’s central argument: that distributed denial of service (DDoS) actions, where specific websites and/or internet servers are overwhelmed by traffic and knocked offline via the coordinated activity of many computers acting together, should be viewed as a legitimate means of protest.[1] “My research demonstrated that these attacks, once mounted by online extortionists as a form of digital protection racket, were increasingly being mounted by governments as a way of silencing critics,” Zuckerman writes (xii). Sauter’s argument, which takes the form of this slim and knotty book, ultimately does not convince Zuckerman, though he admits he is “a better scholar and a better person” for having engaged with the arguments contained within. “We value civic arguments, whether they unfold in the halls of government, a protest encampment, or the comments thread of an internet post because we believe in the power of deliberation” (xv). This promise of the liberal public sphere is what Sauter grapples with throughout the work, to varying levels of success.

    The Coming Swarm is not a book about DDoS activities in general. As Sauter notes, “DDoS is a popular tactic of extortion, harassment, and silencing” (6): its most common uses come from criminal organizations and government cyberwar operations. Sauter is not interested in these kinds of actions, which encompass the vast majority of DDoS uses. (DDoS itself is a subset of all denial of service or DoS attacks.) Instead they focus on self-consciously political DDoS attacks, first carried out by artist-hacker groups in the 1990s (the electrohippies and the Electronic Disturbance Theater) and more recent actions by the group Anonymous.[2] All told, these are a handful of actions, barely numbering in the double digits, and spread out over two decades. The focus on this small minority of cases can make the book’s argument seem question-begging, since Sauter does not make clear how and why it is legitimate to analyze exclusively those few instances of a widespread phenomenon that happen to conform to an author’s desired outlook. At one level, this is a general problem throughout the book, since Sauter’s analysis is confined to what they call “activist DDoS,” yet the actual meaning of this term is rarely interrogated: viewed from the perspective of the actors, many of the DDoS actions Sauter dismisses by stipulation could also be–and likely are–viewed as “activism.”

From their earliest inception, political DDoS actions were likened to “virtual sit-ins”: activists use their computers’ ability to ping a server to clog up its functioning, potentially slowing its activity or bringing it to a standstill. This situated the technique within a history of nonviolent civil disobedience, particularly that of the Civil Rights Movement. The metaphor has tended to overdetermine the debate over the use of DDoS in activist contexts, and Sauter is keen to move on from the connection: “such comparisons on the part of the media and the public serve to only stifle innovation within social movements and political action, while at the same time cultivating a deep and unproductive nostalgia for a kind of ‘ideal activism’ that never existed” (22-3). Sauter argues that not only does this leave out contributions to the Civil Rights Movement that the mainstream finds less than respectable; it also helps rule out the use of disruptive and destructive forms of activism in future movements.

This argument has merit, and many activists who want to move beyond nonviolent civil disobedience into direct action forms of political action appear to agree with it. Yet Sauter still wants to claim for DDoS actions the very label of civil disobedience that they at other moments discard: “activist DDoS actions are not meaningfully different from other actions within the history of civil disobedience… novelty cannot properly exempt activist DDoS from being classified as a tactic of civil disobedience” (27). However, the main criticisms of DDoS as civil disobedience have nothing to do with its novelty. As Evgeny Morozov points out in his defense of DDoS as a political tactic, “I’d argue, however, that the DDoS attacks launched by Anonymous were not acts of civil disobedience because they failed one crucial test implicit in Rawls’s account: Most attackers were not willing to accept the legal consequences of their actions.” Novelist and digital celebrity Cory Doctorow, who opposes DDoS-based activism, echoes this concern: “A sit-in derives its efficacy not from merely blocking the door to some objectionable place, but from the public willingness to stand before your neighbours and risk arrest and bodily harm in service of a moral cause, which is itself a force for moral suasion.” The complaint is not that DDoS fails to live up to the standards of the Civil Rights Movement, or that it is too novel. It is that it often fails the basic test of civil disobedience: potentially subjecting oneself to punishment as a form of protest that lays bare the workings of the state.

Zuckerman’s principal critique of Sauter’s arguments is that DDoS, by shutting down sites, censors speech opposed by activists rather than promoting their dissenting messages. Sauter has a two-pronged response to this. First, they say that DDoS attacks make the important point that the internet is not really a public space. Instead, it is controlled by private interests, with large corporations managing the vast majority of online space. This means that no arguments may rest, implicitly or explicitly, on the assumption that the internet is a Habermasian public sphere. Second, Sauter argues, by their own admission counterintuitively, that DDoS, properly contextualized as part of “communicative capitalism,” is itself a form of speech.

Communicative capitalism is a term developed by Jodi Dean as part of her critique of the Habermasian vision of the internet as a public sphere. With the commodification of online speech, “the exchange value of messages overtakes their use value” (58). The communication of messages is overwhelmed by the priority to circulate content of any kind: “communicative exchanges, rather than being fundamental to democratic politics, are the basic elements of capitalist production” (56). For Dean, this logic undermines political effects from internet communication: “The proliferation, distribution, acceleration and intensification of communicative access and opportunity, far from enhancing democratic governance or resistance, results in precisely the opposite – the post-political formation of communicative capitalism” (53). If, Sauter argues, circulation itself becomes the object of communication, the power of DDoS is to disrupt that circulation of content. “In that context the interruption of that signal becomes an equally powerful contribution…. Under communicative capitalism, it is possible that it is the intentional creation of disruptions and silence that is the most powerful contribution” (29).

    However, this move is contrary to the point of Dean’s concept; Dean specifically rejects the idea that any kind of communicative activity puts forth real political antagonism. Dean’s argument is, admittedly, an overreach. While capital cares little for the specificity of messages, human beings do: as Marx notes, exchange value cannot exist without a use value. Sauter’s own “counterintuitive” use of Dean points to a larger difficulty with Sauter’s argument: it remains wedded to a liberal understanding of political action grounded in the idea of a public sphere. Even when Sauter moves on to discussing DDoS as disruptive direct action, rather than civil disobedience, they return to the discursive tropes of the public sphere: DDoS is “an attempt to assert a fundamental view of the internet as a ‘public forum’ in the face of its attempted designation as ‘private property’” (45). Direct action is evaluated by its contribution to “public debate,” and Sauter even argues that DDoS actions during the 1999 Seattle WTO protests did not infringe on the “rights” of delegates to attend the event because they were totally ineffective. This overlooks the undemocratic and illiberal character of the WTO itself, whose meetings were held behind closed doors (one of the major rhetorical points of the protest), and it implies that the varieties of direct action that successfully blockaded meetings could be morally compromised. These kinds of actions, bereft of an easy classification as forms of speech or communication, are the forms of antagonistic political action Dean argues cannot be found in online space.

In this light, it is worth returning to some of the earlier theorizations of DDoS actions. The earliest DDoS activists, the electrohippies and the Electronic Disturbance Theater, documented the philosophies behind their work, and Rita Raley’s remarkable book Tactical Media presented a bracing theoretical synthesis of DDoS as an emergent critical art-activist practice. EDT’s most famous action deployed its FloodNet DDoS tool in pro-Zapatista protests. Its novel design incorporated something akin to speech acts: for example, it pinged servers belonging to the Mexican government with requests for “human rights,” leading to the return message “human rights not found on this server,” a kind of technopolitical pun. Yet Raley rejects a theorization of online political interventions strictly in terms of their communicative value. Rather, they are a curious hybrid of artistic experiment and militant interrogation, a Deleuzian event where one endeavors “to act without knowing the situation into which one will be propelled, to change things as they exist” (26).

The goal of EDT’s actions was not simply to have a message be heard, or even to garner media attention: as EDT’s umbrella organization the Critical Art Ensemble puts it in Electronic Civil Disobedience, “The indirect approach of media manipulation using a spectacle of disobedience designed to muster public sympathy and support is a losing proposition” (15). Instead, EDT took on the prerogatives of conceptual art — to use creative practice to pose questions and provoke response — in order to probe the contours of the emerging digital terrain and determine who would control it and how. That their experiments quickly raised the specter of terrorism, even in a pre-9/11 context, seemed to answer this. As Raley describes, drawing from RAND cyberwar researchers, DDoS and related tactics “shift the Internet ‘from the public sphere model and casts it more as conflicted territory bordering on a war zone’” (44).

While Sauter repeatedly criticizes treating DDoS actions as criminal, rather than political, acts, the EDT saw its work as both, and even analogous to terrorism. “Not that the activists are initiating terrorist practice, since no one dies in hyperreality, but the effect of this practice can have the same consequence as terrorism, in that state and corporate power vectors will haphazardly return fire with weapons that have destructive material (and even mortal) consequences” (25). Indeed, civil disobedience is premised on exploiting the ambiguities of activities that can be considered both crime and politics. Rather than attempt to fix distinctions after the fact, EDT recognized that the power of such actions lies precisely in collapsing these distinctions. EDT did criticize the overcriminalization of online activity, as does Sauter, whose analysis of the use of the Computer Fraud and Abuse Act to prosecute DDoS activities is some of the book’s strongest and most useful material.

    Sauter prefers the activities of Anonymous to the earlier actions by the electrohippies and EDT (although EDT co-founder Ricardo Dominguez has been up to his old tricks: he was investigated by the FBI and threatened with revocation of tenure for a “virtual sit-in” against the University of California system during the student occupations of 2010). This is because Anonymous’ actions, with their unpretentious lulzy ardor and open-source tools, “lower the barriers to entry” to activism (104): in other words, they leverage the internet’s capacity to increase participation. For Sauter, the value in Anonymous’ use of its DDoS tool, the Low Orbit Ion Cannon, against targets such as the MPAA and PayPal “lay in the media attention and new participants it attracted, who sympathized with Anonymous’ views and could participate in future actions” (115). The benefit of collaborative open-source development is similar, as is the tool’s feature that allows a user to contribute their computer to a “voluntary botnet” called the “FUCKING HIVE MIND” which “allows for the temporary sharing of an activist identity, which subsequently becomes more easily adopted by those participants who opt to remain involved” (130). This tip of the hat to theorists of participatory media once again reveals the notion of a democratic public sphere as a regulative ideal for the text.

    The price of all this participation is that a “lower level of commitment was required” (129) from activists, which is oddly put forth as a benefit. In fact, Sauter criticizes FloodNet’s instructions — “send your own message to the error log of the institution/symbol of Mexican Neo-Liberalism of your choice” — as relying upon “specialized language that creates a gulf between those who already understand it and those who do not” (112). Not only is it unclear to me what the specialized language in this case is (“neoliberalism” is a widely used, albeit not universally understood term), but it seems paramount that individuals opting to engage in risky political action should understand the causes for which they are putting themselves on the line. Expanding political participation is a laudable goal, but not at the expense of losing the content of politics. Furthermore, good activism requires training: several novice Anons were caught and prosecuted for participating in DDoS actions due to insufficient operational security measures.

What would it mean to take seriously the idea that the internet is not, in fact, a public sphere, and that, furthermore, the liberal notion of discursive and communicative activities impacting the decisions of rational individuals does not adequately describe contemporary politics? Sauter ends up in a compelling place, one akin to the earlier theorists of DDoS: war. After all, states are among the major participants in DDoS, and Sauter documents how Britain’s Government Communications Headquarters (GCHQ) used denial of service attacks, even though they were deemed illegal, against Anonymous itself. The involvement of state actors “could portend the establishment of a semipermanent state of cyberwar” with activists rebranded as criminals and even terrorists. This is consonant with Raley’s analysis of EDT’s own forays into online space. It also recalls the radical political work of ultraleft formations such as Tiqqun (I had anticipated that The Coming Swarm was a reference to The Coming Insurrection, though this does not seem to be the case), for whom war, specifically civil war, becomes the governing metaphor for antagonistic political practice under Empire.

This would mean that the future of DDoS actions and other disruptive online activism would lie not in their mobilization of speech, but in their building of capacities and organization of larger politicized formations. This could potentially be an opportunity to consider the varieties of DDoS so often bracketed away, which often rely on botnets and operate in undeniably criminal ways. Current hacker formations use these practices in political ways (Ghost Squad has recently targeted the U.S. military, cable news stations, the KKK, and Black Lives Matter, among others, with DDoS, accompanying each action with political manifestos). While Sauter claims, no doubt correctly, that these activities are “damaging to [DDoS’s] perceived legitimacy as an activist tactic” (160), they also note that measures to circumvent DDoS “continue to outstrip the capabilities of nearly all activist campaigns” (159). If DDoS has a future as a political tactic, it may be in the zones beyond what liberal political theory can touch.

    Notes

[1] Instances of DDoS are typically referred to, both in the popular press and by hacktivists, as “attacks.” Sauter prefers the term “actions,” a usage I follow here.

    [2] I follow Sauter’s preferred usage of the pronouns “they” and “them.”

    Works Cited

    • Critical Art Ensemble. 1996. Electronic Civil Disobedience. Brooklyn: Autonomedia.
• Dean, Jodi. 2005. “Communicative Capitalism: Circulation and the Foreclosure of Politics.” Cultural Politics 1, no. 1: 51-74.
    • Raley, Rita. 2009. Tactical Media. Minneapolis: University of Minnesota Press.
    • Sauter, Molly. 2014. The Coming Swarm: DDoS Actions, Hacktivism, and Civil Disobedience on the Internet. New York: Bloomsbury Academic.

    _____

    Gavin Mueller (@gavinsaywhat) holds a Ph.D. in Cultural Studies from George Mason University. He is currently a Visiting Assistant Professor of Emerging Media and Communication at the University of Texas-Dallas. He previously reviewed Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous for The b2 Review.
