boundary 2


    Leif Weatherby — Irony and Redundancy: The Alt Right, Media Manipulation, and German Idealism

    Leif Weatherby

    This essay has been peer-reviewed by “The New Extremism” special issue editors (Adrienne Massanari and David Golumbia), and the b2o: An Online Journal editorial board.

    Take three minutes to watch this clip from a rally in New York City just after the 2016 presidential election.[i] In the impromptu interview, we learn that Donald Trump is going to “raise the ancient city of Thule” and “complete the system of German Idealism.” In what follows, I’m going to interpret what the troll in the video—known only by his Twitter handle, @kantbot2000—is doing here. It involves Donald Trump, German Idealism, metaphysics, social media, and above all irony. It’s a diagnosis of the current relationship between mediated speech and politics. I’ll come back to Kantbot presently, but first I want to set the scene he’s intervening in.

    A small but deeply networked group of self-identifying trolls and content-producers has used the apparently unlikely rubric of German philosophy to diagnose our media-rhetorical situation. There’s less talk of trolls now than there was in 2017, but that doesn’t mean they’re gone.[ii] Take the recent self-introductory op-ed by Brazil’s incoming foreign minister, Ernesto Araújo, which bizarrely accuses Ludwig Wittgenstein of undermining the nationalist identity of Brazilians (and everyone else). YouTube remains the global channel of this Alt Right[iii] media game, as Andre Pagliarini has documented: one Olavo de Carvalho, whose channel is dedicated to the peculiar philosophical obsessions of the global Alt Right, is probably responsible for this foreign minister taking the position, apparently intended as policy, “I don’t like Wittgenstein,” and possibly for his appointment in the first place. The intellectuals playing this game hold that Marxist and postmodern theory caused the political world to take its present shape, and argue that a wide variety of theoretical tools should be reappropriated to the Alt Right. This situation presents a challenge to the intellectual Left on both epistemological and political grounds.

    The core claim of this group—one I think we should take seriously—is that mediated speech is essential to politics. In a way, this claim is self-fulfilling. Araújo, for example, imagines that Wittgenstein’s alleged relativism is politically efficacious; Wittgenstein arrives pre-packaged by the YouTube phenomenon Carvalho; Araújo’s very appointment seems to have been the result of Carvalho’s influence. That this tight ideological loop should realize itself by means of social media is not surprising. But in our shockingly naïve public political discussions—at least in the US—emphasis on the constitutive role of rhetoric and theory appears singular. I’m going to argue that a crucial element of this scene is a new tone and practice of irony that permeates the political. This political irony is an artefact of 2016, most directly, but it lurks quite clearly beneath our politics today. And to be clear, the self-styled irony of this group is never at odds with a wide variety of deeply held, and usually vile, beliefs. This is because irony and seriousness are not, and have never been, mutually exclusive. The idea that the two cannot cohabit is one of the more obvious weak points of our attempt to get an analytical foothold on the global Alt Right—to do so, we must traverse the den of irony.

    Irony has always been a difficult concept, slippery to the point of being undefinable. It usually means something like “when the actual meaning is the complete opposite from the literal meaning,” as Ethan Hawke tells Winona Ryder in 1994’s Reality Bites. Ryder’s plaint, “I know it when I see it,” points to just how many questions this definition raises. What counts as a “complete opposite”? What is the channel—rhetorical, physical, or otherwise—by which this dual expression can occur? What does it mean that what we express can contain not only implicit or connotative content, but can in fact make our speech contradict itself to some communicative effect? And for our purposes, what does it mean when this type of question embeds itself in political communication?

    Virtually every major treatment of irony since antiquity—from Aristotle to Paul de Man—acknowledges these difficulties. Quintilian gives us the standard definition: that the meaning of a statement is in contradiction to what it literally extends to its listener. But he still equivocates about its source:

    eo vero genere, quo contraria ostenduntur, ironia est; illusionem vocant. quae aut pronuntiatione intelligitur aut persona aut rei natura; nam, si qua earum verbis dissentit, apparet diversam esse orationi voluntatem. Quanquam in plurimis id tropis accidit, ut intersit, quid de quoque dicatur, quia quod dicitur alibi verum est.

    On the other hand, that class of allegory in which the meaning is contrary to that suggested by the words, involve an element of irony, or, as our rhetoricians call it, illusio. This is made evident to the understanding either by the delivery, the character of the speaker or the nature of the subject. For if any one of these three is out of keeping with the words, it at once becomes clear that the intention of the speaker is other than what he actually says. In the majority of tropes it is, however, important to bear in mind not merely what is said, but about whom it is said, since what is said may in another context be literally true. (Quintilian 1920, book VIII, section 6, 53-55)

    Speaker, ideation, context, addressee—all of these are potential sources for the contradiction. In other words, irony is not limited to the intentional use of contradiction, to a wit deploying irony to produce an effect. Irony slips out of precise definition even in the version that held sway for more than a millennium in the Western tradition.

    I’m going to argue in what follows that irony of a specific kind has re-opened what seemed a closed channel between speech and politics. Certain functions of digital, and specifically social, media enable this kind of irony, because the very notion of a digital “code” entailed a kind of material irony to begin with. This type of irony can be manipulated, but also exceeds anyone’s intention, and can be activated accidentally (this part of the theory of irony comes from the German Romantic Friedrich Schlegel, as we will see). It not only amplifies messages, but does so by resignifying, exploiting certain capacities of social media. Donald Trump is the master practitioner of this irony, and Kantbot, I’ll propose, is its media theorist. With this irony, political communication has exited the neoliberal speech regime; the question is how the Left responds.

    i. “Donald Trump Will Complete the System of German Idealism”

    Let’s return to our video. Kantbot is trolling—hard. There’s obvious irony in the claim that Trump will “complete the system of German Idealism,” the philosophical network that began with Immanuel Kant’s Critique of Pure Reason (1781) and ended (at least on Kantbot’s account) only in the 1840s with Friedrich Schelling’s philosophy of mythology. Kant is best known for having cut a middle path between empiricism and rationalism. He argued that our knowledge is spontaneous and autonomous, not derived from what we observe but combined with that observation and molded into a nature that is distinctly ours, a nature to which we “give the law,” set off from a world of “things in themselves” about which we can never know anything. This philosophy touched off what G.W.F. Hegel called a “revolution,” one that extended to every area of human knowledge and activity. History itself, Hegel would famously claim, was the forward march of spirit, or Geist, the logical unfolding of self-differentiating concepts that constituted nature, history, and institutions (including the state). Schelling, Hegel’s one-time roommate, had deep reservations about this triumphalist narrative, reserving a place for the irrational, the unseen, the mythological, in the process of history. Hegel, according to a legend propagated by his students, finished his 1807 Phenomenology of Spirit while listening to the guns of the battle of Jena-Auerstedt, where Napoleon defeated the Prussians and brought a final end to the Holy Roman Empire. Hegel saw himself as the philosopher of Napoleon’s moment, at least in 1807; Kantbot sees himself as the Hegel to Donald Trump’s Napoleon (more on this below).

    Rumor has it that Kantbot is an accountant in NYC, although no one has been able to doxx him yet. His Twitter account has more than 26,000 followers at the time of writing. This modest fame is complemented by a deep lateral network among the biggest stars on the Far Right. To my eye he has made little progress in the last year, either in gaining fame or in developing his theory, on which he has recently promised a book “soon.” Conservative media reported that he was interviewed by the FBI in 2018. His newest line of thought involves “hate hoaxes” and questioning why he can’t say the n-word—a regression to platitudes of the extremist Right that have been around for decades, as David Neiwert has extensively documented (Neiwert 2017). Sprinkled between these are exterminationist fantasies—about “Spinozists.” He toggles between conspiracy, especially of the false-flag variety, hate-speech-flirtation, and analysis. He has recently started a podcast. The whole presentation is saturated in irony and deadly serious:

    Asked how he identifies politically, Kantbot recently claimed to be a “Stalinist, a TERF, and a Black Nationalist.” Mike Cernovich, the Alt Right leader who runs the website Danger and Play, has been known to ask Kantbot for advice. There is also an indirect connection between Kantbot and “Neoreaction” or NRx, a brand of “accelerationism” which itself is only blurrily constituted by the blog-work of Curtis Yarvin, aka Mencius Moldbug, and enthusiasm for the philosophy of Nick Land (another reader of Kant). Kantbot also “debated” White Nationalist thought leader Richard Spencer, presenting the spectacle of Spencer, who wrote a Masters thesis on Adorno’s interpretation of Wagner, listening thoughtfully to Kantbot’s explanation of Kant’s rejection of Johann Gottfried Herder, rather than the body count, as the reason to reject Marxism.

    When conservative pundit Ann Coulter got into a Twitter feud with Delta over a seat reassignment, Kantbot came to her defense. She retweeted the captioned image below, which was then featured on Breitbart News in an article called “Zuckerberg 2020 Would be a Dream Come True for Republicans.”

    Kantbot’s partner-in-crime, @logo-daedalus (the very young guy in the maroon hat in the video), has recently jumped on a minor fresh wave of ironist political memeing in support of the UBI-focused presidential candidate Andrew Yang (#yanggang). He was once asked by Cernovich if he had read Michael Walsh’s book, The Devil’s Pleasure Palace: The Cult of Critical Theory and the Subversion of the West:

    The autodidact intellectualism of this Alt Right dynamic duo—Kantbot and Logodaedalus—illustrates several roles irony plays in the relationship between media and politics. Kantbot and Logodaedalus see themselves as the avant-garde of a counterculture on the brink of a civilizational shift, participating in the sudden proliferation of “decline of the West” narratives. They alternate targets on Twitter, and think of themselves as “producers of content” above all. To produce content, according to them, is to produce ideology. Kantbot is singularly obsessed with the period between about 1770 and 1830 in Germany. He thinks of this period as the source of all subsequent intellectual endeavor, the only period of real philosophy—a thesis he shares with Slavoj Žižek (Žižek 1993).

    This notion has been treated monographically by Eckart Förster in The Twenty-Five Years of Philosophy, a book Kantbot listed in May of 2017 under “current investigations.” His twist on the thesis is that German Idealism is saturated in a form of irony. German Idealism never makes culture political as such. Politics comes from a culture that’s more capacious than any politics, so any relation between the two is refracted by a deep difference that appears, when they are brought together, as irony. Marxism, and all that proceeds from Marxism, including contemporary Leftism, is a deviation from this path.


    This reading of German Idealism is a search for the metaphysical origins of a common conspiracy theory in the Breitbart wing of the Right called “cultural Marxism” (the idea predates Breitbart: see Jay 2011; Huyssen 2017; Berkowitz 2003. Walsh’s 2017 The Devil’s Pleasure Palace, which LogoDaedalus mocked to Cernovich, is one of the touchstones of this theory). Breitbart’s own account states that there is a relatively straight line from Hegel’s celebration of the state to Marx’s communism to Woodrow Wilson’s and Franklin Delano Roosevelt’s communitarianism—and on to the critical theory of Theodor W. Adorno and Herbert Marcuse (this is the actual “cultural Marxism,” one supposes), Saul Alinsky’s community organizing, and (surprise!) Barack Obama’s as well (Breitbart 2011, 105-37). The phrase “cultural Marxism” is a play on the Nazi phrase “cultural Bolshevism,” a conspiracy theory that targeted Jews as alleged spies and collaborators of Stalin’s Russia. The anti-Semitism is only slightly more concealed in the updated version. The idea is that Adorno and Marcuse took control of the cultural matrix of the United States and made the country “culturally communist.” In this theory, individual freedom is always second to an oppressive community in the contemporary US. Between Breitbart’s adoption of critical theory and NRx (see Haider 2017; Beckett 2017; Noys 2014)—not to mention the global expansion of this family of theories by figures like Carvalho—it’s clear that the “Alt Right” is a theory-deep assemblage. The theory is never just analysis, though. It’s always a question of intervention, or media manipulation (see Marwick and Lewis 2017).

    Breitbart himself liked to capture this blend in his slogan “politics is downstream from culture.” Breitbart’s news organization implicitly cedes the theoretical point to Adorno and Marcuse, trying to build cultural hegemony in the online era. Reform the cultural, dominate the politics—all on the basis of narrative and media manipulation. For the Alt Right, politics isn’t “online” or “not,” but will always be both.

    In mid-August of 2017, a flap in the National Security Council was caused by a memo, probably penned by staffer Rich Higgins (who reportedly has ties to Cernovich), that appeared to accuse then National Security Adviser, H. R. McMaster, of supporting or at least tolerating Cultural Marxism’s attempt to undermine Trump through narrative (see Winter and Groll 2017). Higgins and other staffers associated with the memo were fired, a fact which Trump learned from Sean Hannity and which made him “furious.” The memo, about which the president “gushed,” defines “the successful outcome of cultural Marxism [as] a bureaucratic state beholden to no one, certainly not the American people. With no rule of law considerations outside those that further deep state power, the deep state truly becomes, as Hegel advocated, god bestriding the earth” (Higgins 2017). Hegel defined the state as the goal of all social activity, the highest form of human institution or “objective spirit.” Years later, it is still Trump vs. the state, in its belated thrall to Adorno, Marcuse, and (somehow) Hegel. Politics is downstream from German Idealism.

    Kantbot’s aspiration was to expand and deepen the theory of this kind of critical manipulation of the media—but he wants to rehabilitate Hegel. In Kantbot’s work we begin to glimpse how irony plays a role in this manipulation. Irony is play with the very possibility of signification in the first place. Inflected through digital media—code and platform—it becomes not just play but its own expression of the interface between culture and politics, overlapping with one of the driving questions of the German cultural renaissance around 1800. Kantbot, in other words, diagnosed and (at least at one time) aspired to practice a particularly sophisticated combination of rhetorical and media theory as political speech in social media.

    Consider this tweet:



    After the Clinton campaign denounced its use in 2016 and the Anti-Defamation League took the extraordinary step of adding the meme to its Hate Database, the previously innocuous webcomic frog Pepe gained a kind of cult status. Kantbot’s reading of the phenomenon is that the “point is demonstration of power to control meaning of sign in modern media environment.” If this sounds like French Theory, then one “Johannes Schmitt” (whose profile thumbnail appears to be an SS officer) agrees. “Starting to sound like Derrida,” he wrote. To which Kantbot responds, momentously: “*schiller.”



    The asterisk-correction contains multitudes. Kantbot is only too happy to jettison the “theory,” but insists that the manipulation of the sign in its relation to the media environment maintains and alters the balance between culture and politics. Friedrich Schiller, whose classical aesthetic theory claims just this, is a recurrent figure for Kantbot. The idea, it appears, is to create a culture that is beyond politics and from which politics can be downstream. To that end, Kantbot opened his own online venue, the “Autistic Mercury,” named after Der teutsche Merkur, one of the German Enlightenment’s central organs.[iv] For Schiller, there was a “play drive” that mediated between “form” and “content” drives. It preserved the autonomy of art and culture and had the potential to transform the political space, but only indirectly. Kantbot wants to imitate the composite culture of the era of Kant, Schiller, and Hegel—just as they built their classicism on Johann Winckelmann’s famous doctrine that an autonomous and inimitable culture must be built on imitation of the Greeks. Schiller was suggesting that art could prevent another post-revolutionary Terror like the one that had engulfed France. Kantbot is suggesting that the metaphysics of communication—signs as both rhetoric and mediation—could resurrect a cultural vitality that got lost somewhere along the path from Marx to the present. Donald Trump is the instrument of that transformation, but its full expression requires more than DC politics. It requires (online) culture of the kind the campaign unleashed but the presidency has done little more than to maintain. (Kantbot uses Schiller for his media analysis too, as we will see.) Spencer and Kantbot agreed during their “debate” that perhaps Trump had done enough before he was president to justify the disappointing outcomes of his actual presidency. Conservative policy-making earns little more than scorn from this crowd, if it is detached from the putative real work of building the Alt Right avant-garde.



    According to one commenter on YouTube, Kantbot is “the troll philosopher of the kek era.” Kek is the god of the trolls. His name is based on a transposition of the letters LOL in the massively-multiplayer online role-playing game World of Warcraft. “KEK” is what the enemy sees when you laugh out loud to someone on your team, in an intuitively crackable code that was made into an idol to worship. Kek—a half-fake demi-God—illustrates the balance between irony and ontology in the rhetorical media practice known as trolling.


    The name of the idol, it turned out, was also the name of an actual ancient Egyptian demi-god (KEK), a phenomenon that confirmed his divine status, in an example of so-called “meme magic.” Meme magic is when—often by praying to KEK or relying on a numerological system based on the random numbers assigned to users of 4Chan and other message boards—something that exists only online manifests IRL, “in real life” (Burton 2016). Examples include Hillary Clinton’s illness in the late stages of the campaign (widely and falsely rumored—e.g. by Cernovich—before a real yet minor illness was confirmed), and of course Donald Trump’s actual election. Meme magic is everywhere: it names the channel between online and offline.

    Meme magic is both drenched in irony and deeply ontological. What is meant is just “for the lulz,” while what is said is magic. This is irony of the rhetorical kind—right up until it works. The case in point is the election, where the result, and whether the trolls helped, hovers between reality and magic. First there is meme generation, usually playfully ironic. Something happens that resembles the meme. Then the irony is retroactively assigned a magical function. But statements about meme magic are themselves ironic. They use the contradiction between reality and rhetoric (between Clinton’s predicted illness and her actual pneumonia) as the generator of a second-order irony (the claim that Trump’s election was caused by memes is itself a meme). It’s tempting to see this just as a juvenile game, but we shouldn’t dismiss the way the irony scales between the different levels of content-production and interpretation. Irony is rhetorical and ontological at once. We shouldn’t believe in meme magic, but we should take this recursive ironizing function very seriously indeed. It is this kind of irony that Kantbot diagnoses in Trump’s manipulation of the media.

    ii. Coding Irony: Friedrich Schlegel, Claude Shannon, and Twitter

    The ongoing inability of the international press to cover Donald Trump in a way that measures the impact of his statements rather than their content stems from this use of irony. We’ve gotten used to fake news and hyperbolic tweets—so used to these that we’re missing the irony that’s built in. Every time Trump denies something about collusion or says something about the coal industry that’s patently false, he’s exploiting the difference between two sets of truth-valuations that conflict with one another (e.g. racism and pacifism). That splits his audience—something that the splitting of the message in irony allows—and works both to fight his “enemies” and to build solidarity in his base. Trump has changed the media’s overall expression, making not his statements but the very relation between content and platform ironic. This objective form of media irony is not to be confused with “wit.” Donald Trump is not “witty.” He is, however, a master of irony as a tool for manipulation built into the way digital media allow signification to occur. He is the master of an expanded sense of irony that runs throughout the history of its theory.

    When White Nationalists descended on Charlottesville, Virginia, on August 11, 2017, leading to the death of one counter-protester the next day, Trump dragged his feet in naming “racism.” He did, eventually, condemn the groups by name—prefacing his statements with a short consideration of the economy, a dog-whistle about what comes first (actually racism, for which “economy” has become an erstwhile cipher). In the interim, however, his condemnations of violence “as such” led Spencer to tweet this:

    Of course, two days later, Trump would explicitly blame the “Alt Left” for violence it did not commit. Before that, however, Spencer’s irony here relied on Trump’s previous—malicious—irony. By condemning “all” violence when only one kind of violence was at issue, Trump was attempting to split the signal of his speech. The idea was to let the racists know that they could continue through condemnation of their actions that pays lip service to the non-violent ideal of the liberal media. Spencer gleefully used the internal contradiction of Trump’s speech, calling attention to the side of the message that was supposed to be “hidden.” Even the apparently non-ironic condemnation of “both sides” exploited a contradiction not in the statement itself, but in the way it is interpreted by different outlets and political communities. Trump’s invocation of the “Alt Left” confirmed the suspicions of those on the Right, panicked the Center, and all but forced the Left to adopt the term. The filter bubbles, meanwhile, allowed this single message to deliver contradictory meanings on different news sites—one reason headlines across the political spectrum are often identical as statements, but opposite in patent intent. Making the dog whistle audible, however, doesn’t spell the “end of the ironic Nazi,” as Brian Feldman commented (Feldman 2017). It just means that the irony isn’t opposed to but instead part of the politics. Today this form of irony is enabled and constituted by digital media, and it’s not going away. It forms an irreducible part of the new political situation, one that we ignore or deny at our own peril.

    Irony isn’t just intentional wit, in other words—as Quintilian already knew. One reason we nevertheless tend to confuse wit and irony is that the expansion of irony beyond the realm of rhetoric—usually dated to Romanticism, which also falls into Kantbot’s period of obsession—made irony into a category of psychology and style. Most treatments of irony take this as an assumption: modern life is drenched in the stuff, so it isn’t “just” a trope (Behler 1990). But it is a feeling, one that you get from Weird Twitter but also from the constant stream of Facebook announcements about leaving Facebook. Quintilian already points the way beyond this gestural understanding. The problem is the source of the contradiction. It is not obvious what allows for contradiction, where it can occur, what conditions satisfy it, and thus form the basis for irony. If the source is dynamic, unstable, then the concept of irony, as Paul de Man pointed out long ago, is not really a concept at all (de Man 1996).

    The theoretician of irony who most squarely accounts for its embeddedness in material and media conditions is Friedrich Schlegel. In nearly all cases, Schlegel writes, irony serves to reinforce or sharpen some message by means of the reflexivity of language: by contradicting the point, it calls it that much more vividly to mind. (Remember when Trump said, in the 2016 debates, that he refused to invoke Bill Clinton’s sexual history for Chelsea’s sake?) But there is another, more curious type:

    The first and most distinguished [kind of irony] of all is coarse irony; to be found most often in the actual nature of things and which is one of its most generally distributed substances [in der wirklichen Natur der Dinge und ist einer ihrer allgemein verbreitetsten Stoffe]; it is most at home in the history of humanity (Schlegel 1958-, 368).





    In other words, irony is not merely the drawing of attention to formal or material conditions of the situation of communication, but also a widely distributed “substance” or capacity in material. Twitter irony finds this substance in the platform and its underlying code, as we will see. If irony is both material and rhetorical, this means that its use is an activation of a potential in the interface between meaning and matter. This could allow, in principle, an intervention into the conditions of signification. In this sense, irony is the rhetorical term for what we could call coding, the tailoring of language to channels in technologies of transmission. Twitter reproduces an irony that is built into any attempt to code language, as we are about to see. And it’s the overlap of code, irony, and politics that Kantbot marshals Hegel to address.

    Coded irony—irony that is both rhetorical and digitally enabled—exploded onto the political scene in 2016 through Twitter. Twitter was the medium through which the political element of the messageboards has broken through (not least because of Trump’s nearly 60 million followers, even if nearly half of them are bots). It is far from the only politicized social medium, as a growing literature is describing (Phillips and Milner 2017; Phillips 2016; Milner 2016; Goerzen 2017). But it has been a primary site of the intimacy of media and politics over the course of 2016 and 2017, and I think that has something to do with Twitter itself, and with the relationship between encoded communications and irony.

    Take this retweet, which captures a great deal about Twitter:

    “Kim Kierkegaardashian,” or @KimKierkegaard, joined Twitter in June 2012 and has about 259,000 followers at the time of writing. The account mashes up Kardashian’s self- and brand-sales oriented tweet style with the proto-existentialism of Søren Kierkegaard. Take, for example, an early tweet from 8 July 2012: “I have majorly fallen off my workout-eating plan! AND it’s summer! But to despair over sin is to sink deeper into it.” The account sticks close to Kardashian’s actual tweets and Kierkegaard’s actual words. In the tweet above, from April 2017, @KimKierkegaard has retweeted Kardashian herself incidentally formulating one of Kierkegaard’s central ideas in the proprietary language of social media. “Omg” as shorthand takes the already nearly entirely secular phrase “oh my god” and collapses any trace of transcendence. The retweet therefore returns us to the opposite extreme, in which anxiety points us to the finitude of human existence in Kierkegaard. If we know how to read this, it is a performance of that other Kierkegaardian bellwether, irony.

    If you were to encounter Kardashian’s tweet without the retweet, there would be no irony at all. In the retweet, the tweet is presented as an object and resignified as its opposite. Note that this is a two-way street: until November 2009, there were no retweets. Before then, one had to type “RT” and then paste the original tweet in. Twitter responded, piloting a button that allows the re-presentation of a tweet (Stone 2009). This has vastly contributed to the sense of irony, since the speaker is also split between two sources, such that many accounts have some version of “RTs not endorsements” in their description. Perhaps political scandal is so often attached to RTs because the source as well as the content can be construed in multiple different and often contradictory ways. Schlegel would have noted that this is a case where irony swallows the speaker’s authority over it. That situation was forced into the code by the speech, not the other way around.

    I’d like to call the retweet a resignificatory device, as distinct from an amplificatory one. Amplificatory signaling cannibalizes a bit of redundancy in the algorithm: the more times your video has been seen on YouTube, the more likely it is to be recommended (although the story is more complicated than that). Retweets certainly amplify the original message, but they also reproduce it under another name. They have the ability to resignify—as the “repost” function on Facebook also does, to some extent.[v] Resignificatory signaling takes the unequivocal messages at the heart of the very notion of “code” and makes them rhetorical, while retaining their visual identity. Of course, no message is without an effect on its receiver—a point that information theory made long ago. But the apparent physical identity of the tweet and the retweet forces the rhetorical aspect of the message to the fore. In doing so, it draws explicit attention to the deep irony embedded in encoded messages of any kind.

    Twitter was originally written in the object-oriented programming language and model-view-controller (MVC) framework Ruby on Rails, and the code matters. Object-oriented languages allow any term to be treated either as an object or as an expression, making Shannon’s observations on language operational.[vi] The retweet is an embedding of this ability to switch any term between these two basic functions. We can do this in language, of course (that’s why object-oriented languages are useful). But when the retweet is presented not as copy-pasted but as a visual reproduction of the original tweet, the expressive nature of the original tweet is made an object, imitating the capacity of the coding language. In other words, Twitter has come to incorporate the object-oriented logic of its programming language in its capacity to signify. At the level of speech, anything can be an object on Twitter—on your phone, you literally touch it and it presents itself. Most things can be resignified through one more touch, and if not they can be screencapped and retweeted (for example, the number of followers one has, a since-deleted tweet, etc.). Once something has come to signify in the medium, it can be infinitely resignified.
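    The expression/object switch described above can be sketched in a few lines of code. This is a minimal illustration only, not Twitter's actual data model: the class names, authors, and tweet text are hypothetical. The point is that a retweet wraps the original expression as a data object, leaving its words visually identical while giving the statement a second source.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tweet:
    """A statement as expression: an author saying something."""
    author: str
    text: str

@dataclass(frozen=True)
class Retweet:
    """A statement as object: a new speaker presenting an old expression."""
    author: str       # the retweeting speaker
    original: Tweet   # the original expression, now treated as an object

# Hypothetical example modeled on the @KimKierkegaard case in the text.
kim = Tweet(author="KimKardashian", text="omg anxiety attack")
rt = Retweet(author="KimKierkegaard", original=kim)

# The words are identical...
assert rt.original.text == kim.text
# ...but the statement now has a second source, so the same words
# can be resignified, even inverted, without changing a character.
assert rt.author != rt.original.author
```

The design choice mirrors the essay's claim: nothing in the original tweet changes, yet wrapping it as an object splits the speaker in two, which is where the irony enters.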

    When, as in a retweet, an expression is made into an object of another expression, its meaning is altered. This is because its source is altered. A statement of any kind requires the notion that someone has made that statement. This means that a retweet, by making an expression into an object, exemplifies the contradiction between subject and object—the very contradiction on which Kant had based his revolutionary philosophy. Twitter is fitted, and has been throughout its existence retrofitted, to generalize this speech situation. It is the platform of the subject-object dialectic, as Hegel might have put it. By presenting subject and object in a single statement—the retweet as expression and object all at once—Twitter embodies what rhetorical theory has called irony since the ancients. It is irony as code. This irony resignifies and amplifies the rhetorical irony of the dog whistle, the troll, the President.

    Coding is an encounter between two sets of material conditions: the structure of a language, and the capacity of a channel. This was captured in truly general form for the first time in Claude Shannon’s famous 1948 paper, “A Mathematical Theory of Communication,” in which he diagrams a general communication system: an information source passing a message to a transmitter, which sends a signal through a channel (joined by a noise source) to a receiver and on to its destination.

    Shannon’s achievement was a general formula for the relation between the structure of the source and the noise in the channel.[vii] If the set of symbols can be fitted to signals complex or articulated enough to arrive through the noise, then nearly frictionless communication can be engineered. The source—his preferred example was written English—had a structure that limited its “entropy.” If you’re looking at one letter in English, for example, and you have to guess what the next one will be, you theoretically have 27 choices (the 26 letters plus a space). But the likelihood, if the letter you’re looking at is, for example, “q,” that the next letter will be “u” is very high. The likelihood for “x” is extremely low. The higher likelihood is called “redundancy,” a limitation on the absolute measure of chaos, or entropy, that the number of elements imposes. No source for communication can be entirely random, because without patterns of one kind or another we can’t recognize what’s being communicated.[viii]
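    Shannon’s “q is almost always followed by u” observation can be checked mechanically with a digram count. A toy sketch (the corpus and the function name are illustrative, not Shannon’s):

```python
from collections import Counter

# Count adjacent letter pairs (digrams) in a small sample of English.
corpus = "the quick queen quietly quoted a quaint question about quality"
pairs = Counter(zip(corpus, corpus[1:]))

def p_next(letter, following):
    """Conditional probability that `following` comes after `letter`,
    estimated from the digram counts of this corpus."""
    after = {b: n for (a, b), n in pairs.items() if a == letter}
    total = sum(after.values())
    return after.get(following, 0) / total if total else 0.0

print(p_next("q", "u"))  # high: every "q" in this corpus is followed by "u"
print(p_next("q", "x"))  # zero here, and vanishingly rare in English at large
```

    This conditional structure is exactly what Shannon means by redundancy in the source: given “q,” the next symbol is barely a choice at all.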

    We tend to confuse entropy and the noise in the channel, and it is crucial to see that they are not the same thing. The channel is noisy, while the source is entropic. There is, of course, entropy in the channel—everything is subject to the second law of thermodynamics, without exception. But “entropy” is not in any way comparable to noise in Shannon, because “entropy” is a way of describing the conditional restraints on any structured source for communication, like the English language, the set of ideas in the brain, or what have you. Entropy is a way to describe the opposite of redundancy in the source; it expresses probability rather than the slow disintegration, the “heat death,” with which it is usually associated.[ix] If redundancy = 1, we have a kind of absolute rule or pure pattern. Redundancy works syntactically, too: “then” or “there” after the phrase “see you” is a high-level redundancy that is coded into SMS services.

    This is what Shannon calls a “conditional restraint” on the theoretical absolute entropy (based on number of total parts), or freedom in choosing a message. It is also the basis for autocorrect technologies, which obviously have semantic effects, as the genre of autocorrect bloopers demonstrates.

    A large portion of Shannon’s paper is taken up with calculating the redundancy of written English, which he determines to be nearly 50%, meaning that half the letters in most sentences can be removed or distorted without disturbing our ability to understand them.[x]
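    Shannon’s relative redundancy is one minus the ratio of a source’s actual entropy to its maximum possible entropy. The following is a hedged sketch using only single-letter frequencies (a unigram model, which understates his ~50% figure—that estimate requires longer-range statistics—but the mechanics are the same):

```python
import math
from collections import Counter

def entropy_per_letter(text):
    """Shannon entropy H = -sum(p * log2(p)) over symbol frequencies."""
    counts = Counter(text)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

sample = "see you then see you there see you soon"
alphabet_size = len(set(sample))      # symbols actually in use (letters + space)
h_max = math.log2(alphabet_size)      # entropy if all symbols were equiprobable
h = entropy_per_letter(sample)        # actual entropy of this patterned source
redundancy = 1 - h / h_max            # Shannon's relative redundancy

print(f"H = {h:.2f} bits/letter, H_max = {h_max:.2f}, redundancy = {redundancy:.0%}")
```

    Because the sample repeats heavily, its measured entropy falls below the maximum and the redundancy comes out strictly between 0 and 1—the in-between condition that, as the essay argues below, every communicable message occupies.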

    The general process of coding, by Shannon’s lights, is a manipulation of the relationship between the structure of the source and the capacity of the channel as a dynamic interaction between two sets of evolving rules. Shannon’s statement that the “semantic aspects” of messages were “irrelevant to the engineering problem” has often been taken to mean he played fast and loose with the concept of language (see Hayles 1999; but see also Liu 2010; and for the complex history of Shannon’s reception Floridi 2010). But rarely does anyone ask exactly what Shannon did mean, or at least conceptually sketch out, in his approach to language. It’s worth pointing to the crucial role that source-structure redundancy plays in his theory, since it cuts close to Schlegel’s notion of material irony.

    Neither the source nor the channel is static. The scene of coding is open to restructuring at both ends. English is evolving; even its statistical structure changes over time. The channels, and the codes used to fit the source to them, are evolving too. There is no guarantee that integrated circuits will remain the hardware of the future. They did not yet exist when Shannon published his theory.

    This point can be hard to see in today’s world, where we encounter opaque packets of already-established code at every turn. It was easier to see for Shannon and those who followed him, since nothing was standardized, let alone commercialized, in 1948. But no amount of stack accretion can change the fact that mediated communication rests on the dynamic relation between relative entropy in the source and the way the channel is built.

    Redundancy points to this dynamic by its very nature. If there is absolute redundancy, nothing is communicated, because we already know the message with 100% certainty. With no redundancy, no message arrives at all. In between these two extremes, messages are internally objectified or doubled, but differ slightly from one another, in order to be communicable. In other words, every interpretable signal is a retweet. Redundancy, which stabilizes communicability by providing pattern, also ensures that the rules are dynamic. There is no fully redundant message. Every message is between 0 and 1, and this is what allows it to function as expression or object. Twitter imitates the rules of source structure, showing that communication is the locale where formal and material constraints encounter one another. It illustrates this principle of communication by programming it into the platform as a foundational principle. Twitter exemplifies the dynamic situation of coding as Shannon defined it. Signification is resignification.

    If rhetoric is embedded this deeply into the very notion of code, then it must possess the capacity to change the situation of communication, as Schlegel suggested. But it cannot do this by fiat or by meme magic. The retweeted “this anxiety omg” hardly stands to change the statistical structure of English much. It can, however, point to the dynamic material condition of mediated signification in general, something Warren Weaver, who wrote a popularizing introduction to Shannon’s work, acknowledged:

    anyone would agree that the probability is low for such a sequence of words as “Constantinople fishing nasty pink.” Incidentally, it is low, but not zero; for it is perfectly possible to think of a passage in which one sentence closes with “Constantinople fishing,” and the next begins with “Nasty pink.” And we might observe in passing that the unlikely four-word sequence under discussion has occurred in a single good English sentence, namely the one above. (Shannon and Weaver 1964, 11)

    There is no further reflection in Weaver’s essay on this passage, but then, that is the nature of irony. By including the phrase “Constantinople fishing nasty pink” in the English language, Weaver has shifted its entropic structure, however slightly. This shift is marginal to our ability to communicate (I am amplifying it very slightly right now, as all speech acts do), but some shifts are larger-scale, like the introduction of a word or concept, or the rise of a system of notions that orient individuals and communities (ideology). These shifts always have the characteristic that Weaver points to here, which is that they double as expressions and objects. This doubling is a kind of generalized redundancy—or capacity for irony—built into semiotic systems, material irony flashing up into the rhetorical irony it enables. That is a Romantic notion enshrined in a founding document of the digital age.

    Now we can see one reason that retweeting is often the source of scandal. A retweet or repetition of content ramifies the original redundancy of the message and fragments the message’s effect. This is not to say it undermines that effect. Instead, it uses the redundancy in the source and the noise in the channel to split the message according to any one of the factors that Quintilian announced: speaker, audience, context. In the retweet, this effect is distributed across more than one of these areas, producing more than one contrary item, or internally multiple irony. Take Trump’s summer 2016 tweet of this anti-Semitic attack on Clinton—not a proper retweet, but a resignification of the same sort:



    The scandal that ensued mostly involved the source of the original content (white supremacists), and Trump skated through the incident by claiming that it wasn’t anti-Semitic anyway, it was a sheriff’s star, and that he had only “retweeted” the content. In disavowing the content in separate and seemingly contradictory ways,[xi] he signaled to his base that he was still committed to it, while maintaining at the level of statement that he wasn’t. The effect was repeated again and again, and is a fundamental part of our government now. Trump’s positions are neither new nor interesting. What’s new is the way he amplifies his rhetorical maneuvers in social media. It is the exploitation of irony—not wit, not snark, not sarcasm—at the level of redundancy to maintain a signal that is internally split in multiple ways. This is not bad faith or stupidity; it’s an invasion of politics by irony. It’s also a kind of end to the neoliberal speech regime.

    iii. Irony and Politics after 2016, or Uncommunicative Capitalism

    The channel between speech and politics is open—again. That channel is saturated in irony, of a kind we are not used to thinking about. In 2003, following what were widely billed as the largest demonstrations in the history of the world, with tens of millions gathering in the streets globally to resist the George W. Bush administration’s stated intent to go to war, the United States did just that, invading Iraq on 20 March of that year. The consequences of that war have yet to be fully assessed. But while it is clear that we are living in its long foreign policy shadow, the seemingly momentous events of 2016 echo 2003 in a different way. 2016 was the year that blew open the neoliberal pax between the media, speech, and politics.

    No amount of noise could prevent the invasion of Iraq. As Jodi Dean has shown, “communicative capitalism” ensured that the circulation of signs was autotelic, proliferating language and ideology sealed off from the politics of events like war or even domestic policy. She writes that:

    In communicative capitalism, however, the use value of a message is less important than its exchange value, its contribution to a larger pool, flow or circulation of content. A contribution need not be understood; it need only be repeated, reproduced, forwarded. Circulation is the context, the condition for the acceptance or rejection of a contribution… Some contributions make a difference. But more significant is the system, the communicative network. (Dean 2005, 56)

    This situation no longer entirely holds. Dean’s brilliant analysis—along with those of many others who diagnosed the situation of media and politics in neoliberalism (e.g. Fisher 2009; Liu 2004)—forms the basis for understanding what we are living through and in now, even as the situation has changed. The notion that the invasion of Iraq could have been stopped by the protests recalls the New Left’s optimism, in the 1960s and after, about speech’s effect on national politics (raising the important question of whether the parallel protests against the Vietnam War played a causal role in its end). That model of speech is no longer entirely in force. Dean’s notion of a kind of metastatic media with few if any contributions that “make a difference” politically has yielded to a concerted effort to break through that isolation, to manipulate the circulatory media to make a difference. We live with communicative capitalism, but added to it is the possibility of complex rhetorical manipulation, a political possibility that resides in the irony of the very channels that made capitalism communicative in the first place.

    We know that authoritarianism engages in a kind of double-speak, talks out of “both sides of its mouth,” uses the dog whistle. It might be unusual to think of this set of techniques as irony—but I think we have to. Trump doesn’t just dog-whistle; he sends cleanly separate messages to differing effect through the same statement, as he did after Charlottesville. This technique keeps the media he is so hostile to on the hook, since their click rates are dependent on covering whatever extreme statement he has made that day. The constant and confused coverage this led to was then a separate signal sent through the same line—by means of the contradiction between humility and vanity, and between content and effect—to his own followers. In other words, he doesn’t use Twitter only to amplify his message, but to resignify it internally. Resignificatory media allow irony to create a vector of efficacy through political discourse. That is not exactly “communicative capitalism,” but something more like the field-manipulations recently described by Johanna Drucker: affective, indirect, non-linear (Drucker 2018). Irony happens to be the tool that is not instrumental, a non-linear weapon, a kind of material-rhetorical wave one can ride but not control. As Quinn Slobodian has been arguing, we have in no way left the neoliberal era in economics. But perhaps we have left its speech regime behind. If so, that is a matter of strategic urgency for the Left.

    iv. Hegelian Media Theory

    The new Right is years ahead on this score, in practice but also in analysis. In one of the first pieces in what has become a truly staggering wave of coverage of the NRx movement, Rosie Gray interviewed Kantbot extensively (Gray 2017). Gray’s main target was the troll Mencius Moldbug (Curtis Yarvin), whose political philosophy blends the Enlightenment absolutism of Frederick the Great with a kind of avant-garde corporatism in which the state is run not on the model of a corporation but as a corporation. On the Alt Right, the German Enlightenment is unavoidable.

    In his prose, Kantbot can be quite serious, even theoretical. He responded to Gray’s article in a Medium post with a long quotation from Schiller’s 1784 “The Theater as Moral Institution” as its epigraph (Kantbot 2017b). For Schiller, one had to imitate the literary classics to become inimitable. And he thought the best means of transmission would be the theater, with its live audience and electric atmosphere. The Enlightenment theater, as Kantbot writes, “was not only a source of entertainment, but also one of radical political education.”

    Schiller argued that the stage educated more deeply than secular law or morality, that its horizon extended farther into the true vocation of the human. Culture educates where the law cannot. Schiller, it turns out, also thought that politics is downstream from culture. Kantbot finds, in other words, a source in Enlightenment literary theory for Breitbart’s signature claim. That means that narrative is crucial to political control. But Kantbot extends the point from narrative to the medium in which narrative is told.

    Schiller gives us reason to think that the arrangement of the medium—its physical layout, the possibilities but also the limits of its mechanisms of transmission—is also crucial to cultural politics (this is why it makes sense to him to replace a follower’s reference to Derrida with “*schiller”). He writes that “The theater is the common channel through which the light of wisdom streams down from the thoughtful, better part of society, spreading thence in mild beams throughout the entire state.” Story needs to be embedded in a politically effective channel, and politically-minded content-producers should pay attention to the way that channel works, what it can do that another means of communication—say, the novel—can’t.

    Kantbot argues that social media is the new Enlightenment Stage. When Schiller writes that the stage is the “common channel” for light and wisdom, he’s using what would later become Shannon’s term—in German, der Kanal. Schiller thought the channel of the stage was suited to tempering barbarisms (both unenlightened “savagery” and post-enlightened Terrors like Robespierre’s). For him, story in the proper medium could carry information and shape habits and tendencies, influencing politics indirectly, eventually creating an “aesthetic state.” That is the role that social media have today, according to Kantbot. In other words, the constraints of a putatively biological gender or race are secondary to their articulation through the utterly complex web of irony-saturated social media. Those media allow the categories in the first place, but are so complex as to impose their own constraint on freedom. For those on the Alt Right, accepting and overcoming that constraint is the task of the individual—even if it is often assigned mostly to non-white or non-male individuals, while white males achieve freedom through complaint. Consistency aside, however, the notion that media form their own constraint on freedom, and the tool for accepting and overcoming that constraint is irony, runs deep.

    Kantbot goes on to use Schiller to critique Gray’s actual article about NRx: “Though the Altright [sic] is viewed primarily as a political movement, a concrete ideology organizing an array of extreme political positions on the issues of our time, I believe that understanding it is a cultural phenomena [sic], rather than a purely political one, can be an equally valuable way of conceptualizing it. It is here that the journos stumble, as this goes directly to what newspapers and magazines have struggled to grasp in the 21st century: the role of social media in the future of mass communication.” It is Trump’s retrofitting of social media—and now the mass media as well—to his own ends that demonstrates, and therefore completes, the system of German Idealism. Content production on social media is political because it is the locus of the interface between irony and ontology, where meme magic also resides. This allows the Alt Right to sync what we have long taken to be a liberal form of speech (irony) with extremist political commitments that seem to conflict with the very rhetorical gesture. Misogyny and racism have re-entered the public sphere. They’ve done so not in spite of but with the explicit help of ironic manipulations of media.

    The trolls sync this transformation of the media with misogynist ontology. Both are construed as constraints in the forward march of Trump, Kek, and culture in general. One disturbing version of the essentialist suggestion for understanding how Trump will complete the system of German Idealism comes from one “Jef Costello” (a troll named for Alain Delon’s character in Jean-Pierre Melville’s 1967 film, Le Samouraï):

    Ironically, Hegel himself gave us the formula for understanding exactly what must occur in the next stage of history. In his Philosophy of Right, Hegel spoke of freedom as “willing our determination.” That means affirming the social conditions that make the array of options we have to choose from in life possible. We don’t choose that array, indeed we are determined by those social conditions. But within those conditions we are free to choose among certain options. Really, it can’t be any other way. Hegel, however, only spoke of willing our determination by social conditions. Let us enlarge this to include biological conditions, and other sorts of factors. As Collin Cleary has written: Thus, for example, the cure for the West’s radical feminism is for the feminist to recognize that the biological conditions that make her a woman—with a woman’s mind, emotions, and drives—cannot be denied and are not an oppressive “other.” They are the parameters within which she can realize who she is and seek satisfaction in life. No one can be free of some set of parameters or other; life is about realizing ourselves and our potentials within those parameters.

    As Hegel correctly saw, we are the only beings in the universe who seek self-awareness, and our history is the history of our self-realization through increased self-understanding. The next phase of history will be one in which we reject liberalism’s chimerical notion of freedom as infinite, unlimited self-determination, and seek self-realization through embracing our finitude. Like it or not, this next phase in human history is now being shepherded by Donald Trump—as unlikely a World-Historical Individual as there ever was. But there you have it. Yes! Donald Trump will complete the system of German Idealism. (Costello 2017)

    Note the regular features of this interpretation: it is a nature-forward argument about social categories, universalist in application, misogynist in structure, and ultra-intellectual. Constraint is shifted not only from the social into the natural, but also back into the social again. The poststructuralist phrase “embracing our finitude” (put into the emphatic italics of Theory) underscores the reversal from semiotics to ontology by way of German Idealism. Trump, it seems, will help us realize our natural places in an old-world order even while pushing the vanguard trolls forward into the utopian future. In contrast to Kantbot’s own content, this reading lacks irony. That is not to say that the anti-Gender Studies and generally viciously misogynist agenda of the Alt Right is not being amplified throughout the globe, as we increasingly hear. But this dry analysis lacks the manipulative capacity that understanding social media in German Idealist terms brings with it. It does not resignify.

    Costello’s understanding is crude compared with that of Kantbot himself. The constraints, for Kantbot, are not primarily those of a naturalized gender, but instead the semiotic or rhetorical structure of the media through which any naturalization flows. The media are not likely, in this vision, to end any gender regimes—but recognizing that such regimes are contingent on representation and the manipulation of signs has never been the sole property of the Left. That manipulation implies a constrained, rather than an absolute, understanding of freedom. This constraint is an important theoretical element of the Alt Right, and in some sense they are correct to call on Hegel for it. Their thinking wavers—again, ironically—between essentialism about things like gender and race, and an understanding of constraint as primarily constituted by the media.

    Kantbot mixes his andrism and his media critique seamlessly. The trolls have some of their deepest roots in internet misogyny, including so-called Men’s Rights Activism and the hashtag #redpill. The red pill that Neo takes in The Matrix to exit the collective illusion is here compared to “waking up” from the “culturally Marxist” feminism that inflects the putative communism that pervades contemporary US culture. Here is Kantbot’s version:

    The tweet elides any difference between corporate diversity culture and the Left feminism that would also critique it, but that is precisely the point. Irony does not undermine (it rather bolsters) serious misogyny. When Angela Nagle’s book, Kill All Normies: Online Culture Wars from 4Chan and Tumblr to Trump and the Alt-Right, touched off a seemingly endless Left-on-Left hot-take war, Kantbot responded with his own review of the book (since taken down). This review contains a plea for a “nuanced” understanding of Elliot Rodger, who killed six people in Southern California in 2014 as “retribution” for women rejecting him sexually.[xii] We can’t allow (justified) disgust at this kind of content to blind us to the ongoing irony—not jokes, not wit, not snark—that enables this vile ideology. In many ways, the irony that persists in the heart of this darkness allows Kantbot and his ilk to take the Left more seriously than the Left takes the Right. Gender is a crucial, but hardly the only, arena in which the Alt Right’s combination of essentialist ontology and media irony is fighting the intellectual Left.

    In the sub-subculture known as Men Going Their Own Way, or MGTOW, the term “volcel” came to prominence in recent years. “Volcel” means “voluntarily celibate,” or entirely ridding one’s existence of the need for or reliance on women. The trolls responded to this term with the notion of an “incel,” someone “involuntarily celibate,” in a characteristically self-deprecating move. Again, this is irony: none of the trolls actually want to be celibate, but they claim a kind of joy in signs by recoding the ridiculous bitterness of the Volcel.

    Literalizing the irony already partly present in this discourse, sometime in the fall of 2016 the trolls started calling the Left—in particular the members of the podcast team Chapo Trap House and the journalist and cultural theorist Sam Kriss (since accused of sexual harassment)—“ironycels.” The precise definition wavers, but seems to be that the Leftists are failures at irony, “irony-celibate,” even “involuntarily incapable of irony.”

    Because the original phrase is split between voluntary and involuntary, this has given rise to reappropriations, for example Kriss’s, in which “doing too much irony” earns you literal celibacy.

    Kantbot has commented extensively, both in articles and on podcasts, on this controversy. He and Kriss have even gone head-to-head.[xiii]




    In the ironycel debate, it has become clear that Kantbot thinks that socialism has kneecapped the Left, but only sentimentally. The same goes for actual conservatism, which has prevented the Right from embracing its new counterculture. Leaving behind old ideologies is a symptom of standing at the vanguard of a civilizational shift. It is that shift that makes sense of the phrase “Trump will Complete the System of German Idealism.”

    The Left, LogoDaedalus intoned on a podcast, is “metaphysically stuck in the Bush era.” I take this to mean that the Left is caught in an endless cycle of recriminations about the neoliberal model of politics, even as that model has begun to become outdated. Kantbot writes, in an article called “Chapo Traphouse Will Never Be Edgy”:

    Capturing the counterculture changes nothing, it is only by the diligent and careful application of it that anything can be changed. Not politics though. When political ends are selected for aesthetic means, the mismatch spells stagnation. Counterculture, as part of culture, can only change culture, nothing outside of that realm, and the truth of culture which is to be restored and regained is not a political truth, but an aesthetic one involving the ultimate truth value of the narratives which pervade our lived social reality. Politics are always downstream. (Kantbot 2017a)

    Citing Breitbart’s motto, Kantbot argues that continents of theory separate him and LogoDaedalus from the Left. That politics is downstream from culture is precisely what Marx—and by extension, the contemporary Left—could not understand. On several recent podcasts, Kantbot has made just this argument, that the German Enlightenment struck a balance between the “vitality of aesthetics” and political engagement that the Left lost in the generation after Hegel.

    Kantbot has decided, against virtually every Hegel reader since Hegel and even against Hegel himself, that the system of German Idealism is ironic in its deep structure. It’s not a move we can afford to take lightly. This irony, generalized as Schlegel would have it, manipulates the formal and meta settings of communicative situations and thus is at the incipient point of any solidarity. It gathers community through mediation even as it rejects those not in the know. It sits at the membrane of the filter bubble, and—correctly used—has the potential to break or reform the bubble. To be clear, I am not saying that Kantbot has done this work. It is primarily Donald Trump, according to Kantbot’s own argument, who has done this work. But this is exactly what it means to play Hegel to Trump’s Napoleon: to provide the metaphysics for the historical moment, which happens to be the moment where social media and politics combine. Philosophy begins only after an early-morning sleepless tweetstorm once again determines a news cycle. Irony takes its proper place, as Schlegel had suggested, in human history, becoming a political weapon meant to manipulate communication.

    Kantbot was the media theorist of Trump’s ironic moment. The channeling of affect is irreducible, but not unchangeable: this is both the result of some steps we can only wish we’d taken in theory and used in politics before the Alt Right got there, and the actual core of what we might call Alt Right Media Theory. When they say “the Left can’t meme,” in other words, they’re accusing the socialist Left of being anti-intellectual about the way we communicate now, about the conditions and possibilities of social media’s amplifications of the capacity called irony that is baked into cognition and speech so deeply that we can barely define it even partially. That would match the sense of medium we get from looking at Shannon again, and the raw material possibility with which Schlegel infused the notion of irony.

    This insight, along with its political activation, might have been the preserve of Western Marxism or the other critical theories that succeeded it. Why have we allowed the Alt Right to pick up our tools?

    Kantbot takes obvious pleasure in the irony of using poststructuralist tools, and claiming in a contrarian way that they really derive from a broadly construed German Enlightenment that includes Romanticism and Idealism. Irony constitutes both that Enlightenment itself, on this reading, and the attitude towards it on the part of the content-producers, the German Idealist Trolls. It doesn’t matter if Breitbart was right about the Frankfurt School, or if the Neoreactionaries are right about capitalism. They are not practicing what Hegel called “representational thinking,” in which the goal is to capture a picture of the world that is adequate to it. They are practicing a form of conceptual thinking, which in Hegel’s terms is that thought that is embedded in, constituted by, and substantially active within the causal chain of substance, expression, and history.[xiv] That is the irony of Hegel’s reincarnation after the end of history.

    In media analysis and rhetorical analysis, we often hear the word “materiality” used as a substitute for durability, something that is not easy to manipulate. What is material, it is implied, is a stabilizing factor that allows us to understand the field of play in which signification occurs. Dean’s analysis of the Iraq War does just this, showing the relationship of signs and politics that undermines the aspirational content of political speech in neoliberalism. It is a crucial move, and Dean’s analysis remains deeply informative. But its type—and even the word “material,” used in this sense—is, not to put too fine a point on it, neo-Kantian: it seeks conditions and forms that undergird spectra of possibility. To this the Alt Right has lodged a Hegelian eppur si muove, borrowing techniques that were developed by Marxists and poststructuralists and German Idealists, and remaking the world of mediated discourse. That is a political emergency in which the humanities have a special role to play—but only if we can dispense with political and academic in-fighting and turn our focus to our opponents. What Mark Fisher once called the “Vampire Castle” of the Left on social media is its own kind of constraint on our progress (Fisher 2013). One solvent for it is irony in the expanded field of social media—not jokes, not snark, but dedicated theoretical investigation and exploitation of the rhetorical features of our systems of communication. The situation of mediated communication is part of the objective conjuncture of the present, one that the humanities and the Left cannot afford to ignore, and cannot avoid by claiming not to participate. The alternative to engagement is to cede the understanding, and quite possibly the curve, of civilization, to the global Alt Right.

    _____

    Leif Weatherby is Associate Professor of German and founder of the Digital Theory Lab at NYU. He is working on a book about cybernetics and German Idealism.

    Back to the essay

    _____

    Notes
    [i] Video here. The comment thread on the video generated a series of unlikely slogans for 2020: “MAKE TRANSCENDENTAL IDENTITY GREAT AGAIN,” “Make German Idealism real again,” and the ideological non sequitur “Make dialectical materialism great again.”

    [ii] Neiwert (2017) tracks the rise of extreme Right violence and media dissemination from the 1990s to the present, and is particularly good on the ways in which these movements engage in complex “double-talk” and meta-signaling techniques, including irony in the case of the Pepe meme.

    [iii] I’m going to use this term throughout, and refer readers to Chip Berlet’s useful resource: I’m hoping this article builds on a kind of loose consensus that the Alt Right “talks out of both sides of its mouth,” perhaps best crystallized in the term “dog whistle.” Since 2016, we’ve seen a lot of regular whistling, bigotry without disguise, alongside the rise of the type of irony I’m analyzing here.

    [iv] There is, in this wing of the Online Right, a self-styled “autism” that stands for being misunderstood and isolated.

    [v] Thanks to Moira Weigel for a productive exchange on this point.

    [vi] See the excellent critique of object-oriented ontologies on the basis of their similarities with object-oriented programming languages in Galloway 2013. Irony is precisely the condition that does not reproduce code representationally, but instead shares a crucial condition with it.

    [vii] The paper is a point of inspiration and constant return for Friedrich Kittler, who uses this diagram to demonstrate the dependence of culture on media, which, as his famous quip goes, “determine our situation.” Kittler 1999, xxxix.

    [viii] This kind of redundancy is conceptually separate from signal redundancy, like the strengthening or reduplicating of electrical impulses in telegraph wires. The latter redundancy is likely the first that comes to mind, but it is not the only kind Shannon theorized.

    [ix] This is because Shannon adopts Ludwig Boltzmann’s probabilistic formula for entropy. The formula certainly suggests the slow simplification of material structure, but this is irrelevant to the communications engineering problem, which exists only so long as there are the very complex structures called humans and their languages and communications technologies.

    [x] Shannon presented these findings at one of the later Macy Conferences, the symposia that founded the movement called “cybernetics.” For an excellent account of what Shannon called “Printed English,” see Liu 2010, 39-99.

    [xi] The disavowal follows Freud’s famous “kettle logic” fairly precisely. In describing disavowal of unconscious drives unacceptable to the ego and its censor, Freud used the example of a friend who returns a borrowed kettle broken, and goes on to claim that 1) it was undamaged when he returned it, 2) it was already damaged when he borrowed it, and 3) he never borrowed it in the first place. Zizek often uses this logic to analyze political events, as in Zizek 2005. Its ironic structure usually goes unremarked.

    [xii] Kantbot, “Angela Nagle’s Wild Ride,” http://thermidormag.com/angela-nagles-wild-ride/, visited August 15, 2017—link currently broken.

    [xiii] Kantbot does in fact write fiction, almost all of which is science-fiction-adjacent retoolings of narrative from German Classicism and Romanticism. The best example is his reworking of E.T.A. Hoffmann’s “A New Year’s Eve Adventure,” “Chic Necromancy,” Kantbot 2017c.

    [xiv] I have not yet seen a use of Louis Althusser’s distinction between representation and “theory” (which relies on Hegel’s distinction) on the Alt Right, but it matches their practice quite precisely.

    _____

    Works Cited

    • Beckett, Andy. 2017. “Accelerationism: How a Fringe Philosophy Predicted the Future We Live In.” The Guardian (May 11).
    • Behler, Ernst. 1990. Irony and the Discourse of Modernity. Seattle: University of Washington.
    • Berkowitz, Bill. 2003. “ ‘Cultural Marxism’ Catching On.” Southern Poverty Law Center.
    • Breitbart, Andrew. 2011. Righteous Indignation: Excuse Me While I Save the World! New York: Hachette.
    • Burton, Tara. 2016. “Apocalypse Whatever: The Making of a Racist, Sexist Religion of Nihilism on 4chan.” Real Life Mag (Dec 13).
    • Costello, Jef. 2017. “Trump Will Complete the System of German Idealism!” Counter-Currents Publishing (Mar 10).
    • de Man, Paul. 1996. “The Concept of Irony.” In de Man, Aesthetic Ideology. Minneapolis: University of Minnesota. 163-185.
    • Dean, Jodi. 2005. “Communicative Capitalism: Circulation and the Foreclosure of Politics.” Cultural Politics 1:1. 51-74.
    • Drucker, Johanna. The General Theory of Social Relativity. Vancouver: The Elephants.
    • Feldman, Brian. 2017. “The ‘Ironic’ Nazi is Coming to an End.” New York Magazine.
    • Fisher, Mark. 2009. Capitalist Realism: Is There No Alternative? London: Zer0.
    • Fisher, Mark. 2013. “Exiting the Vampire Castle.” Open Democracy (Nov 24).
    • Floridi, Luciano. 2010. Information: A Very Short Introduction. Oxford: Oxford.
    • Galloway, Alexander. 2013. “The Poverty of Philosophy: Realism and Post-Fordism.” Critical Inquiry 39:2. 347-66.
    • Goerzen, Matt. 2017. “Notes Towards the Memes of Production.” texte zur kunst (Jun).
    • Gray, Rosie. 2017. “Behind the Internet’s Dark Anti-Democracy Movement.” The Atlantic (Feb 10).
• Haider, Shuja. 2017. “The Darkness at the End of the Tunnel: Artificial Intelligence and Neoreaction.” Viewpoint Magazine.
    • Hayles, N. Katherine. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.
    • Higgins, Richard. 2017. “POTUS and Political Warfare.” National Security Council Memo.
    • Huyssen, Andreas. 2017. “Breitbart, Bannon, Trump, and the Frankfurt School.” Public Seminar (Sep 28).
    • Jay, Martin. 2011. “Dialectic of Counter-Enlightenment: The Frankfurt School as Scapegoat of the Lunatic Fringe.” Salmagundi 168/169 (Fall 2010-Winter 2011). 30-40. Excerpt at Canisa.Org.
• Kantbot (as Edward Waverly). 2017a. “Chapo Traphouse Will Never Be Edgy.”
    • Kantbot. 2017b. “All the Techcomm Blogger’s Men.” Medium.
    • Kantbot. 2017c. “Chic Necromancy.” Medium.
    • Kittler, Friedrich. 1999. Gramophone, Film, Typewriter. Translated by Geoffrey Winthrop-Young and Michael Wutz. Stanford: Stanford University Press.
    • Liu, Alan. 2004. “Transcendental Data: Toward a Cultural History and Aesthetics of the New Encoded Discourse.” Critical Inquiry 31:1. 49-84.
    • Liu, Lydia. 2010. The Freudian Robot: Digital Media and the Future of the Unconscious. Chicago: University of Chicago Press.
    • Marwick, Alice and Rebecca Lewis. 2017. “Media Manipulation and Disinformation Online.” Data & Society.
    • Milner, Ryan. 2016. The World Made Meme: Public Conversations and Participatory Media. Cambridge: MIT.
    • Neiwert, David. 2017. Alt-America: The Rise of the Radical Right in the Age of Trump. New York: Verso.
    • Noys, Benjamin. 2014. Malign Velocities: Accelerationism and Capitalism. London: Zer0.
    • Phillips, Whitney and Ryan M. Milner. 2017. The Ambivalent Internet: Mischief, Oddity, and Antagonism Online. Cambridge: Polity.
    • Phillips, Whitney. 2016. This is Why We Can’t Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture. Cambridge: The MIT Press.
    • Quintilian. 1920. Institutio Oratoria, Book VIII, section 6, 53-55.
    • Schlegel, Friedrich. 1958–. Kritische Friedrich-Schlegel-Ausgabe. Vol. II. Edited by Ernst Behler, Jean Jacques Anstett, and Hans Eichner. Munich: Schöningh.
    • Shannon, Claude, and Warren Weaver. 1964. The Mathematical Theory of Communication. Urbana: University of Illinois Press.
    • Stone, Biz. 2009. “Retweet Limited Rollout.” Press release. Twitter (Nov 6).
    • Walsh, Michael. 2017. The Devil’s Pleasure Palace: The Cult of Critical Theory and the Subversion of the West. New York: Encounter Books.
    • Winter, Jana and Elias Groll. 2017. “Here’s the Memo that Blew Up the NSC.” Foreign Policy (Aug 10).
• Žižek, Slavoj. 1993. Tarrying with the Negative: Kant, Hegel and the Critique of Ideology. Durham: Duke University Press.
    • Žižek, Slavoj. 2005. Iraq: The Borrowed Kettle. New York: Verso.

     

  • A Dark, Warped Reflection

    A Dark, Warped Reflection

a review of Charlie Brooker, writer & producer, Black Mirror (BBC/Zeppotron, 2011- )
    by Zachary Loeb
    ~

Depending upon which sections of the newspaper one reads, it is easy to come away with two rather conflicting views of the future. If one begins the day by reading the headlines in the “International News” or “Environment” sections, it is easy to feel overwhelmed by a sense of anxiety and impending doom; if one instead reads the sections devoted to “Business” or “Technology,” it is easy to feel confident that there are brighter days ahead. We are promised that soon we shall live in wondrous “Smart” homes where all of our devices work together tirelessly to ensure our every need is met, drones deliver our every desire, and we enjoy ever more immersive entertainment experiences – all of this providing plenty of wondrous investment opportunities…unless, of course, another economic collapse or climate change should spoil these fantasies. Though the juxtaposition between newspaper sections can be jarring, an element of anxiety can generally be detected from one section to the next – even within the “technology” pages. After all, our devices may have filled our hours with apps and social networking sites, but this does not necessarily mean that they have left us more fulfilled. We have been supplied with all manner of answers, but this does not necessarily mean we had first asked any questions.

    [youtube https://www.youtube.com/watch?v=pimqGkBT6Ek&w=560&h=315]

If you could remember everything, would you want to? If a cartoon bear lampooned the pointlessness of elections, would you vote for the bear? Would you participate in psychological torture if the person being tortured was a criminal? What lengths would you go to if you could not move on from a loved one’s death? These are the types of questions posed by the British television program Black Mirror, wherein anxiety about the technologically riddled future, be it the far future or next week, is the core concern. The paranoid pessimism of this science-fiction anthology program is not a result of a fear of the other or of panic at the prospect of nuclear annihilation – it is instead shaped by nervousness at the way we have become strangers to ourselves. There are no alien invaders or occult phenomena, nor is there a suit-wearing narrator who makes sure that the viewers understand the moral of each story. Instead what Black Mirror presents is dread – it holds up a “black mirror” (think of any electronic device when its screen is powered off) to society and refuses to flinch at the reflection.

    Granted, this does not mean that those viewing the program will not flinch.

    [And Now A Brief Digression]

Before this analysis goes any further it seems worthwhile to pause and make a few things clear. Firstly, and perhaps most importantly, the intention here is not to pass a definitive judgment on the quality of Black Mirror. While there are certainly arguments to be made about how “this episode was better than that one,” that is not the concern here. Nor, for that matter, is the goal to scoff derisively at Black Mirror and simply dismiss it – the episodes are well written, interestingly directed, and strongly acted. Indeed, that the program can lead to discussion and introspection is perhaps the highest praise that one can bestow upon a piece of widely disseminated popular culture. Secondly, and perhaps even more importantly (depending on your opinion), some of the episodes of Black Mirror rely upon twists and surprises in order to have their full impact upon the viewer. Oftentimes people find it highly frustrating to have these moments revealed to them ahead of time, and thus – in the name of fairness – let this serve as an official “spoiler warning.” The plots of each episode will not be discussed in minute detail in what follows – as the intent here is to consider broader themes and problems – but if you hate “spoilers” you should consider yourself warned.

    [Digression Ends]

The problem posed by Black Mirror is that in building nervous narratives about the technological tomorrow the program winds up replicating many of the shortcomings of contemporary discussions around technology – shortcomings that make such an unpleasant future seem all the more plausible. While Black Mirror may resist the obvious morality plays of a show like The Twilight Zone, the morals of its episodes may be far less oppositional than they at first seem. The program draws much of its emotional heft by narrowly focusing its stories upon specific individuals, but in so doing the show may function as a sort of precognitive “usage manual,” one that advises “if a day should arrive when you can technologically remember everything…don’t be like the guy in this episode.” The episodes of Black Mirror may call upon viewers to look askance at the futures they portray, but they also encourage the sort of droll, inured acceptance that is characteristic of the people in each episode of the program. Black Mirror is a sleek, hip piece of entertainment, another installment in the contemporary “golden age of television,” and it risks becoming just another program that can be streamed onto any of a person’s black-mirror-like screens. The program is itself very much a part of the same culture industry of the YouTube and Twitter era that the show seems to vilify – it is ready-made for “binge watching.” The program may be disturbing, but its indictments are soft – allowing viewers a distance that permits them to say aloud “I would never do that” even as they are subconsciously unsure.

    Thus, Black Mirror appears as a sort of tragic confirmation of the continuing validity of Jacques Ellul’s comment:

    “One cannot but marvel at an organization which provides the antidote as it distills the poison.” (Ellul, 378)

For the tales that are spun out in horrifying (or at least discomforting) detail on Black Mirror may appear to be a salve for contemporary society’s technological trajectory – but the show is also a ready-made product for the very age that it is critiquing. A salve that does not solve anything, a cultural shock absorber that allows viewers to endure the next wave of shocks. It is a program that demands viewers break away from their attachment to their black mirrors even as it encourages them to watch another episode of Black Mirror. This is not to claim that the show lacks value as a critique; it is, however, less a radical indictment than some may be tempted to give it credit for being. The discomfort people experience while watching the show easily becomes a masochistic penance that allows them to continue walking down the path to the futures outlined in the show. Black Mirror provides the antidote, but it also distills the poison.

    That, however, may be the point.

    [Interrogation 1: Who Bears Responsibility?]

Technology is, of course, everywhere in Black Mirror – in many episodes it is as much a character as the humans who are trying to come to terms with what the particular device means. In some episodes (“The National Anthem” or “The Waldo Moment”) the technologies that feature prominently are those that would be quite familiar to contemporary viewers: social media platforms like YouTube, Twitter, Facebook and the like. In other episodes (“The Entire History of You,” “White Bear” and “Be Right Back”) the technologies on display are new and different: an implantable device that records (and can play back) all of one’s memories, something that can induce temporary amnesia, a company that has developed a being that is an impressive mix of robotics and cloning. The stories that are told in Black Mirror, as was mentioned earlier, focus largely on the tales of individuals – “Be Right Back” is primarily about one person’s grief – and though this is a powerful story-telling device (and lest there be any confusion – many of these are very powerfully told stories) one of the questions that lingers unanswered in the background of many of these episodes is: who is behind these technologies?

In fairness, Black Mirror would likely lose some of its impact if it were to delve deeply into this question. If “The Entire History of You” provided a sci-fi faux-documentary foray into the company that had produced the memory-recording “grains,” it would probably not have felt as disturbing as the tale of abuse, sex, violence and obsession that the episode actually presents. Similarly, the piece of science-fiction-grade technology upon which “White Bear” relies functions well in the episode precisely because the key device makes only a rather brief appearance. And yet here an interesting contrast emerges between the episodes set in, or closely around, the present and those that are set further down the timeline – for in the episodes that rely on platforms like YouTube, the viewer technically knows who the interests are behind the various platforms. The episode “The Entire History of You” may be intensely disturbing, but what company was it that developed the “grains” and brought them to market? What biotechnology firm supplies the grieving spouse in “Be Right Back” with the robotic clone of her deceased husband? Who gathers the information from these devices? Where does that information live? Who is profiting? These are important questions that go unanswered, largely because they go unasked.

Of course, it can be simple to disregard these questions. Dwelling upon them takes something away from the individual episodes, and such focus diminishes the entertainment quality of Black Mirror. This is fundamentally why it is so essential to insist that these critical questions be asked. The worlds depicted in episodes of Black Mirror did not “just happen” but are instead the result of layers upon layers of decisions and choices that have wound up shaping these characters’ lives – and it is questionable how much say any of these characters had in those decisions. This is shown in stark relief in “The National Anthem,” in which a befuddled prime minister cannot come to grips with the way that a threat uploaded to YouTube, along with shifts in public opinion as reflected on Twitter, has come to require him to commit a grotesque act; his despair at what he is being compelled to do is a reflection of the new world of politics created by social media. In some ways it is tempting to treat episodes like “The Entire History of You” and “Be Right Back” as retorts to an unflagging adoration for “innovation,” “disruption,” and “permissionless innovation” – for the episodes can be read as a warning that just because we can record and remember everything does not necessarily mean that we should. And yet the presence of such a cultural warning does not mean that such devices will not eventually be brought to market. The denizens of the worlds of Black Mirror are depicted as being at the mercy of the technological current.

Thus, and here is where the problem truly emerges, the episodes can be treated as simple warnings that state “well, don’t be like this person.” After all, the world of “The Entire History of You” seems to be filled with people who – unlike the obsessive main character – can use the “grain” productively; on a similar note it is easy to imagine many people pointing to “Be Right Back” and saying that the idea of a robotic clone could be wonderful – just don’t use it to replicate the recently dead; and of course any criticism of social media in “The Waldo Moment” or “The National Anthem” can be met with a retort about a blossoming of free expression and the ways in which such platforms can help bolster new protest movements. And yet, much like the sad protagonist of the film Her, the characters in the storylines of Black Mirror rarely appear as active agents in relation to technology, even when they are depicted as truly “choosing” a given device. Rather they have simply been reduced to consumers – whether of social media, of political campaigns, or of an amusement park where the “show” is a person being psychologically tortured day after day.

This is not to claim that there should be an Apple or Google logo prominently displayed on the “grain” or on the side of the stationary bikes in “Fifteen Million Merits,” nor is it to argue that the people behind these devices should be depicted as cackling corporate monsters – but it would be helpful to have at least some image of the people behind these devices. After all, there are people behind these devices. What were they thinking? Were they not aware of these potential risks? Did they not care? Who bears responsibility? In focusing on small-scale human stories Black Mirror ignores the fact that there is another all too human story behind all of these technologies. Thus what the program risks replicating is a sort of technological determinism that seems to have nestled itself into the way that people talk about technology these days – a sentiment in which people have no choice but to accept (and buy) what technology firms are selling them. It is not so much, to borrow a line from Star Trek, that “resistance is futile” as that nobody seems to have even considered resistance to be an option in the first place. Granted, we have seen in the not too distant past that such a sentiment is simply not true – Google Glass was once presented as inevitable, but public push-back helped lead to Google (at least temporarily) shelving the device. Alas, one of the most effective ways of convincing people that they are powerless to resist is by bludgeoning them with cultural products that tell them they are powerless to resist. Or better yet, convincing them that they will actually like being “assimilated.”

Therefore, the key thing to mull over after watching an episode of Black Mirror is not what is presented in the episode but what has been left out. Viewers need to ask the questions the show does not present: who is behind these technologies? What decisions have led to the societal acceptance of these technologies? Did anybody offer resistance to them? The “6 Questions to Ask of New Technology” posed by media theorist Neil Postman may be of use for these purposes, as might some of the questions posed in Riddled With Questions. The emphasis here is to point out that a danger of Black Mirror is that the viewer winds up being just like one of the characters: a person who simply accepts the technologically wrought world in which they are living without questioning those responsible and without thinking that opposition is possible.

    [Interrogation 2: Utopia Unhinged is not a Dystopia]

“Dystopia” is a term that has become a fairly prominent feature in popular entertainment today. Bookshelves are filled with tales of doomed futures, and many of these titles (particularly those aimed at the “young adult” audience) have a tendency to eventually reach cinema screens. Of course, apocalyptic visions of the future are not limited to the big screen – as numerous television programs attest. For many, it is tempting to use terms such as “dystopia” when discussing the futures portrayed in Black Mirror, and yet the usage of such a term seems rather misleading. True, at least one episode (“Fifteen Million Merits”) is clearly meant to evoke a dystopian far future, but to use that term in relation to many of the other installments seems a bit hyperbolic. After all, “The Waldo Moment” could be set tomorrow, and frankly “The National Anthem” could have been set yesterday. To say that Black Mirror is a dystopian show risks taking an overly simplistic stance towards technology in the present as well as towards technology in the future – if the claim is that the show is thoroughly dystopian, then how does one account for the episodes that may as well be set in the present? One can argue that the state of the present world is far less than ideal, one can cast a withering gaze in the direction of social media, one can truly believe that the current trajectory (if not altered) will lead in a negative direction…and yet one can believe all of these things and still resist the urge to label contemporary society a dystopia. Doomsaying can be an enjoyably nihilistic way to pass an afternoon, but it makes for a rather poor critique.

It may be that what Black Mirror shows is how a dystopia can actually be a private hell instead of a societal one (which would certainly seem true of “White Bear” or “The Entire History of You”), or perhaps what Black Mirror indicates is that a derailed utopia is not automatically a dystopia. Granted, a major criticism of Black Mirror could emphasize that the show has a decidedly “industrialized world/Western world” focus – we do not see the factories where “grains” are manufactured, and the varieties of new smart phones seen in the program suggest that the e-waste must be piling up somewhere. In other words – the derailed utopia of some could still be an outright dystopia for countless others. That the characters in Black Mirror do not seem particularly concerned with who assembled their devices is, alas, a feature all too characteristic of technology users today. Nevertheless, to restate the problem, the issue is not so much the threat of dystopia as it is the continued failure of humanity to use its impressive technological ingenuity to bring about a utopia (or even something “better” than the present). In some ways this provides an echo of Lewis Mumford’s comment, in The Story of Utopias, that:

    “it would be so easy, this business of making over the world if it were only a matter of creating machinery.” (Mumford, 175)

    True, the worlds of Black Mirror, including the ones depicting the world of today, show that “creating machinery” actually is an easy way “of making over the world” – however this does not automatically push things in the utopian direction for which Mumford was pining. Instead what is on display is another installment of the deferred potential of technology.

The term “another” is not used incidentally here, but is specifically meant to point to the fact that it is nothing new for people to see technology as a source for hope…and then to woefully recognize the way in which such hopes have been dashed time and again. Such a sentiment is visible in much of Walter Benjamin’s writing about technology – writing, as he was, after the mechanized destruction of WWI and on the eve of the technologically enhanced barbarity of WWII. In Benjamin’s essay “Eduard Fuchs, Collector and Historian” he criticizes a strain in positivist/social democratic thinking that had emphasized that technological developments would automatically usher in a more just world, when in fact such attitudes woefully failed to appreciate the scale of the dangers. This leads Benjamin to note:

    “A prognosis was due, but failed to materialize. That failure sealed a process characteristic of the past century: the bungled reception of technology. The process has consisted of a series of energetic, constantly renewed efforts, all attempting to overcome the fact that technology serves this society only by producing commodities.” (Benjamin, 266)

The century about which Benjamin was writing was not the twenty-first century, and yet these comments about “the bungled reception of technology” and technology which “serves this society only by producing commodities” seem a rather accurate description of the worlds depicted by Black Mirror. And yes, that certainly includes the episodes that are closer to our own day. The point of pulling out this tension, however, is to emphasize not the dystopian element of Black Mirror but to point to the “bungled reception” that is so clearly on display in the program – and by extension in the present day.

What Black Mirror shows in episode after episode (even in the clearly dystopian one) is the gloomy juxtaposition between what humanity can possibly achieve and what it actually achieves. The tools that could widen democratic participation can be used to allow a cartoon bear to run as a stunt candidate, the devices that allow us to remember the past can ruin the present by keeping us constantly replaying our memories of yesterday, the things that allow us to connect can make it so that we are unable to ever let go – “energetic, constantly renewed efforts” that all wind up simply “producing commodities.” Indeed, in a tragicomic turn, Black Mirror demonstrates that amongst the commodities we continue to produce are those that elevate the “bungled reception of technology” to the level of a widely watched and critically lauded television serial.

The future depicted by Black Mirror may be startling, disheartening and quite depressing, but (except in the cases where the content is explicitly dystopian) it is worth bearing in mind that there is an important difference between dystopia and a world of people living amidst the continued “bungled reception of technology.” Are the people in “The National Anthem” paving the way for “White Bear” and in turn setting the stage for “Fifteen Million Merits”? It is quite possible. But this does not mean that the “reception of technology” must always be “bungled” – though changing our reception of it may require altering our attitude towards it. Here Black Mirror repeats its problematic thrust, for it does not highlight resistance but emphasizes the very attitudes that have “bungled” the reception and which continue to bungle it. Though “Fifteen Million Merits” does feature a character engaging in a brave act of rebellion, this act is immediately used to strengthen the very forces against which the character is rebelling – and thus the episode repeats the refrain “don’t bother resisting, it’s too late anyways.” This is not to suggest that one should focus all one’s hopes upon a far-fetched utopian notion, or put faith in a sense of “hope” that is not linked to reality, nor does it mean that one should don sackcloth and begin mourning. Dystopias are cheap these days, but so are the fake utopian dreams that promise a world in which somehow technology will solve all of our problems. And yet, it is worth bearing in mind another comment from Mumford regarding the possibility of utopia:

“we cannot ignore our utopias. They exist in the same way that north and south exist; if we are not familiar with their classical statements we at least know them as they spring to life each day in our minds. We can never reach the points of the compass; and so no doubt we shall never live in utopia; but without the magnetic needle we should not be able to travel intelligently at all.” (Mumford, 28-29)

    Black Mirror provides a stark portrait of the fake utopian lure that can lead us to the world to which we do not want to go – a world in which the “bungled reception of technology” continues to rule – but in staring horror struck at where we do not want to go we should not forget to ask where it is that we do want to go. The worlds of Black Mirror are steps in the wrong direction – so ask yourself: what would the steps in the right direction look like?

    [Final Interrogation – Permission to Panic]

    During “The Entire History of You” several characters enjoy a dinner party at which the topic of discussion eventually turns to the benefits and drawbacks of the memory-recording “grains.” Many attitudes towards the “grains” are voiced – ranging from individuals who cannot imagine doing without the “grain” to a woman who has had hers violently removed and who has managed to adjust. While “The Entire History of You” focuses on an obsessed individual who cannot cope with a world in which everything can be remembered, what the dinner party demonstrates is that the same world contains many people who can handle the “grains” just fine. The failed comedian who voices the cartoon bear in “The Waldo Moment” cannot understand why people are drawn to vote for the character he voices – but this does not stop many people from voting for the animated animal. Perhaps most disturbingly, the woman at the center of “White Bear” cannot understand why she is followed by crowds filming her on their smartphones while she is hunted by masked assailants – but this does not stop those filming her from playing an active role in her torture. And so on… and so on… Black Mirror shows that in these horrific worlds there are many people who are quite content with the new status quo. But that not everybody is despairing simply attests to Theodor Adorno and Max Horkheimer’s observation that:

    “A happy life in a world of horror is ignominiously refuted by the mere existence of that world. The latter therefore becomes the essence, the former negligible.” (Adorno and Horkheimer, 93)

    Black Mirror is a complex program, made all the more difficult to assess because the anthology character of the show makes each episode quite different in terms of the issues it dwells upon. The attitudes towards technology and society that are subtly suggested in the various episodes are in line with the despairing aura that surrounds the various protagonists and antagonists. Yet, insofar as Black Mirror advances an ethos, it is one of inured acceptance – it is a satire that is both tragedy and comedy. The first episode of the program, “The National Anthem,” is an indictment of a society that cannot tear itself away from the horrors depicted on its screens – delivered in a television show that owes its success to keeping people transfixed by the horrors depicted on their screens. The show holds up a “black mirror” to society, but what it shows is a world in which the game is rigged and the audience has already lost – it is a magnificently troubling cultural product that attests to the way the culture industry can (to return to Ellul) provide the antidote even as it distills the poison. Or, to quote Adorno and Horkheimer again (swapping the word “filmgoers” for “tv viewers”):

    “The permanently hopeless situations which grind down filmgoers in daily life are transformed by their reproduction, in some unknown way, into a promise that they may continue to exist. The one needs only to become aware of one’s nullity, to subscribe to one’s own defeat, and one is already a party to it. Society is made up of the desperate and thus falls prey to rackets.” (Adorno and Horkheimer, 123)

    This is the danger of Black Mirror: that it may accustom and inure its viewers to the ugly present it displays while preparing them to fall prey to the “bungled reception” of tomorrow – it inculcates the ethos of “one’s own defeat.” By showing worlds in which people are helpless to do much of anything to challenge the technological society in which they have become cogs, Black Mirror risks perpetuating the sense that the viewers are themselves cogs, that the viewers are themselves helpless. There is an uncomfortable kinship between the tv-viewing characters of “The National Anthem” and the real-world viewer of the episode “The National Anthem” – neither party can look away. Or, to put it more starkly: if you are unable to alter the future, why not simply prepare yourself for it by watching more episodes of Black Mirror? At least that way you will know which characters not to imitate.

    And yet, despite these critiques, it would be unwise to fully disregard the program. It is easy to pull out comments from the likes of Ellul, Adorno, Horkheimer and Mumford that eviscerate a program such as Black Mirror, but it may be more important to ask: given Black Mirror’s shortcomings, what value can the show still have? Here it is useful to recall a comment from Günther Anders (whose pessimism was on par with, or exceeded, that of any of the aforementioned thinkers) – he was referring to the works of Kafka, but the comment is still apt:

    “from great warnings we should be able to learn, and they should help us to teach others.” (Anders, 98)

    This is where Black Mirror can be useful, not as a series that people sit and watch, but as a piece of culture that leads people to put forth the questions that the show jumps over. At its best what Black Mirror provides is a space in which people can discuss their fears and anxieties about technology without worrying that somebody will, farcically, call them a “Luddite” for daring to have such concerns – and for this reason alone the show may be worthwhile. By highlighting the questions that go unanswered in Black Mirror we may be able to put forth the very queries that are rarely made about technology today. It is true that the reflections seen by staring into Black Mirror are dark, warped and unappealing – but such reflections are only worth something if they compel audiences to rethink their relationships to the black mirrored surfaces in their lives today and which may be in their lives tomorrow. After all, one can look into the mirror in order to see the dirt on one’s face or one can look in the mirror because of a narcissistic urge. The program certainly has the potential to provide a useful reflection, but as with the technology depicted in the show, it is all too easy for such a potential reception to be “bungled.”

    If we are spending too much time gazing at black mirrors, is the solution really to stare at Black Mirror?

    The show may be a satire, but if all people do is watch, then the joke is on the audience.

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck. He is a frequent contributor to The b2 Review Digital Studies section.

    _____

    Works Cited

    • Adorno, Theodor and Horkheimer, Max. Dialectic of Enlightenment: Philosophical Fragments. Stanford: Stanford University Press, 2002.
    • Anders, Günther. Franz Kafka. New York: Hilary House Publishers, Ltd., 1960.
    • Benjamin, Walter. Walter Benjamin: Selected Writings, Volume 3, 1935-1938. Cambridge: The Belknap Press, 2002.
    • Ellul, Jacques. The Technological Society. New York: Vintage Books, 1964.
    • Mumford, Lewis. The Story of Utopias. BiblioBazaar, 2008.
  • Is the Network a Brain?

    Is the Network a Brain?

    a review of Andrew Pickering, The Cybernetic Brain: Sketches of Another Future (University of Chicago Press, 2011)
    by Jonathan Goodwin
    ~

    Evgeny Morozov’s recent New Yorker article about Project Cybersyn in Allende’s Chile caused some controversy when critics accused Morozov of not fully acknowledging his sources. One of those sources was sociologist of science Andrew Pickering’s The Cybernetic Brain. Morozov is quoted as finding Pickering’s book “awful.” It’s unlikely that Morozov meant “awful” in the sense of “awe-inspiring,” but that was closer to my reaction after reading Pickering’s 500+ pp. work on the British tradition in cybernetics. In Pickering’s account, this tradition was less militarist and more artistic, among other qualities, than cybernetics is popularly understood to have been. I found myself greatly intrigued—if not awed—by the alternate future that his subtitle and final chapter announce. Cybernetics is now a largely forgotten dead-end in science, and the British tradition that Pickering describes had relatively little influence within cybernetics itself. So what is important about it now, and what is the nature of this other future that Pickering sketches?

    The major figures of this book, which proceeds with overviews of their careers, views, and accomplishments, are Grey Walter, Ross Ashby, Gregory Bateson, R. D. Laing, Stafford Beer, and Gordon Pask. Stuart Kauffman’s and Stephen Wolfram’s work on complexity theory also makes an appearance.[1] Laing and Bateson’s relevance may not be immediately clear. Pickering’s interest in them derives from their extension of cybernetic ideas to the emerging technologies of the self in the 1960s. Both Bateson and Laing approached schizophrenia as an adaptation to the increasing “double-binds” of Western culture, and both looked to Eastern spiritual traditions and chemical methods of consciousness-alteration as potential treatments. The Bateson and Laing material makes the most direct reference to the connection between the cybernetic tradition and the “Californian Ideology” that animates much Silicon Valley thinking. Stewart Brand was influenced by Bateson’s Steps to an Ecology of Mind (183), for example. Pickering identifies Northern California as the site where cybernetics migrated into the counterculture. It is arguable that this countercultural migration of a technology of control has become part of the ruling ideology of the present moment. Pickering recognizes this but seems to concede that the inherent topicality would detract from the focus of his work. It is a facet that would be of interest to the readers of this “Digital Studies” section of The b2 Review, however, and I will thus return to it at the end of this review.

    Pickering’s path to Bateson and Laing originates with Grey Walter’s and Ross Ashby’s pursuit of cybernetic models of the brain. Computational models of the brain, though originally informed by cybernetic research, quickly replaced it in Pickering’s account (62). He asks why computational models of the brain quickly gathered so much cultural interest. Rodney Brooks’s robots, with their more embodied approach, Pickering argues, are in the tradition of Walter’s tortoises and outside the symbolic tradition of artificial intelligence. I find it noteworthy that the neurological underpinnings of early cybernetics were so strongly influenced by behaviorism. Computationalist approaches, associated by Pickering with the establishment or “royal” science, here, were intellectually formed by an attack on behaviorism. Pickering even addresses this point obliquely, when he wonders why literary scholars had not noticed that the octopus in Gravity’s Rainbow was apparently named “Grigori” in homage to Gregory Bateson (439n13).[2] I think one reason this hasn’t been noticed is that it’s much more likely that the name was random but for its Slavic form, which is clearly in the same pattern of references to Russian behaviorist psychology that informs Pynchon’s novel. An offshoot of behaviorism inspiring a countercultural movement devoted to freedom and experimentation seems peculiar.

    One of Pickering’s key insights into this alternate tradition of cybernetics is that its science is performative. Rather than being as theory-laden as the strictly computationalist approaches, cybernetic science often studied complex systems as assemblages whose interactions generated novel insights. Contrast this epistemology with what critics point to as the frequent invocation of the Duhem-Quine thesis by Noam Chomsky.[3] For Pickering, Ross Ashby’s version of cybernetics was a “supremely general and protean science” (147). As it developed, the brain lost its central place and cybernetics became a “freestanding general science” (147). As I mentioned, the chapter on Ashby closes with a consideration of the complexity science of Stuart Kauffman and Stephen Wolfram. That Kauffman and Wolfram have largely worked outside mainstream academic institutions is important for Pickering.[4] Christopher Alexander’s pattern language in architecture is a third example. Pickering mentions that Alexander’s concept was influential in some areas of computer science; the notion of “object-oriented programming” is sometimes considered to have been influenced by Alexander’s ideas.

    I mention this connection because many of the alternate traditions in cybernetics have become mainstream influences in contemporary digital culture. It is difficult to imagine Laing and Bateson’s alternative therapeutic ideas having any resonance in that culture, however. The doctrine that “selves are endlessly complex and endlessly explorable” (211) is sometimes proposed as something the internet facilitates, but the inevitable result of anonymity and pseudonymity in internet discourse is the enframing of hierarchical relations. I realize this point may sound controversial to those with a more benign or optimistic view of digital culture. That this countercultural strand of cybernetic practice has clear parallels with much digital libertarian rhetoric is hard to dispute. Again, Pickering is not concerned in the book with tracing these contemporary parallels. I mention them because of my own interest and this venue’s presumed interest in the subject.

    The progression that begins with some variety of conventional rationalism, extends through a career in cybernetics, and ends in some variety of mysticism is seen in almost all of the figures that Pickering profiles in The Cybernetic Brain. Perhaps the clearest example—and the most fascinating in general—is that of Stafford Beer. Philip Mirowski’s review of Pickering’s book refers to Beer as “a slightly wackier Herbert Simon.” Pickering enjoys recounting the adventures of the wizard of Prang, a work that Beer composed after he had moved to a remote Welsh village and renounced many of the world’s pleasures. Beer’s involvement in Project Cybersyn makes him perhaps the best-known of the figures profiled in this book.[5] What perhaps fascinates Pickering more than anything else in Beer’s work is the concept of viability. From early in his career, Beer advocated for upwardly viable management strategies. The firm, in his model, would not need a brain: “it would react to changing circumstances; it would grow and evolve like an organism or species, all without any human intervention at all” (225). Mirowski’s review compares Beer to Friedrich Hayek and accuses Pickering of refusing to engage with this seemingly obvious intellectual affinity.[6] Beer’s intuitions in this area led him to experiment with biological and ecological computing; Pickering surmises that Douglas Adams’s superintelligent mice derived from Beer’s murine experiments in this area (241).

    In a review of a recent translation of Stanislaw Lem’s Summa Technologiae, Pickering calls the notion that natural adaptive systems are like brains and can be harnessed for intelligence amplification the most “amazing idea in the history of cybernetics” (247).[7] Despite its association with the dreaded “synergy” (the original “syn” of Project Cybersyn), Beer’s viable system model never became a management fad (256). Alexander Galloway has recently written here about the “reticular fallacy,” the notion that de-centralized forms of organization are necessarily less repressive than centralized or hierarchical forms. Beer’s viable system model proposes an emergent and non-hierarchical management system that would increase the general “eudemony” (general well-being, another of Beer’s not-quite-original neologisms [272]). Beer’s turn towards Tantric mysticism seems somehow inevitable in Pickering’s narrative of his career. The syntegric icosahedron, one of Beer’s late baroque flourishes, reminded me quite a bit of a Paul Laffoley painting. Syntegration as a concept takes reticularity to a level of mysticism rarely achieved by digital utopians. Pickering concludes the chapter on Beer with a discussion of his influence on Brian Eno’s ambient music.

    Paul Laffoley, “The Orgone Motor” (1981). Image source: paullaffoley.net.

    The discussion of Eno chides him for not reading Gordon Pask’s explicitly aesthetic cybernetics (308). Pask is the final cybernetician of Pickering’s study and perhaps the most eccentric. Pickering describes him as a model for Patrick Troughton’s Dr. Who (475n3), and his synaesthetic work in cybernetics, with projects like the Musicolor, is explicitly theatrical. A theatrical performance that directly incorporates audience feedback into the production—not just at the level of applause or hiss, but in audience interest in a particular character, a kind of choose-your-own-adventure theater—was planned with Joan Littlewood (348-49). Pask’s work in interface design has been identified as an influence on hypertext (464n17). A great deal of the chapter on Pask involves his influence on British countercultural arts and architecture movements in the 1960s. Mirowski’s review notes that even the anti-establishment Gordon Pask was funded by the Office of Naval Research for fifteen years (194). Mirowski also accuses Pickering of ignoring the computer as the emblematic cultural artifact of the cybernetic worldview (195). Pask is the strongest example offered of an alternate future of computation and social organization, but it is difficult to imagine his cybernetic present.

    The final chapter of Pickering’s book is entitled “Sketches of Another Future.” What is called “maker culture” combined with the “internet of things” might lead some prognosticators to imagine an increasingly cybernetic digital future. Cybernetic, that is, not in the sense of increasing what Mirowski refers to as the neoliberal “background noise of modern culture” but as a “challenge to the hegemony of modernity” (393). Before reading Pickering’s book, I would have regarded such a prediction with skepticism. I still do, but Pickering has argued that an alternate—and more optimistic—perspective is worth taking seriously.

    _____

    Jonathan Goodwin is Associate Professor of English at the University of Louisiana, Lafayette. He is working on a book about cultural representations of statistics and probability in the twentieth century.


    _____

    [1] Wolfram was born in England, though he has lived in the United States since the 1970s. Pickering taught at the University of Illinois while this book was being written, and he mentions having several interviews with Wolfram, whose company Wolfram Research is based in Champaign, Illinois (457n73). Pickering’s discussion of Wolfram’s A New Kind of Science is largely neutral; for a more skeptical view, see Cosma Shalizi’s review.

    [2] Bateson experimented with octopuses, as Pickering describes. Whether Pynchon knew about this, however, remains doubtful. Pickering’s note may also be somewhat facetious.

    [3] See the interview with George Lakoff in Ideology and Linguistic Theory: Noam Chomsky and the Deep Structure Debates, ed. Geoffrey J. Huck and John A. Goldsmith (New York: Routledge, 1995), p. 115. Lakoff’s account of Chomsky’s philosophical justification for his linguistic theories is tendentious; I mention it here because of the strong contrast, even in caricature, with the performative quality of the cybernetic research Pickering describes.

    [4] Though it is difficult to think of the Santa Fe Institute this way now.

    [5] For a detailed cultural history of Project Cybersyn, see Eden Medina, Cybernetic Revolutionaries: Technology and Politics in Allende’s Chile (MIT Press, 2011). Medina notes that Beer formed the word “algedonic” from two words meaning “pain” and “pleasure,” but the OED notes an example in the same sense from 1894. This citation does not rule out independent coinage, of course. Curiously enough, John Fowles uses the term in The Magus (1966), where it could have easily been derived from Beer.

    [6] Hayek’s name appears neither in the index nor the reference list. It does seem a curious omission in the broader intellectual context of cybernetics.

    [7] Though there is a reference to Lem’s fiction in an endnote (427n25), Summa Technologiae, a visionary exploration of cybernetic philosophy dating from the early 1960s, does not appear in Pickering’s work. A complete English translation only recently appeared, and I know of no evidence that Pickering’s principal figures were influenced by Lem at all. The book, as Pickering’s review acknowledges, is astonishingly prescient and highly recommended for anyone interested in the culture of cybernetics.

  • What Drives Automation?

    What Drives Automation?

    a review of Nicholas Carr, The Glass Cage: Automation and Us (W.W. Norton, 2014)
    by Mike Bulajewski
    ~

    Debates about digital technology are often presented in terms of stark polar opposites: on one side, cyber-utopians who champion the new and the cutting edge, and on the other, cyber-skeptics who hold on to obsolete technology. The framing is one-dimensional in the general sense that it is superficial, but also in a more precise and mathematical sense that it implicitly treats the development of technology as linear. Relative to the present, there are only two possible positions and two possible directions to move; one can be either for or against, ahead or behind.[1]

    Although this framing is often invoked as a prelude to transcending the division and offering a balanced assessment, to describe the dispute in these pro-or-con terms is already to betray one’s orientation, tilting the field against the critical voice by assigning it an untenable position. Criticizing a new technology is misconstrued as a simple defense of the old technology or of no technology, which turns legitimate criticism into mere conservative fustiness, a refusal to adapt and a failure to accept change.

    Few critics of technology match these descriptions, and those who do, like the anarcho-primitivists who claim to be horrified by contemporary technology, nonetheless accede to the basic framework set by technological apologists. The two sides disagree only on the preferred direction of travel, making this brand of criticism more pro-technology than it first appears. One should not forget that the high-tech futurism of Silicon Valley is supplemented by the balancing counterweight of countercultural primitivism, with Burning Man expeditions, technology-free Waldorf schools for children of tech workers, spouses who embrace premodern New Age beliefs, romantic agrarianism, and restorative digital detox retreats featuring yoga and meditation. The diametric opposition between pro- and anti-technology is internal to the technology industry, perhaps a symptom of the repression of genuine debate about the merits of its products.

    ***

    Nicholas Carr’s most recent book, The Glass Cage: Automation and Us, is a critique of the use of automation and a warning of its human consequences, but to conclude, as some reviewers have, that he is against automation or against technology as such is to fall prey to this one-dimensional fallacy.[2]

    The book considers the use of automation in areas like medicine, architecture, finance, manufacturing and law, but it begins with an example that’s closer to home for most of us: driving a car. Transportation and wayfinding are minor themes throughout the book, and with Google and large automobile manufacturers promising to put self-driving cars on the street within a decade, the impact of automation in this area may soon be felt in our daily lives like never before. Early in the book, we are introduced to problems that human factors engineers have discovered in airline autopilot systems, problems that may forewarn of a future of unchecked automation in transportation.

    Carr discusses automation bias—the tendency for operators to assume the system is correct and external signals that contradict it are wrong—and the closely related problem of automation complacency, which occurs when operators assume the system is infallible and so abandon their supervisory role. These problems have been linked to major air disasters and are behind less-catastrophic events like oblivious drivers blindly following their navigation systems into nearby lakes or down flights of stairs.

    The chapter dedicated to deskilling is certain to raise the ire of skeptical readers because it begins with an account of the negative impact of GPS technology on Inuit hunters who live in the remote northern reaches of Canada. As GPS devices proliferated, hunters lost what a tribal elder describes as “the wisdom and knowledge of the Inuit”: premodern wayfinding methods that rely on natural phenomena like wind, stars, tides and snowdrifts to navigate. Inuit wayfinding skills are truly impressive. The anthropologist Claudio Aporta reports traveling with a hunter across twenty square kilometers of flat, featureless land as he located seven fox traps that he had never seen before, set by his uncle twenty-five years prior. These talents have been eroded as Inuit hunters have adopted GPS devices that seem to do the job equally well, but have the unexpected side effect of increasing injuries and deaths as hunters succumb to equipment malfunctions and the twin perils of automation complacency and bias.

    Laboring under the misconceptions of the one-dimensional fallacy, it would be natural to take this as a smoking gun of Carr’s alleged anti-technology perspective and privileging of the premodern, but the closing sentences of the chapter point us away from that conclusion:

    We ignore the ways that software programs and automated systems might be reconfigured so as not to weaken our grasp on the world but to strengthen it. For, as human factors researchers and other experts on automation have found, there are ways to break the glass cage without losing the many benefits computers grant us. (151)

    These words segue into the following chapter, where Carr identifies the dominant design philosophy that leads automation technologies to inadvertently produce the problems he identified earlier: technology-centered automation. This approach to design is distrustful of humans, perhaps even misanthropic. It views us as weak, inefficient, unreliable and error-prone, and seeks to minimize our involvement in the work to be done. It institutes a division of labor between human and machine that gives the bulk of the work over to the machine, only seeking human input in anomalous situations. This philosophy is behind modern autopilot systems that hand control to human pilots for only a few minutes in a flight.

    The fundamental argument of the book is that this design philosophy can lead to undesirable consequences. Carr seeks an alternative he calls human-centered automation, an approach that ensures the human operator remains engaged and alert. Autopilot systems designed with this philosophy might return manual control to the pilots at irregular intervals to ensure they remain vigilant and practice their flying skills. They could provide tactile feedback of their operations so that pilots are involved in a visceral way rather than passively monitoring screens. Decision support systems like those used in healthcare could take on the secondary role of reviewing and critiquing a decision made by a doctor rather than the other way around.

    The Glass Cage calls for a fundamental shift in how we understand error. Under the current regime, an error is an inefficiency or an inconvenience, to be avoided at all costs. As defined by Carr, a human-centered approach to design treats error differently, viewing it as an opportunity for learning. He illustrates this with a personal experience of repeatedly failing a difficult mission in the video game Red Dead Redemption, and points to the satisfaction of finally winning a difficult game as an example of what is lost when technology is designed to be too easy. He offers video games as a model for the kinds of technologies he would like to see: tools that engage us in difficult challenges, that encourage us to develop expertise and to experience flow states.

    But Carr has an idiosyncratic definition of human-centered design, which becomes apparent when he counterposes his position against that of the prominent design consultant Peter Merholz. Echoing premises almost universally adopted by designers, Merholz calls for simple, frictionless interfaces and devices that don’t require a great deal of skill, memorization or effort to operate. Carr objects that this eliminates learning, skill building and mental engagement—perhaps a valid criticism, but it’s strange to suggest that Merholz’s position reflects a misanthropic technology-centered approach.

    A frequently invoked maxim of human-centered design is that technology should adapt to people, rather than people adapting to technology. In practice, the primary consideration is helping the user achieve his or her goal as efficiently and effectively as possible, removing unnecessary obstacles and delays that stand in the way. Carr argues for the value of challenges, difficulties and demands placed on users to learn and hone skills, all of which fall under the prohibited category of people adapting to technology.

    In his example of playing Red Dead Redemption, Carr prizes the repeated failure and frustration before finally succeeding at the game. Through the lens of human-centered design, that kind of experience is seen as a very serious problem that should be eliminated quickly, which is probably why this kind of design is rarely employed at game studios. In fact, it doesn’t really make sense to think of a game player as having a goal, at least not from the traditional human-centered standpoint. The driver of a car has a goal: to get from point A to point B; a Facebook user wants to share pictures with friends; the user of a word processor wants to write a document; and so on. As designers, we want to make these tasks easy, efficient and frictionless. The most obvious way of framing game play is to say that the player’s goal is to complete the game. We would then proceed to remove all obstacles, frustrations, challenges and opportunities for error that stand in the way so that they may accomplish this goal more efficiently, and then there would be nothing left for them to do. We would have ruined the game.

    This is not necessarily the result of a misanthropic preference for technology over humanity, though it may be. It is also the likely outcome of a perfectly sincere and humanistic belief that we shouldn’t inconvenience the user with difficulties that stand in the way of their goal. As human factors researcher David Woods puts it, “The road to technology-centered systems is paved with human-centered intentions,”[3] a phrasing which suggests that these two philosophies aren’t quite as distinct as Carr would have us believe.

    Carr’s vision of human-centered design differs markedly from contemporary design practice, which stresses convenience, simplicity, efficiency for the user and ease of use. In calling for less simplicity and convenience, he is in effect critical of really existing human-centeredness, and that troubles any reading of The Glass Cage that views it as a book about restoring our humanity in a world driven mad by machines.

    It might be better described as a book about restoring one conception of humanity in a world driven mad by another. It is possible to argue that the difference between the two appears in psychoanalytic theory as the difference between drive and desire. The user engages with a technology in order to achieve a goal because they perceive themselves as lacking something. Through the use of this tool, they believe they can regain it and fill in this lack. It follows that designers ought to help the user achieve their goal—to reach their desire—as quickly and efficiently as possible because this will satisfy them and make them happy.

But the insight of psychoanalysis is that lack is ontological and irreducible: it cannot be filled in any permanent way, because any concrete lack we experience is in fact metonymic for a constitutive lack of being. As a result, as desiring subjects we are caught in an endless loop of seeking out that object of desire, feeling disappointed when we find it because it didn’t fulfill our fantasies, and then finding a new object to chase. The alternative is to shift from desire to drive, turning this failure into a triumph. Slavoj Žižek describes drive as follows: “the very failure to reach its goal, the repetition of this failure, the endless circulation around the object, generates a satisfaction of its own.”[4]

This satisfaction is perhaps what Carr aims at when he celebrates the frustrations and challenges of video games and of work in general. That video games can’t be made more efficient without ruining them indicates that what players really want is for their goal to be thwarted, evoking the psychoanalytic maxim that summarizes the difference between desire and drive: from the missing/lost object, to loss itself as an object. This point is by no means tangential. Early on, Carr introduces the concept of miswanting, defined as the tendency to desire what we don’t really want and what won’t make us happy—in this case, leisure and ease over work and challenge. Psychoanalysis holds that all human wanting (within the register of desire) is miswanting. Through fantasy, we imagine an illusory fullness or completeness of which actual experience always falls short.[5]

Carr’s revaluation of challenge, effort and, ultimately, dissatisfaction cannot represent a correction of the error of miswanting, a rediscovery of the true source of pleasure and happiness in work. Instead, it radicalizes the error: we should learn to derive a kind of satisfaction from our failure to enjoy. Or, in the final chapter, as Carr says of the farmer in Robert Frost’s poem “Mowing,” who is hard at work and yet far from the demands of productivity: “He’s not seeking some greater truth beyond the work. The work is the truth.”

    ***

Nicholas Carr has a track record of provoking designers to rethink their assumptions. The Shallows, along with related arguments by other authors, influenced software developers to create a new class of tools that cut off the internet, eliminate notifications or block social media web sites to help us concentrate. Starting with OS X Lion in 2011, Apple began offering a full screen mode that hides distracting interface elements and background windows from inactive applications.

    What transformative effects could The Glass Cage have on the way software is designed? The book certainly offers compelling reasons to question whether ease of use should always be paramount. Advocates for simplicity are rarely challenged, but they may now find themselves facing unexpected objections. Software could become more challenging and difficult to use—not in the sense of a recalcitrant WiFi router that emits incomprehensible error codes, but more like a game. Designers might draw inspiration from video games, perhaps looking to classics like the first level of Super Mario Brothers, a masterpiece of level design that teaches the fundamental rules of the game without ever requiring the player to read the manual or step through a tutorial.

Everywhere that automation now reigns, new possibilities announce themselves. A spell checker might stop to teach spelling rules, or make a game of letting the user take a shot at correcting the mistakes it has detected. What if there were a GPS navigation device that enhanced our sense of spatial awareness rather than eroding it, one that directed our attention to the road rather than letting us tune out? Could we build an app that helps drivers maintain their skills by challenging them to adopt safer and more fuel-efficient driving techniques?

    Carr points out that the preference for easy-to-use technologies that reduce users’ engagement is partly a consequence of economic interests and cost reduction policies that profit from the deskilling and reduction of the workforce, and these aren’t dislodged simply by pressing for new design philosophies. But to his credit, Carr has written two best-selling books aimed at the general interest reader on the fairly obscure topic of human-computer interaction. User experience designers working in the technology industry often face an uphill battle in trying to build human-centered products (however that is defined). When these matters attract public attention and debate, it makes their job a little easier.

    _____

    Mike Bulajewski (@mrteacup) is a user experience designer with a Master’s degree from University of Washington’s Human Centered Design and Engineering program. He writes about technology, psychoanalysis, philosophy, design, ideology & Slavoj Žižek at MrTeacup.org. He has previously written about the Spike Jonze film Her for The b2 Review Digital Studies section.

    Back to the essay

    _____

    [1] Differences between individual technologies are ignored and replaced by the monolithic master category of Technology. Jonah Lehrer’s review of Nicholas Carr’s 2010 book The Shallows in the New York Times exemplifies such thinking. Lehrer finds contradictory evidence against Carr’s argument that the internet is weakening our mental faculties in scientific studies that attribute cognitive improvements to playing video games, a non sequitur which gains meaning only by subsuming these two very different technologies under a single general category of Technology. Evgeny Morozov is one of the sharpest critics of this tendency. Here one is reminded of his retort in his article “Ghosts in the Machine” (2013): “That dentistry has been beneficial to civilization tells us very little about the wonders of data-mining.”

    [2] There are a range of possible causes for this constrictive linear geometry: a tendency to see a progressive narrative of history; a consumerist notion of agency which only allows shoppers to either upgrade or stick with what they have; or the oft-cited binary logic of digital technology. One may speculate about the influence of the popular technology marketing book by Geoffrey A. Moore, Crossing the Chasm (2014) whose titular chasm is the gap between the elite group of innovators and early adopters—the avant-garde—and the recalcitrant masses bringing up the rear who must be persuaded to sign on to their vision.

    [3] David D. Woods and David Tinapple (1999). “W3: Watching Human Factors Watch People at Work.” Proceedings of the 43rd Annual Meeting of the Human Factors and Ergonomics Society (1999).

    [4] Slavoj Žižek, The Parallax View (2006), 63.

    [5] The cultural and political implications of this shift are explored at length in Todd McGowan’s two books The End of Dissatisfaction: Jacques Lacan and the Emerging Society of Enjoyment (2003) and Enjoying What We Don’t Have: The Political Project of Psychoanalysis (2013).

  • Who Big Data Thinks We Are (When It Thinks We're Not Looking)

    Who Big Data Thinks We Are (When It Thinks We're Not Looking)

a review of Christian Rudder, Dataclysm: Who We Are (When We Think No One’s Looking) (Crown, 2014)
    by Cathy O’Neil
    ~
Here’s what I’ve spent the last couple of days doing: alternately reading Christian Rudder’s new book Dataclysm and proofreading a report by AAPOR which discusses the benefits, dangers, and ethics of using big data, which is mostly “found” data originally meant for some other purpose, as a replacement for public surveys, with their carefully constructed data collection processes and informed consent. The AAPOR folk have asked me to provide tangible examples of the dangers of using big data to infer things about public opinion, and I am tempted to simply ask them all to read Dataclysm as exhibit A.

    Rudder is a co-founder of OKCupid, an online dating site. His book mainly pertains to how people search for love and sex online, and how they represent themselves in their profiles.

Here’s something I will mention as context for his data explorations: Rudder likes to crudely provoke, as he displayed when he wrote this recent post explaining how OKCupid experiments on users. He enjoys playing the part of the somewhat creepy detective, peering into what OKCupid users thought was a somewhat private place to prepare themselves for the dating world. It’s the online equivalent of a video camera in a changing booth at a department store, which he defended not-so-subtly on a recent NPR show called On The Media, and which was written up here.

    I won’t dwell on that aspect of the story because I think it’s a good and timely conversation, and I’m glad the public is finally waking up to what I’ve known for years is going on. I’m actually happy Rudder is so nonchalant about it because there’s no pretense.

    Even so, I’m less happy with his actual data work. Let me tell you why I say that with a few examples.

    Who Are OKCupid Users?

    I spent a lot of time with my students this summer saying that a standalone number wouldn’t be interesting, that you have to compare that number to some baseline that people can understand. So if I told you how many black kids have been stopped and frisked this year in NYC, I’d also need to tell you how many black kids live in NYC for you to get an idea of the scope of the issue. It’s a basic fact about data analysis and reporting.

When you’re dealing with populations on dating sites and you want to conclude things about the larger culture, the relevant “baseline comparison” is how well the members of the dating site represent the population as a whole. Rudder doesn’t do this. Instead, for the first few chapters he just says there are lots of OKCupid users; then later on, after he’s made a few spectacularly broad statements, he compares the users of OKCupid (on page 104) to internet users at large, but not to the general population.

It’s an inappropriate baseline, made too late. I’m not sure about you, but I don’t have a keen sense of the population of internet users. I’m pretty sure very young kids and old people are not well represented, but that’s about it. My students would have known to compare a population to the census. It needs to happen.
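The census comparison O’Neil asks for is a simple calculation. A minimal sketch, with all shares invented for illustration (these are not Rudder’s or OKCupid’s actual numbers):

```python
# Hypothetical check of how a site's user base compares to a census
# baseline. All shares below are invented for illustration.

site_share = {"18-29": 0.55, "30-49": 0.35, "50+": 0.10}    # share of site users
census_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}  # share of adults

# A representation ratio of 1.0 means the group appears on the site
# in proportion to its share of the general population.
representation = {g: site_share[g] / census_share[g] for g in site_share}

for group in site_share:
    print(f"{group}: {representation[group]:.2f}")
# With these invented shares, the young are heavily overrepresented,
# so conclusions about "people" drawn from the site mostly describe them.
```

The point is not the particular numbers but the habit: without the second dictionary, the first one tells you nothing about the larger culture.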

    How Do You Collect Your Data?

    Let me back up to the very beginning of the book, where Rudder startles us by showing us that the men that women rate “most attractive” are about their age whereas the women that men rate “most attractive” are consistently 20 years old, no matter how old the men are.

Actually, I am projecting. Rudder never actually tells us what the rating is, how exactly it’s worded, or how the profiles are presented to the different groups. And that’s a problem, one he ignores completely until much later in the book, when he mentions that how survey questions are worded can have a profound effect on how people respond; but his target there is someone else’s survey, not his own OKCupid environment.

    Words matter, and they matter differently for men and women. So for example, if there were a button for “eye candy,” we might expect women to choose more young men. If my guess is correct, and the term in use is “most attractive”, then for men it might well trigger a sexual concept whereas for women it might trigger a different social construct; indeed I would assume it does.

Since this is a dating site, not a porn site, we are not filtering for purely visual appeal; we are looking for relationships. We are thinking beyond what turns us on physically and asking ourselves, who would we want to spend time with? Who would our family like us to be with? Who would make us attractive to ourselves? Those are different questions and provoke different answers. And they are culturally interesting questions, which Rudder never explores. A lost opportunity.

    Next, how does the recommendation engine work? I can well imagine that, once you’ve rated Profile A high, there is an algorithm that finds Profile B such that “people who liked Profile A also liked Profile B”. If so, then there’s yet another reason to worry that such results as Rudder described are produced in part as a result of the feedback loop engendered by the recommendation engine. But he doesn’t explain how his data is collected, how it is prompted, or the exact words that are used.

    Here’s a clue that Rudder is confused by his own facile interpretations: men and women both state that they are looking for relationships with people around their own age or slightly younger, and that they end up messaging people slightly younger than they are but not many many years younger. So forty year old men do not message twenty year old women.

    Is this sad sexual frustration? Is this, in Rudder’s words, the difference between what they claim they want and what they really want behind closed doors? Not at all. This is more likely the difference between how we live our fantasies and how we actually realistically see our future.

    Need to Control for Population

    Here’s another frustrating bit from the book: Rudder talks about how hard it is for older people to get a date but he doesn’t correct for population. And since he never tells us how many OKCupid users are older, nor does he compare his users to the census, I cannot infer this.

    Here’s a graph from Rudder’s book showing the age of men who respond to women’s profiles of various ages:

    dataclysm chart 1

    We’re meant to be impressed with Rudder’s line, “for every 100 men interested in that twenty year old, there are only 9 looking for someone thirty years older.” But here’s the thing, maybe there are 20 times as many 20-year-olds as there are 50-year-olds on the site? In which case, yay for the 50-year-old chicks? After all, those histograms look pretty healthy in shape, and they might be differently sized because the population size itself is drastically different for different ages.
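The arithmetic behind this objection is worth making explicit. A toy calculation with invented counts (not Rudder’s data) shows how a lopsided raw ratio can coexist with a per-capita rate that favors the supposedly unpopular group:

```python
# Invented numbers, NOT from Rudder's data: raw counts can mislead
# when the underlying populations differ greatly in size.

users = {20: 100_000, 50: 5_000}   # hypothetical women on the site, by age
messages = {20: 100, 50: 9}        # hypothetical messages received

# The raw counts echo Rudder's "100 men vs. 9 men" framing...
ratio_raw = messages[20] / messages[50]

# ...but dividing by population size gives the rate per user.
rate = {age: messages[age] / users[age] for age in users}
ratio_per_capita = rate[20] / rate[50]

print(ratio_raw)         # ~11.1: 20-year-olds look far more sought after
print(ratio_per_capita)  # ~0.56: per user, the 50-year-olds do better
```

With these invented sizes the interpretation flips entirely, which is why the histogram shapes alone settle nothing.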

    Confounding

One of the worst examples of statistical mistakes is his experiment in turning off pictures. Rudder ignores the concept of confounders altogether here, though he is again miraculously aware of it in the next chapter, on race.

    To be more precise, Rudder talks about the experiment when OKCupid turned off pictures. Most people went away when this happened but certain people did not:

    dataclysm chart 2

Some of the people who stayed on went on a “blind date.” Those people, whom Rudder called the “intrepid few,” had a good time with people no matter how unattractive their dates were deemed to be based on OKCupid’s system of attractiveness. His conclusion: people are preselecting for attractiveness, which is actually unimportant to them.

    But here’s the thing, that’s only true for people who were willing to go on blind dates. What he’s done is select for people who are not superficial about looks, and then collect data that suggests they are not superficial about looks. That doesn’t mean that OKCupid users as a whole are not superficial about looks. The ones that are just got the hell out when the pictures went dark.
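This selection effect is easy to simulate. A sketch with invented parameters (a made-up “superficiality” score and threshold, not anything measured by OKCupid): if only the least looks-driven users stay for blind dates, the stayers will score as indifferent to looks even though the full population, on average, is not.

```python
import random

random.seed(0)

# Each hypothetical user gets a "superficiality" score in [0, 1]:
# how much partner looks drive their enjoyment of a date.
population = [random.random() for _ in range(100_000)]

# When pictures go dark, suppose only users below an invented
# threshold of superficiality stick around for a blind date.
stayers = [s for s in population if s < 0.2]

def avg(xs):
    return sum(xs) / len(xs)

# The whole population cares about looks a fair amount on average,
# but the self-selected blind-daters barely do, so their dates go
# well regardless of the partner's rated attractiveness.
print(avg(population))  # roughly 0.5
print(avg(stayers))     # roughly 0.1
```

Concluding from the stayers that “looks are unimportant to OKCupid users” is exactly the inference the simulation shows to be unwarranted: the measurement procedure filtered out everyone for whom it was false.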

    Race

    This brings me to the most interesting part of the book, where Rudder explores race. Again, it ends up being too blunt by far.

    Here’s the thing. Race is a big deal in this country, and racism is a heavy criticism to be firing at people, so you need to be careful, and that’s a good thing, because it’s important. The way Rudder throws it around is careless, and he risks rendering the term meaningless by not having a careful discussion. The frustrating part is that I think he actually has the data to have a very good discussion, but he just doesn’t make the case the way it’s written.

Rudder pulls together stats on how men of all races rate women of all races on an attractiveness scale of 1-5. It shows that non-black men find women of their own race attractive and find black women, in general, less attractive. Interesting, especially when you immediately follow that up with similar stats from other U.S. dating sites and – most importantly – with the fact that outside the U.S., we do not see this pattern. Unfortunately that crucial fact is buried at the end of the chapter, and instead we get this embarrassing quote right after the opening stats:

    And an unintentionally hilarious 84 percent of users answered this match question:

    Would you consider dating someone who has vocalized a strong negative bias toward a certain race of people?

    in the absolute negative (choosing “No” over “Yes” and “It depends”). In light of the previous data, that means 84 percent of people on OKCupid would not consider dating someone on OKCupid.

    Here Rudder just completely loses me. Am I “vocalizing” a strong negative bias towards black women if I am a white man who finds white women and Asian women hot?

Especially if you consider that, as consumers of social platforms and sites like OKCupid, we are trained to rank all the products we come across to ultimately get better offerings, it is a step too far for the detective on the other side of the camera to turn around and point fingers at us for doing what we’re told. Indeed, this sentence plunges Rudder’s narrative deep into creepy and provocative territory, and he never fully returns, nor does he seem to want to. Rudder seems to confuse provocation with thoughtfulness.

This is, again, a shame. The issues of what we are attracted to, what we can imagine doing, how we might imagine that will look to our wider audience, and how our culture informs those imaginings are all in play here, and a careful conversation could have drawn them out in a non-accusatory and much more useful way.


    _____

    Cathy O’Neil is a data scientist and mathematician with experience in academia and the online ad and finance industries. She is one of the most prominent and outspoken women working in data science today, and was one of the guiding voices behind Occupy Finance, a book produced by the Occupy Wall Street Alt Banking group. She is the author of “On Being a Data Skeptic” (Amazon Kindle, 2013), and co-author with Rachel Schutt of Doing Data Science: Straight Talk from the Frontline (O’Reilly, 2013). Her Weapons of Math Destruction is forthcoming from Random House. She appears on the weekly Slate Money podcast hosted by Felix Salmon. She maintains the widely-read mathbabe blog, on which this review first appeared.

    Back to the essay

  • All Hitherto Existing Social Media

    All Hitherto Existing Social Media

a review of Christian Fuchs, Social Media: A Critical Introduction (Sage, 2013)
    by Zachary Loeb
    ~
    Legion are the books and articles describing the social media that has come before. Yet the tracts focusing on Friendster, LiveJournal, or MySpace now appear as throwbacks, nostalgically immortalizing the internet that was and is now gone. On the cusp of the next great amoeba-like expansion of the internet (wearable technology and the “internet of things”) it is a challenging task to analyze social media as a concept while recognizing that the platforms being focused upon—regardless of how permanent they seem—may go the way of Friendster by the end of the month. Granted, social media (and the companies whose monikers act as convenient shorthand for it) is an important topic today. Those living in highly digitized societies can hardly avoid the tendrils of social media (even if a person does not use a particular platform it may still be tracking them), but this does not mean that any of us fully understand these platforms, let alone have a critical conception of them. It is into this confused and confusing territory that Christian Fuchs steps with his Social Media: A Critical Introduction.

It is a book ostensibly targeted at students, though when it comes to social media—as Fuchs makes clear—everybody has quite a bit to learn.

    By deploying an analysis couched in Marxist and Critical Theory, Fuchs aims not simply to describe social media as it appears today, but to consider its hidden functions and biases, and along the way to describe what social media could become. The goal of Fuchs’s book is to provide readers—the target audience is students, after all—with the critical tools and proper questions with which to approach social media. While Fuchs devotes much of the book to discussing specific platforms (Google, Facebook, Twitter, WikiLeaks, Wikipedia), these case studies are used to establish a larger theoretical framework which can be applied to social media beyond these examples. Affirming the continued usefulness of Marxist and Frankfurt School critiques, Fuchs defines the aim of his text as being “to engage with the different forms of sociality on the internet in the context of society” (6) and emphasizes that the “critical” questions to be asked are those that “are concerned with questions of power” (7).

Thus a critical analysis of social media demands a careful accounting of the power structures involved, not just in specific platforms but in the larger society as a whole. So though Fuchs regularly returns to the examples of the Arab Spring and the Occupy Movement, he emphasizes that the narratives that dub these “Twitter revolutions” often come from a rather non-critical and generally pro-capitalist perspective that fails to adequately embed uses of digital technology in their larger contexts.

    Social media is portrayed as an example, like other media, of “techno-social systems” (37) wherein the online platforms may receive the most attention but where the, oft-ignored, layer of material technologies is equally important. Social media, in Fuchs’s estimation, developed and expanded with the growth of “Web 2.0” and functions as part of the rebranding effort that revitalized (made safe for investments) the internet after the initial dot.com bubble. As Fuchs puts it, “the talk about novelty was aimed at attracting novel capital investments” (33). What makes social media a topic of such interest—and invested with so much hope and dread—is the degree to which social media users are considered as active creators instead of simply consumers of this content (Fuchs follows much recent scholarship and industry marketing in using the term “prosumers” to describe this phenomenon; the term originates from the 1970s business-friendly futurology of Alvin Toffler’s The Third Wave). Social media, in Fuchs’s description, represents a shift in the way that value is generated through labor, and as a result an alteration in the way that large capitalist firms appropriate surplus value from workers. The social media user is not laboring in a factory, but with every tap of the button they are performing work from which value (and profit) is skimmed.

    Without disavowing the hope that social media (and by extension the internet) has liberating potential, Fuchs emphasizes that such hopes often function as a way of hiding profit motives and capitalist ideologies. It is not that social media cannot potentially lead to “participatory democracy” but that “participatory culture” does not necessarily have much to do with democracy. Indeed, as Fuchs humorously notes: “participatory culture is a rather harmless concept mainly created by white boys with toys who love their toys” (58). This “love their toys” sentiment is part of the ideology that undergirds much of the optimism around social media—which allows for complex political occurrences (such as the Arab Spring) to be reduced to events that can be credited to software platforms.

    What Fuchs demonstrates at multiple junctures is the importance of recognizing that the usage of a given communication tool by a social movement does not mean that this tool brought about the movement: intersecting social, political and economic factors are the causes of social movements. In seeking to provide a “critical introduction” to social media, Fuchs rejects arguments that he sees as not suitably critical (including those of Henry Jenkins and Manuel Castells), arguments that at best have been insufficient and at worst have been advertisements masquerading as scholarship.

    Though the time people spend on social media is often portrayed as “fun” or “creative,” Fuchs recasts these tasks as work in order to demonstrate how that time is exploited by the owners of social media platforms. By clicking on links, writing comments, performing web searches, sending tweets, uploading videos, and posting on Facebook, social media users are performing unpaid labor that generates a product (in the form of information about users) that can then be sold to advertisers and data aggregators; this sale generates profits for the platform owner which do not accrue back to the original user. Though social media users are granted “free” access to a service, it is their labor on that platform that makes the platform have any value—Facebook and Twitter would not have a commodity to sell to advertisers if they did not have millions of users working for them for free. As Fuchs describes it, “the outsourcing of work to consumers is a general tendency of contemporary capitalism” (111).

screen shot of a Karl Marx Community Page on Facebook

    While miners of raw materials and workers in assembly plants are still brutally exploited—and this unseen exploitation forms a critical part of the economic base of computer technology—the exploitation of social media users is given a gloss of “fun” and “creativity.” Fuchs does not suggest that social media use is fully akin to working in a factory, but that users carry the factory with them at all times (a smart phone, for example) and are creating surplus value as long as they are interacting with social media. Instead of being a post-work utopia, Fuchs emphasizes that “the existence of the internet in its current dominant capitalist form is based on various forms of labour” (121) and the enrichment of internet firms is reliant upon the exploitation of those various forms of labor—central amongst these being the social media user.

Fuchs considers five specific platforms in detail so as to illustrate not simply the current state of affairs but also to point towards possible alternatives. Fuchs analyzes Google, Facebook, Twitter, WikiLeaks and Wikipedia as case studies of trends to encourage and trends of which to take wary notice. In his analysis of the three corporate platforms (Google, Facebook and Twitter) Fuchs emphasizes the ways in which these social media companies (and the moguls who run them) have become wealthy and powerful by extracting value from the labor of users and by subjecting users to constant surveillance. The corporate platforms give Fuchs the opportunity to consider various social media issues in sharper relief: labor and monopolization in the case of Google, surveillance and privacy in the case of Facebook, and the potential for an online public sphere in the case of Twitter. Despite his criticisms, Fuchs does not dismiss the value and utility of what these platforms offer, as is captured in his claim that “Google is at the same time the best and the worst thing that has ever happened on the internet” (147). The corporate platforms’ successes are owed at least partly to their delivering desirable functions to users. The corrective for which Fuchs argues is increased democratic control of these platforms—for the labor to be compensated and for privacy to pertain to individual humans instead of to businesses’ proprietary methods of control. Indeed, one cannot get far with a “participatory culture” unless there is a similarly robust “participatory democracy,” and part of Fuchs’s goal is to show that these are not at all the same.

    WikiLeaks and Wikipedia both serve as real examples that demonstrate the potential of an “alternative” internet for Fuchs. Though these Wiki platforms are not ideal they contain within themselves the seeds for their own adaptive development (“WikiLeaks is its own alternative”—232), and serve for Fuchs as proof that the internet can move in a direction akin to a “commons.” As Fuchs puts it, “the primary political task for concerned citizens should therefore be to resist the commodification of everything and to strive for democratizing the economy and the internet” (248), a goal he sees as at least partly realized in Wikipedia.

    While the outlines of the internet’s future may seem to have been written already, Fuchs’s book is an argument in favor of the view that the code can still be altered. A different future relies upon confronting the reality of the online world as it currently is and recognizing that the battles waged for control of the internet are proxy battles in the conflict between capitalism and an alternative approach. In the conclusion of the book Fuchs eloquently condenses his view and the argument that follows from it in two simple sentences: “A just society is a classless society. A just internet is a classless internet” (257). It is a sentiment likely to spark an invigorating discussion, be it in a classroom, at a kitchen table, or in a café.

    * * *

    While Social Media: A Critical Introduction is clearly intended as a text book (each chapter ends with a “recommended readings and exercises” section), it is written in an impassioned and engaging style that will appeal to anyone who would like to see a critical gaze turned towards social media. Fuchs structures his book so that his arguments will remain relevant even if some of the platforms about which he writes vanish. Even the chapters in which Fuchs focuses on a specific platform are filled with larger arguments that transcend that platform. Indeed one of the primary strengths of Social Media is that Fuchs skillfully uses the familiar examples of social media platforms as a way of introducing the reader to complex theories and thinkers (from Marx to Habermas).

    Whereas Fuchs accuses some other scholars of subtly hiding their ideological agendas, no such argument can be made regarding Fuchs himself. Social Media is a Marxist critique of the major online platforms—not simply because Fuchs deploys Marx (and other Marxist theorists) to construct his arguments, but because of his assumption that the desirable alternative for the internet is part and parcel of a desirable alternative to capitalism. Such a sentiment can be found at several points throughout the book, but is made particularly evident by lines such as these from the book’s conclusion: “There seem to be only two options today: (a) continuance and intensification of the 200-year-old barbarity of capitalism or (b) socialism” (259)—it is a rather stark choice. It is precisely due to Fuchs’s willingness to stake out, and stick to, such political positions that this text is so effective.

And yet, it is the very allegiance to such positions that also presents something of a problem. While much has been written of late—in the popular press as well as by scholars—regarding issues of privacy and surveillance, Fuchs’s arguments about the need to consider users as exploited workers will likely strike many readers as new, and thus worthwhile in their novelty if nothing else. Granted, to fully go along with Fuchs’s critique requires readers already to be in agreement with, or at least relatively sympathetic to, his political and ethical positions. This is particularly true as Fuchs excels at making an argument about media and technology, but devotes significantly fewer pages to ethical argumentation.

The lines (quoted earlier) “A just society is a classless society. A just internet is a classless internet” (257) serve as a provocation as much as a conclusion. For those who subscribe to a similar notion of “a just society,” Fuchs’s book will likely function as an important guide to thinking about the internet; however, to those whose vision of “a just society” is fundamentally different from his, Fuchs’s book may be less than convincing. Social Media does not present a complete argument about how one defines a “just society.” Indeed, the danger may be that Fuchs’s statements in praise of a “classless society” may lead some to dismiss his arguments regarding the way in which the internet has replicated a “class society.” Likewise, it is easy to imagine a retort being offered that the new platforms of “the sharing economy” represent the birth of this “classless society” (though it is easy to imagine Fuchs pointing out, as have other critics from the left, that the “sharing economy” is simply more advertising lingo being used to hide the same old capitalist relations). This represents something of a peculiar challenge when it comes to Social Media, as the political commitment of the book is simultaneously what makes it so effective and that which threatens the book’s potential political efficacy.

    Thus Social Media presents something of a conundrum: how effective is a critical introduction if its conclusion offers a heads-or-tails choice between the "barbarity of capitalism or…socialism"? Such a choice feels slightly as though Fuchs is begging the question. While it is curious that Fuchs does not draw upon critical theorists' writings about the culture industry, the main issues with Social Media seem to be reflections of this black-and-white choice. It is thus something of a missed chance that Fuchs does not draw upon some of the more serious critics of technology (such as Ellul or Mumford), whose hard-edged skepticism would nevertheless likely not accept Fuchs's Marxist orientation. Such thinkers might provide a very different perspective on the choice between "capitalism" and "socialism," arguing that "technique" or "the megamachine" can function quite effectively in either. Though Fuchs draws heavily upon thinkers in the Marxist tradition, another set of insights and critiques might have been gained by bringing in other critics of technology (Hans Jonas, Peter Kropotkin, Albert Borgmann), especially as some of these thinkers warned that Marxism may overvalue the technological as much as capitalism does. This is not to argue in favor of any of these particular theorists, but to suggest that Fuchs's claims would have been strengthened by devoting more time to the views of those who were critical of technology, of capitalism, and of Marxism. Social Media does an excellent job of confronting the ideological forces on its right flank; it could have benefited from at least acknowledging the critics to its left.

    Two other areas that remain somewhat troubling are Fuchs's treatment of Wiki platforms and his treatment of the materiality of technology. The optimism with which Fuchs approaches WikiLeaks and Wikipedia is understandable given the dourness with which he approaches the corporate platforms, and yet his hopes for them seem somewhat exaggerated. Fuchs claims that "Wikipedians are prototypical contemporary communists" (243), partially to suggest that many people are already engaged in commons-based online activities, yet it is an argument that he simultaneously undermines by admitting (importantly) that Wikipedia's editor base is hardly representative of all of the platform's users (it's back to the "white boys with toys who love their toys"); some have alleged, moreover, that putatively structureless models of organization like Wikipedia's actually encourage oligarchical forms of order. And this says nothing of the role that editing "bots" play on the platform, or of the degree to which Wikipedia is reliant upon corporate platforms (like Google) for promotion. Similarly, without ignoring its value, the example of WikiLeaks seems odd at a moment when the organization appears primarily engaged in a rearguard self-defense, while the leaks that have generated the most interest of late have been made to journalists at traditional news sources (Edward Snowden's leaks to Glenn Greenwald, who was writing for The Guardian when the leaks began).

    The further challenge—and this is one that Fuchs is not alone in contending with—is the trouble posed by the materiality of technology. An important aspect of Social Media is that Fuchs considers the often-unseen exploitation and repression upon which the internet relies: miners, laborers who build devices, those who recycle or live among toxic e-waste. Yet these workers seem to disappear from the arguments in the later part of the book, which in turn raises the following question: even if every social media platform were to be transformed into a non-profit, commons-based platform that resists surveillance, manipulation, and the exploitation of its users, is such a platform genuinely just if using it requires devices whose minerals were mined in warzones, which were assembled in sweatshops, and which will eventually go to an early grave in a toxic dump? What good is a "classless (digital) society" without a "classless world"? Perhaps the question of a "capitalist internet" is itself a distraction from the fact that the "capitalist internet" is what one gets from capitalist technology. Granted, given Fuchs's larger argument it may be fair to infer that he would portray "capitalist technology" as part of the problem. Yet if the statement "a just society is a classless society" is to be genuinely meaningful, then it must extend not just to those who use a social media platform but to all of those involved, from the miner to the manufacturer to the programmer to the user to the recycler. To pose the matter as a question: can there be participatory (digital) democracy that relies on serious exploitation of labor and resources?

    Social Media: A Critical Introduction provides exactly what its title promises—a critical introduction. Fuchs has constructed an engaging and interesting text that shows the continuing validity of older theories and skillfully demonstrates the way in which the seeming newness of the internet is simply a new face on an old system. While Fuchs's argument resolutely holds its position, it does so from a stance that one does not encounter often enough in debates around social media, and it will provide readers with a range of new questions with which to wrestle.

    It remains unclear in what ways social media will develop in the future, but Christian Fuchs’s book will be an important tool for interpreting these changes—even if what is in store is more “barbarity.”
    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, alternative forms of technology, and libraries as models of resistance. Using the moniker “The Luddbrarian” Loeb writes at the blog librarianshipwreck. He previously reviewed The People’s Platform by Astra Taylor for boundary2.org.
    Back to the essay

  • The Digital Turn

    The Digital Turn


    David Golumbia and The b2 Review look to digital culture

    ~
    I am pleased and honored to have been asked by the editors of boundary 2 to inaugurate a new section on digital culture for The b2 Review.

    The editors asked me to write a couple of sentences for the print journal to indicate the direction the new section will take, which I’ve included here:

    In the new section of the b2 Review, we’ll be bringing the same level of critical intelligence and insight—and some of the same voices—to the study of digital culture that boundary 2 has long brought to other areas of literary and cultural studies. Our main focus will be on scholarly books about digital technology and culture, but we will also branch out to articles, legal proceedings, videos, social media, digital humanities projects, and other emerging digital forms.

    While some might think it late in the day for boundary 2 to be joining the game of digital cultural criticism, I take the time lag between the moment at which thoroughgoing digitization became an unavoidable reality (sometime during the 1990s) and the first of the major literary studies journals dedicating part of itself to digital culture as indicative of a welcome and necessary caution with regard to the breathless enthusiasm of digital utopianism. As humanists, our primary intellectual commitment is to the deeply embedded texts, figures, and themes that constitute human culture, and precisely the intensity and thoroughgoing nature of the putative digital revolution must give someone pause—and if not humanists, who?

    Today, the most overt mark of the digital in humanities scholarship goes by the name Digital Humanities, but it remains notable how little interaction there is between the rest of literary studies and the work that comes under the DH rubric. That lack of interaction goes in both directions: DH scholars rarely cite or engage directly with the work the rest of us do, and the rest of literary studies rarely cites DH work, especially when DH is taken in its "narrow," most heavily quantitative form. The enterprises seem, at times, to be entirely at odds, and the rhetoric of the digital enthusiasts who populate DH does little to forestall this impression. Indeed, my own membership in the field of DH has long been a vexed question, despite my having been one of the first English professors in the country hired to a position for which the primary specialization was explicitly indicated as Digital Humanities (at the University of Virginia in 2003), and despite my being a humanist whose primary area is "digital studies." The very inability of scholars "to be" or "not to be" members of a field in which they work is one of the several ways that DH does not resemble other developments in the always-changing world of literary studies.


    Earlier this month, along with my colleague Jennifer Rhee, I organized a symposium called Critical Approaches to Digital Humanities, sponsored by the MATX PhD program at Virginia Commonwealth University, where Prof. Rhee and I teach in the English Department. One of the conference participants, Fiona Barnett of Duke and HASTAC, prepared a Storify version of the Twitter activity at the symposium that provides some sense of the proceedings. While it followed on the heels of, and was continuous with, panels such as the "Dark Side of the Digital Humanities" at the 2013 MLA Annual Convention, and several at recent American Studies Association Conventions, among others, this was to our knowledge the first standalone DH event that resembled other humanities conferences as they are conducted today. Issues of race, class, gender, sexuality, and ability were central; cultural representation and its relation (or lack of relation) to identity politics was of primary concern; close reading of texts both likely and unlikely figured prominently; and the presenters were diverse along several different axes. This arose not out of deliberate planning so much as organically from the speakers whose work spoke to the questions we wanted to raise.

    I mention the symposium to draw attention to what I think it represents, and what the launching of a digital culture section by boundary 2 also represents: the considered turning of the great ship of humanistic study toward the digital. For too long, enthusiasts alone have been able to stake out this territory and claim special and even exclusive insight with regard to the digital, following typical "hacker" or cyberlibertarian assertions about the irrelevance of any work that does not proceed directly out of knowledge of the computer. That such claims could even be taken seriously has, I think, produced a kind of stunned silence on the part of many humanists, because they are both so confrontational and so antithetical to the remit of the literary humanities, from comparative philology to the New Criticism to deconstruction, feminism, and queer theory. That the core of the literary humanities as represented by so august an institution as boundary 2 should turn its attention there not only validates digital enthusiasts' sense of the medium's importance, but should also provoke them toward a responsibility to the project and history of the humanities that, so far, many of them have treated with a disregard that at times might be characterized as cavalier.

    -David Golumbia

    Browse All Digital Studies Reviews