boundary 2

Tag: new media

  • Something About the Digital

    Something About the Digital

    By Alexander R. Galloway
    ~

    (This catalog essay was written in 2011 for the exhibition “Chaos as Usual,” curated by Hanne Mugaas at the Bergen Kunsthall in Norway. Artists in the exhibition included Philip Kwame Apagya, Ann Craven, Liz Deschenes, Thomas Julier [in collaboration with Cédric Eisenring and Kaspar Mueller], Olia Lialina and Dragan Espenschied, Takeshi Murata, Seth Price, and Antek Walczak.)

    There is something about the digital. Most people aren’t quite sure what it is. Or what they feel about it. But something.

    In 2001 Lev Manovich said it was a language. For Steven Shaviro, the issue is being connected. Others talk about “cyber” this and “cyber” that. Is the Internet about the search (John Battelle)? Or is it rather, even more primordially, about the information (James Gleick)? Whatever it is, something is afoot.

    What is this something? Given the times in which we live, it is ironic that this term is so rarely defined and even more rarely defined correctly. But the definition is simple: the digital means the one divides into two.

    Digital doesn’t mean machine. It doesn’t mean virtual reality. It doesn’t even mean the computer – there are analog computers after all, like grandfather clocks or slide rules. Digital means the digits: the fingers and toes. And since most of us have a discrete number of fingers and toes, the digital has come to mean, by extension, any mode of representation rooted in individually separate and distinct units. So the natural numbers (1, 2, 3, …) are aptly labeled “digital” because they are separate and distinct, but the arc of a bird in flight is not because it is smooth and continuous. A reel of celluloid film is correctly called “digital” because it contains distinct breaks between each frame, but the photographic frames themselves are not because they record continuously variable chromatic intensities.
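
    To make the distinction concrete, here is a minimal sketch in Python (an illustrative example added for clarity, not part of the original argument): a continuous curve becomes "digital" only once it is broken into separate, distinct samples.

    ```python
    import math

    # A continuous curve (the arc of a bird in flight) is defined at every instant.
    def arc(t: float) -> float:
        return math.sin(t)

    # Digitization in the sense used here: breaking the continuum into
    # individually separate and distinct units -- eight discrete samples,
    # each rounded to one decimal place.
    samples = [round(arc(t / 8 * math.pi), 1) for t in range(8)]
    print(samples)  # [0.0, 0.4, 0.7, 0.9, 1.0, 0.9, 0.7, 0.4]
    ```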

    We must stop believing the myth, then, about the digital future versus the analog past. For the digital died its first death in the continuous calculus of Newton and Leibniz, and the curvilinear revolution of the Baroque that came with it. And the digital has suffered a thousand blows since, from the swirling vortexes of nineteenth-century thermodynamics, to the chaos theory of recent decades. The switch from analog computing to digital computing in the middle twentieth century is but a single battle in the multi-millennial skirmish within western culture between the unary and the binary, proportion and distinction, curves and jumps, integration and division – in short, over when and how the one divides into two.

    What would it mean to say that a work of art divides into two? Or to put it another way, what would art look like if it began to meditate on the one dividing into two? I think this is the only way we can truly begin to think about “digital art.” And because of this we shall leave Photoshop, and iMovie, and the Internet and all the digital tools behind us, because interrogating them will not nearly begin to address these questions. Instead look to Ann Craven’s paintings. Or look to the delightful conversation sparked here between Philip Kwame Apagya and Liz Deschenes. Or look to the work of Thomas Julier, even to a piece of his not included in the show, “Architecture Reflecting in Architecture” (2010, made with Cédric Eisenring), which depicts a rectilinear cityscape reflected inside the mirror skins of skyscrapers, just like Saul Bass’s famous title sequence in North by Northwest (1959).

    Liz Deschenes, “Green Screen #4” (2001)

    All of these works deal with the question of twoness. But it is twoness only in a very particular sense. This is not the twoness of the doppelganger of the romantic period, or the twoness of the “split mind” of the schizophrenic, and neither is it the twoness of the self/other distinction that so forcefully animated culture and philosophy during the twentieth century, particularly in cultural anthropology and then later in poststructuralism. Rather we see here a twoness of the material, a digitization at the level of the aesthetic regime itself.

    Consider the call and response heard across the works featured here by Apagya and Deschenes. At the most superficial level, one might observe that these are works about superimposition, about compositing. Apagya’s photographs exploit one of the oldest and most useful tricks of picture making: superimpose one layer on top of another layer in order to produce a picture. Painters do this all the time of course, and very early on it became a mainstay of photographic technique (even if it often remained relegated to mere “trick” photography), evident in photomontage, spirit photography, and even the side-by-side compositing techniques of the carte de visite popularized by André-Adolphe-Eugène Disdéri in the 1850s. Recall too that the cinema has made productive use of superimposition, adopting the technique with great facility from the theater and its painted scrims and moving backdrops. (Perhaps the best illustration of this comes at the end of A Night at the Opera [1935], when Harpo Marx goes on a lunatic rampage through the flyloft during the opera’s performance, raising and lowering painted backdrops to great comic effect.) So the more “modern” cinematic techniques of, first, rear screen projection, and then later chromakey (known commonly as the “green screen” or “blue screen” effect), are but a reiteration of the much longer legacy of compositing in image making.
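
    The basic operation of chromakey is simple enough to sketch in a few lines of Python (a rough illustration assuming numpy and two same-sized RGB arrays; real compositors are far more sophisticated): wherever the foreground approaches the key color, the background layer shows through.

    ```python
    import numpy as np

    def chromakey(foreground: np.ndarray, background: np.ndarray,
                  key=(0, 255, 0), tolerance=80) -> np.ndarray:
        """Composite foreground over background wherever the foreground
        pixel is close to the key color (the 'green screen')."""
        distance = np.linalg.norm(foreground.astype(int) - np.array(key), axis=-1)
        mask = distance < tolerance          # True where the screen shows through
        composite = foreground.copy()
        composite[mask] = background[mask]   # superimpose: one layer atop another
        return composite
    ```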

    Deschenes’ “Green Screen #4” points to this broad aesthetic history, as it empties out the content of the image, forcing us to acknowledge the suppressed color itself – in this case green, but any color will work. Hence Deschenes gives us nothing but a pure background, a pure something.

    Allowed to curve gracefully off the wall onto the floor, the green color field resembles the “sweep wall” used commonly in portraiture or fashion photography whenever an artist wishes to erase the lines and shadows of the studio environment. “Green Screen #4” is thus the antithesis of what has remained for many years the signal art work about video chromakey, Peter Campus’ “Three Transitions” (1973). Whereas Campus attempted to draw attention to the visual and spatial paradoxes made possible by chromakey, and even in so doing was forced to hide the effect inside the jittery gaps between images, Deschenes by contrast feels no such anxiety, presenting us with the medium itself, minus any “content” necessary to fuel it, minus the powerful mise en abyme of the Campus video, and so too minus Campus’ mirthless autobiographical staging. If Campus ultimately resolves the relationship between images through a version of montage, Deschenes offers something more like a “divorced digitality” in which no two images are brought into relation at all, only the minimal substrate remains, without input or output.

    The sweep wall is evident too in Apagya’s images, only of a different sort, as the artifice of the various backgrounds – in a nod not so much to fantasy as to kitsch – both fuses with and separates from the foreground subject. Yet what might ultimately unite the works by Apagya and Deschenes is not so much the compositing technique, but a more general reference, albeit oblique but nevertheless crucial, to the fact that such techniques are today entirely quotidian, entirely usual. These are everyday folk techniques through and through. One needs only a web cam and simple software to perform chromakey compositing on a computer, just as one might go to the county fair and have one’s portrait superimposed on the body of a cartoon character.

    What I’m trying to stress here is that there is nothing particularly “technological” about digitality. All that is required is a division from one to two – and by extension from two to three and beyond to the multiple. This is why I see layering as so important, for it spotlights an internal separation within the image. Apagya’s settings are digital, therefore, simply by virtue of the fact that he addresses our eye toward two incompatible aesthetic zones existing within the image: the artifice of a painted backdrop, and the pose of a person in a portrait.

    Certainly the digital computer is “digital” by virtue of being binary, which is to say by virtue of encoding and processing numbers at the lowest levels using base-two mathematics. But that is only the most prosaic and obvious exhibit of its digitality. For the computer is “digital” too in its atomization of the universe into, for example, a million Facebook profiles, all equally separate and discrete. Or likewise “digital” too in the computer interface itself, which splits things irretrievably into cursor and content, window and file, or even, as we see commonly in video games, into heads-up display and playable world. The one divides into two.
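
    The point can be taken literally: base-two encoding is nothing but the repeated division of a number into two, as a short illustrative sketch shows.

    ```python
    # Base-two encoding: each binary digit records one division into two.
    def to_binary(n: int) -> str:
        bits = ""
        while n > 0:
            n, remainder = divmod(n, 2)  # the one divides into two
            bits = str(remainder) + bits
        return bits or "0"

    print(to_binary(100))  # '1100100'
    ```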

    So when clusters of repetition appear across Ann Craven’s paintings, or the iterative layers of the “copy” of the “reconstruction” in the video here by Thomas Julier and Cédric Eisenring, or the accumulations of images that proliferate in Olia Lialina and Dragan Espenschied’s “Comparative History of Classic Animated GIFs and Glitter Graphics” [2007] (a small snapshot of what they have assembled in their spectacular book from 2009 titled Digital Folklore), or elsewhere in works like Oliver Laric’s clipart videos (“787 Cliparts” [2006] and “2000 Cliparts” [2010]), we should not simply recall the famous meditations on copies and repetitions, from Walter Benjamin in 1936 to Gilles Deleuze in 1968, but also a larger backdrop that evokes the very cleavages emanating from western metaphysics itself from Plato onward. For this same metaphysics of division is always already a digital metaphysics as it forever differentiates between subject and object, Being and being, essence and instance, or original and repetition. It shouldn’t come as a surprise that we see here such vivid aesthetic meditations on that same cleavage, whether or not a computer was involved.

    Another perspective on the same question would be to think about appropriation. There is a common way of talking about Internet art that goes roughly as follows: the beginning of net art in the middle to late 1990s was mostly “modernist” in that it tended to reflect back on the possibilities of the new medium, building an aesthetic from the material affordances of code, screen, browser, and jpeg, just as modernists in painting or literature built their own aesthetic style from a reflection on the specific affordances of line, color, tone, or timbre; whereas the second phase of net art, coinciding with “Web 2.0” technologies like blogging and video sharing sites, is altogether more “postmodern” in that it tends to co-opt existing material into recombinant appropriations and remixes. If something like the “WebStalker” web browser or the Jodi.org homepage is emblematic of the first period, then John Michael Boling’s “Guitar Solo Threeway,” Brody Condon’s “Without Sun,” or the Nasty Nets web surfing club, now sadly defunct, are emblematic of the second period.

    I’m not entirely unsatisfied by such a periodization, even if it tends to confuse as many things as it clarifies – not entirely unsatisfied because it indicates that appropriation too is a technique of digitality. As Martin Heidegger signals, by way of his notoriously enigmatic concept Ereignis, western thought and culture were always a process in which a proper relationship of belonging is established in a world, and so too appropriation establishes new relationships of belonging between objects and their contexts, between artists and materials, and between viewers and works of art. (Such is the definition of appropriation after all: to establish a belonging.) This is what I mean when I say that appropriation is a technique of digitality: it calls out a distinction in the object from “where it was prior” to “where it is now,” simply by removing that object from one context of belonging and separating it out into another. That these two contexts are merely different – that something has changed – is evidence enough of the digitality of appropriation. Even when the act of appropriation does not reduplicate the object or rely on multiple sources, as with the artistic ready-made, it still inaugurates a “twoness” in the appropriated object, an asterisk appended to the art work denoting that something is different.

    Takeshi Murata, “Cyborg” (2011)

    Perhaps this is why Takeshi Murata continues his exploration of the multiplicities at the core of digital aesthetics by returning to that age-old format, the still life. Is not the still life itself a kind of appropriation, in that it brings together various objects into a relationship of belonging: fig and fowl in the Dutch masters, or here the various detritus of contemporary cyber culture, from cult films to iPhones?

    Because appropriation brings things together it must grapple with a fundamental question. Whatever is brought together must form a relation. These various things must sit side-by-side with each other. Hence one might speak of any grouping of objects in terms of their “parallel” nature, that is to say, in terms of the way in which they maintain their multiple identities in parallel.

    But let us dwell for a moment longer on these agglomerations of things, and in particular their “parallel” composition. By parallel I mean the way in which digital media tend to segregate and divide art into multiple, separate channels. These parallel channels may be quite manifest, as in the separate video feeds that make up the aforementioned “Guitar Solo Threeway,” or they may issue from the lowest levels of the medium, as when video compression codecs divide the moving image into small blocks of pixels that move and morph semi-autonomously within the frame. In fact I have found it useful to speak of this in terms of the “parallel image” in order to differentiate today’s media making from that of a century ago, which Friedrich Kittler and others have chosen to label “serial” after the serial sequences of the film strip, or the rat-ta-tat-tat of a typewriter.

    Thus films like Tatjana Marusic’s “The Memory of a Landscape” (2004) or Takeshi Murata’s “Monster Movie” (2005) are genuinely digital films, for they show parallelity in inscription. Each individual block in the video compression scheme has its own autonomy and is able to write to the screen in parallel with all the other blocks. These are quite literally, then, “multichannel” videos – we might even take a cue from online gaming circles and label them “massively multichannel” videos. They are multichannel not because they require multiple monitors, but because each individual block or “channel” within the image acts as an individual micro video feed. Each color block is its own channel. Thus, the video compression scheme illustrates, through metonymy, how pixel images work in general, and, as I suggest, it also illustrates the larger currents of digitality, for it shows that these images, in order to create “an” image, must first proliferate the division of sub-images, which themselves ultimately coalesce into something resembling a whole. In other words, in order to create a “one” they must first bifurcate the single image source into two or more separate images.
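
    A rough sketch (the block size and frame dimensions are merely illustrative) shows how such a scheme decomposes a single frame into hundreds of autonomous sub-images or “channels”:

    ```python
    import numpy as np

    BLOCK = 16  # many codecs divide each frame into 16x16 macroblocks

    def split_into_blocks(frame: np.ndarray) -> list:
        """Return the frame as a list of independent 16x16 'channels'."""
        h, w = frame.shape[:2]
        return [frame[y:y + BLOCK, x:x + BLOCK]
                for y in range(0, h, BLOCK)
                for x in range(0, w, BLOCK)]

    # A 640x480 frame decomposes into 40 * 30 = 1200 parallel sub-images,
    # each of which the codec moves and morphs semi-autonomously.
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    print(len(split_into_blocks(frame)))  # 1200
    ```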

    The digital image is thus a cellular and discrete image, consisting of separate channels multiplexed in tandem or triplicate, or greater: into nine, twelve, twenty-four, one hundred, or indeed into a massively parallel image of a virtually infinite visuality.

    For me this generates a more appealing explanation for why art and culture have, over the last several decades, developed a growing anxiety over copies, repetitions, simulations, appropriations, reenactments – you name it. It is common to attribute such anxiety to a generalized disenchantment permeating modern life: our culture has lost its aura and can no longer discern an original from a copy due to endless proliferations of simulation. Such an assessment is only partially correct. I say only partially because I am skeptical of the romantic nostalgia that often fuels such pronouncements. For who can demonstrate with certainty that the past carried with it a greater sense of aesthetic integrity, a greater unity in art? Yet the assessment begins to adopt a modicum of sense if we consider it from a different point of view, from the perspective of a generalized digitality. For if we define the digital as “the one dividing into two,” then it would be fitting to witness works of art that proliferate these same dualities and multiplicities. In other words, even if there was a “pure” aesthetic origin it was a digital origin to begin with. And thus one needn’t fret over it having infected our so-called contemporary sensibilities.

    Instead it is important not to be blinded by the technology, but rather to determine that, within a generalized digitality, there must be some kind of differential at play. There must be something different, and without such a differential it is impossible to say that something is something (rather than something else, or indeed rather than nothing). The one must divide into something else. Nothing less and nothing more is required, only a generic difference. And this is our first insight into the “something” of the digital.

    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2004), Gaming: Essays on Algorithmic Culture (University of Minnesota, 2006), The Interface Effect (Polity, 2012), and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here in 2014. Galloway has recently been writing brief notes on media and digital culture and theory at his blog, on which this post first appeared.

  • Trickster Makes This Web: The Ambiguous Politics of Anonymous

    Trickster Makes This Web: The Ambiguous Politics of Anonymous

    Hacker, Hoaxer, Whistleblower, Spy
    a review of Gabriella Coleman, Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous (Verso, 2014)
    by Gavin Mueller
    ~

    Gabriella Coleman’s Hacker, Hoaxer, Whistleblower, Spy (HHWS) tackles a difficult and pressing subject: the amorphous hacker organization Anonymous. The book is not a strictly academic work. Rather, it unfolds as a rather lively history of a subculture of geeks, peppered with snippets of cultural theory and autobiographical portions. As someone interested in a more sustained theoretical exposition of Anonymous’s organizing and politics, I was a bit disappointed, though Coleman has opted for a more readable style. In fact, this is the book’s best asset. However, while containing a number of insights of interest to the general reader, the book ultimately falters as an assessment of Anonymous’s political orientation, or the state of hacker politics in general.

    Coleman begins with a discussion of online trolling, a common antagonistic online cultural practice; many Anons cut their troll teeth at the notorious 4chan message board. Trolling aims to create “lulz,” a kind of digital schadenfreude produced by pranks, insults and misrepresentations. According to Coleman, the lulz are “a form of cultural differentiation and a tool or weapon used to attack, humiliate, and defame” rooted in the use of “inside jokes” of those steeped in the codes of Internet culture (32). Coleman argues that the lulz has a deeper significance: they “puncture the consensus around our politics and ethics, our social lives and our aesthetic sensibilities.” But trolling can be better understood through an offline frame of reference: hazing. Trolling is a means by which geeks have historically policed the boundaries of the subcultural corners of the Internet. If you can survive the epithets and obscene pictures, you might be able to hang. That trolling often takes the form of misogynist, racist and homophobic language is unsurprising: early Net culture was predominantly white and male, a demographic fact which overdetermines the shape of resentment towards “newbies” (or in 4chan’s unapologetically offensive argot, “newfags”). The lulz is joy that builds community, but almost always at someone else’s expense.

    Coleman, drawing upon her background as an anthropologist, conceptualizes the troll as an instantiation of the trickster archetype which recurs throughout mythology and folklore. Tricksters, she argues, like trolls and Anonymous, are liminal figures who defy norms and revel in causing chaos. This kind of application of theory is a common technique in cultural studies, where seemingly apolitical or even anti-social transgressions, like punk rock or skateboarding, can be politicized with a dash of Bakhtin or de Certeau. Here it creates difficulties. There is one major difference between the spider spirit Anansi and Coleman’s main informant on trolling, the white supremacist hacker weev: Anansi is fictional, while weev is a real person who writes op-eds for neo-Nazi websites. The trickster archetype, a concept crafted for comparative structural analysis of mythology, does little to explain the actually existing social practice of trolling. Instead it renders it more complicated, ambiguous, and uncertain. These difficulties are compounded as the analysis moves to Anonymous. Anonymous doesn’t merely enact a submerged politics via style or symbols. It engages in explicitly political projects, complete with manifestos, though Coleman continues to return to transgression as one of its salient features.

    The trolls of 4chan, from which Anonymous emerged, developed a culture of compulsory anonymity. In part, this was technological: unlike other message boards and social media, posting on 4chan requires no lasting profile, no consistent presence. But there was also a cultural element to this. Identifying oneself is strongly discouraged in the community. Fittingly, its major trolling weapon is doxing: revealing personal information to facilitate further harassment offline (prank calls, death threats, embarrassment in front of employers). As Whitney Phillips argues, online trolling often acts as a kind of media critique: by enforcing anonymity and rejecting fame or notoriety, Anons oppose the now-dominant dynamics of social media and personal branding which have colonized much of the web, and threaten their cherished subcultural practices, which are more adequately enshrined in formats such as image boards and IRC. In this way, Anonymous deploys technological means to thwart the dominant social practices of technology, a kind of wired Luddism. Such practices proliferate in the communities of the computer underground, which has been steeped in an omnipresent prelapsarian nostalgia since at least the “eternal September” of the early 1990s.

    HHWS’s overarching narrative is the emergence of Anonymous out of the cesspits of 4chan and into political consciousness: trolling for justice instead of lulz. The compulsory anonymity of 4chan, in part, determined Anonymous’s organizational form: Anonymous lacks formal membership, instead forming from entirely ad hoc affiliations. The brand itself can be selectively deployed or disavowed, leading to much argumentation and confusion. Coleman provides an insider perspective on how actions are launched: there is debate, occasionally a rough consensus, and then activity, though several times individuals opt to begin an action, dragging along a number of other participants with varying degrees of reluctance. Tactics are formalized in an experimental, impromptu way. In this, I recognized the way actions formed in the Occupy encampments. Anonymous, as Coleman shows, was an early Occupy Wall Street booster, and her analysis highlights the connection between the Occupy form and the networked forms of sociality exemplified by Anonymous. After reading Coleman’s account, I am much more convinced of Anonymous’s importance to the movement. Likewise, many criticisms of Occupy could also be levelled at Anonymous; Coleman cites Jo Freeman’s “The Tyranny of Structurelessness” as one candidate.

    If Anonymous can be said to have a coherent political vision, it is one rooted in civil liberties, particularly freedom of speech and opposition to censorship efforts. Indeed, Coleman earns the trust of several hackers by her affiliation with the Electronic Frontier Foundation, nominally the digital equivalent to the ACLU (though some object to this parallel, due in part to EFF’s strong ties to industry). Geek politics, from Anonymous to Wikileaks to the Pirate Bay, are a weaponized form of the mantra “information wants to be free.” Anonymous’s causes seem to fit these concerns perfectly: Scientology’s litigious means of protecting its secrets provoked its wrath, as did the voluntary withdrawal of services to Wikileaks by PayPal and Mastercard, and the Bay Area Rapid Transit police’s blacking out of cell phone signals to scuttle a protest.

    I’ve referred to Anonymous as geeks rather than hackers deliberately. Hackers — skilled individuals who can break into protected systems — participate in Anonymous, but many of the Anons pulled from 4chan are merely pranksters with above-average knowledge of the Internet and computing. This gets the organization in quite a bit of trouble when it engages in the political tactic of most interest to Coleman, the distributed denial of service (DDoS) attack. A DDoS floods a website with requests, overwhelming its servers. This technique has captured the imaginations of a number of scholars, including Coleman, with its resemblance to offline direct action like pickets and occupations. However, the AnonOps organizers falsely claimed that their DDoS app, the Low-Orbit Ion Cannon, ensured user anonymity, leading to a number of Anons facing serious criminal charges. Coleman curiously places the blame for this startling breach of operational security on journalists writing about AnonOps, rather than on the organizers themselves. Furthermore, many DDoS attacks, including those launched by Anonymous, have relied on botnets, which draw power from hundreds of hijacked computers and bear little resemblance to any kind of democratic initiative. Of course, this isn’t to say that the harsh punishments meted out to Anons under the auspices of the Computer Fraud and Abuse Act are warranted, but that political tactics must be subjected to scrutiny.

    Coleman argues that Anonymous outgrew its narrow civil libertarian agenda with its involvement in the Arab Spring: “No longer was the group bound to Internet-y issues like censorship and file-sharing” (148). However, by her own account, it is opposition to censorship which truly animates the group. The #OpTunisia manifesto (Anonymous names its actions with the prefix “Op,” for operations, along with the ubiquitous Twitter-based hashtag) states plainly, “Any organization involved in censorship will be targeted” (ibid). Anons were especially animated by the complete shut-off of the Internet in Tunisia and Egypt, actions which shattered the notion of the Internet as a space controlled by geeks, not governments. Anonymous operations launched against corporations did not oppose capitalist exploitation but fought corporate restrictions on online conduct. These are laudable goals, but also limited ones, and are often compatible with Silicon Valley companies, as illustrated by the Google-friendly anti-SOPA/PIPA protests.

    Coleman is eager to distance Anonymous from the libertarian philosophies rife in geek and hacker circles, but its politics are rarely incompatible with such a perspective. The most recent Guy Fawkes Day protest I witnessed in Washington, D.C., full of mask-wearing Anons, displayed a number of slogans emerging from the Ron Paul camp, “End the Fed” prominent among them. There is no accounting for this in HHWS. It is clear that political differences among Anons exist, and that any analysis must be nuanced. But Coleman’s description of this nuance ultimately doesn’t delineate the political positions within the group and how they coalesce, opting to elide these differences in favor of a more protean focus on “transgression.” In this way, she is able to provide a conceptual coherence for Anonymous, albeit at the expense of a detailed examination of the actual politics of its members. In the final analysis, “Anonymous became a generalized symbol for dissent, a medium to channel deep disenchantment… basically, with anything” (399).

    As political concerns overtake the lulz, Anonymous wavers as smaller militant hacker crews LulzSec and AntiSec take the fore, doxing white hat security executives, leaking documents, and defacing websites. This frustrates Coleman: “Anonymous had been exciting to me for a specific reason: it was the largest and most populist disruptive grassroots movement the Internet had, up to that time, fomented. But it felt, suddenly like AnonOps/Anonymous was slipping into a more familiar state of hacker-vanguardism” (302). Yet it is at this moment that Coleman offers a revealing account of hacker ideology: its alignment with the philosophy of Friedrich Nietzsche. From 4chan’s trolls scoffing at morality and decency, to hackers disregarding technical and legal restraints to accessing information, to the collective’s general rejection of any standard form of accountability, Anonymous truly seems to posit itself as beyond good and evil. Coleman herself confesses to being “overtly romantic” as she supplies alibis for the group’s moral and strategic failures (it is, after all, incredibly difficult for an ethnographer to criticize her informants). But Nietzsche was a profoundly undemocratic thinker, whose avowed elitism should cast more of a disturbing shadow over the progressive potentials behind hacker groups than it does for Coleman, who embraces the ability of hackers to “cast off — at least momentarily — the shackles of normativity and attain greatness” (275). Coleman’s previous work on free software programmers convincingly makes the case for a Nietzschean current running through hacker culture; I am considerably more skeptical than she is about the liberal democratic viewpoint this engenders.

    Ultimately, Coleman concludes that Anonymous cannot work as a substitute for existing organizations, but that its tactics should be taken up by other political formations: “The urgent question is how to promote cross-pollination” between Anonymous and more formalized structures (374). This may be warranted, but there needs to be a fuller accounting of the drawbacks to Anonymous. Because anyone can fly its flag, and because its actions are guided by talented and charismatic individuals working in secret, Anonymous is ripe for infiltration. Historically, hackers have proven to be easy for law enforcement and corporations to co-opt, not least because of the ferocious rivalries amongst hackers themselves. Tactics are also ambiguous. A DDoS can be used by anti-corporate activists, or by corporations against their rivals and enemies. Document dumps can ruin a diplomatic initiative, or a woman’s social life. Public square occupations can be used to advocate for democracy, or as a platform for anti-democratic coups. Currently, a lot of the same geek energy behind Anonymous has been devoted to the misogynist vendetta GamerGate (in a Reddit AMA, Coleman adopted a diplomatic tone, referring to GamerGate as “a damn Gordian knot”). Without a steady sense of Anonymous’s actual political commitments, outside of free speech, it is difficult to do much more than marvel at the novelty of their media presence (which wears thinner with each overwrought communique). With Hacker, Hoaxer, Whistleblower, Spy, Coleman has offered a readable account of recent hacker history, but I remain unconvinced of Anonymous’s political potential.

    _____

    Gavin Mueller (@gavinsaywhat) is a PhD candidate in cultural studies at George Mason University, and an editor at Jacobin and Viewpoint Magazine.

  • Network Pessimism

    Network Pessimism

    By Alexander R. Galloway
    ~

    I’ve been thinking a lot about pessimism recently. Eugene Thacker has been deep in this material for some time already. In fact he has a new, lengthy manuscript on pessimism called Infinite Resignation, which is a bit of a departure from his other books in terms of tone and structure. I’ve read it and it’s excellent. Definitely “the worst” he’s ever written! Following the style of other treatises from the history of philosophical pessimism–Leopardi, Cioran, Schopenhauer, Kierkegaard, and others–the bulk of the book is written in short aphorisms. It’s very poetic language, and some sections are driven by his own memories and meditations, all in an attempt to plumb the deepest, darkest corners of the worst the universe has to offer.

    Meanwhile, the worst can’t stay hidden. Pessimism has made it to prime time, to NPR, and even right-wing media. Despite all this attention, Eugene seems to have little interest in showing his manuscript to publishers. A true pessimist! Not to worry, I’m sure the book will see the light of day eventually. Or should I say dead of night? When it does, the book is sure to sadden, discourage, and generally worsen the lives of Thacker fans everywhere.

    Interestingly, pessimism also appears in a number of other authors and fields. I’m thinking, for instance, of critical race theory and the concept of Afro-pessimism. The work of Fred Moten and Frank B. Wilderson, III is particularly interesting in that regard. Likewise queer theory has often wrestled with pessimism, be it the “no future” debates around reproductive futurity, or what Anna Conlan has simply labeled “homo-pessimism,” that is, the way in which the “persistent association of homosexuality with death and oppression contributes to a negative stereotype of LGBTQ lives as unhappy and unhealthy.”[1]

    In his review of my new book, Andrew Culp made reference to how some of this material has influenced me. I’ll be posting more on Moten and these other themes in the future, but let me here describe, in very general terms, how the concept of pessimism might apply to contemporary digital media.

    *

    A previous post was devoted to the reticular fallacy, defined as the false assumption that the erosion of hierarchical organization leads to an erosion of organization as such. Here I’d like to address the related question of reticular pessimism or, more simply, network pessimism.

    Network pessimism relies on two basic assumptions: (1) “everything is a network”; (2) “the best response to networks is more networks.”

    Who says everything is a network? Everyone, it seems. In philosophy, Bruno Latour: ontology is a network. In literary studies, Franco Moretti: Hamlet is a network. In the military, Donald Rumsfeld: the battlefield is a network. (But so too our enemies are networks: the terror network.) Art, architecture, managerial literature, computer science, neuroscience, and many other fields–all have shifted prominently in recent years toward a network model. Most important, however, is the contemporary economy and the mode of production. Today’s most advanced companies are essentially network companies. Google monetizes the shape of networks (in part via clustering algorithms). Facebook has rewritten subjectivity and social interaction along the lines of canalized and discretized network services. The list goes on and on. Thus I characterize the first assumption — “everything is a network” — as a kind of network fundamentalism. It claims that whatever exists in the world appears naturally in the form of a system, an ecology, an assemblage, in short, as a network.

    Ladies and gentlemen, behold the good news, postmodernism is definitively over! We have a new grand récit. As metanarrative, the network will guide us into a new Dark Age.

    If the first assumption expresses a positive dogma or creed, the second is more negative or nihilistic. The second assumption — that the best response to networks is more networks — is also evident in all manner of social and political life today. Eugene and I described this phenomenon at greater length in The Exploit, but consider a few different examples from contemporary debates… In military theory: network-centric warfare is the best response to terror networks. In Deleuzian philosophy: the rhizome is the best response to schizophrenic multiplicity. In autonomist Marxism: the multitude is the best response to empire. In the environmental movement: ecologies and systems are the best response to the systemic colonization of nature. In computer science: distributed architectures are the best response to bottlenecks in connectivity. In economics: heterogeneous “economies of scope” are the best response to the distributed nature of the “long tail.”

    To be sure, there are many sites today where networks still confront power centers. The point is not to deny the continuing existence of massified, centralized sovereignty. But at the same time it’s important to contextualize such confrontations within a larger ideological structure, one that inoculates the network form and recasts it as the exclusive site of liberation, deviation, political maturation, complex thinking, and indeed the very living of life itself.

    Why label this a pessimism? For the same reasons that queer theory and critical race theory are grappling with pessimism: Is alterity a death sentence? Is this as good as it gets? Is this all there is? Can we imagine a parallel universe different from this one? (Although the pro-pessimism camp would likely state it in the reverse: We must destabilize and annihilate all normative descriptions of the “good.” This world isn’t good, and hooray for that!)

    So what’s the problem? Why should we be concerned about network pessimism? Let me state clearly so there’s no misunderstanding: pessimism isn’t the problem here. Likewise, networks are not the problem. (Let no one label me “anti network” or “anti pessimism” — in fact I’m not even sure what either of those positions would mean.) The issue, as I see it, is that network pessimism deploys and sustains a specific dogma, confining both networks and pessimism to a single, narrow ideological position. It’s this narrow-mindedness that should be questioned.

    Specifically I can see three basic problems with network pessimism: the problem of presentism, the problem of ideology, and the problem of the event.

    The problem of presentism refers to the way in which networks and network thinking are, by design, allergic to historicization. This exhibits itself in a number of different ways. Networks arrive on the scene at the proverbial “end of history” (and they do so precisely because they help end this history). Ecological and systems-oriented thinking, while admittedly always temporal by nature, gained popularity as a kind of solution to the problems of diachrony. Space and landscape take the place of time and history. As Fredric Jameson has noted, the “spatial turn” of postmodernity goes hand in hand with a denigration of the “temporal moment” of previous intellectual movements.

    Fritz Kahn, “Der Mensch als Industriepalast (Man as Industrial Palace)” (Stuttgart, 1926). Image source: NIH

    From Hegel’s history to Luhmann’s systems. From Einstein’s general relativity to Riemann’s complex surfaces. From phenomenology to assemblage theory. From the “time image” of cinema to the “database image” of the internet. From the old mantra always historicize to the new mantra always connect.

    During the age of clockwork, the universe was thought to be a huge mechanism, with the heavens rotating according to the music of the spheres. When the steam engine was the source of newfound power, the world suddenly became a dynamo of untold thermodynamic force. After full-fledged industrialization, the body became a factory. Technologies and infrastructures are seductive metaphors. So it’s no surprise (and no coincidence) that today, in the age of the network, a new template imprints itself on everything in sight. In other words, the assumption “everything is a network” gradually falls apart into a kind of tautology of presentism. “Everything right now is a network…because everything right now has been already defined as a network.”

    This leads to the problem of ideology. Again we’re faced with an existential challenge, because network technologies were largely invented as a non-ideological or extra-ideological structure. When writing Protocol I interviewed some of the computer scientists responsible for the basic internet protocols and most of them reported that they “have no ideology” when designing networks, that they are merely interested in “code that works” and “systems that are efficient and robust.” In sociology and philosophy of science, figures like Bruno Latour routinely describe their work as “post-critical,” merely focused on the direct mechanisms of network organization. Hence ideology as a problem to be forgotten or subsumed: networks are specifically conceived and designed as those things that are both non-ideological in their conception (we just want to “get things done”) and post-ideological in their architecture (in that they acknowledge and co-opt the very terms of previous ideological debates, things like heterogeneity, difference, agency, and subject formation).

    The problem of the event indicates a crisis for the very concept of events themselves. Here the work of Alain Badiou is invaluable. Network architectures are the perfect instantiation of what Badiou derisively labels “democratic materialism,” that is, a world in which there are “only bodies and languages.” In Badiou’s terms, if networks are the natural state of the situation and there is no way to deviate from nature, then there is no event, and hence no possibility for truth. Networks appear, then, as the consummate “being without event.”

    What could be worse? If networks are designed to accommodate massive levels of contingency — as with the famous Robustness Principle — then they are also exceptionally adept at warding off “uncontrollable” change wherever it might arise. If everything is a network, then there’s no escape, there’s no possibility for the event.
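
    The Robustness Principle (“be conservative in what you send, liberal in what you accept”) can be illustrated with a toy parser in Python (a hypothetical example, not any particular protocol implementation): deviant input is not refused but quietly normalized back into the expected form, which is exactly how networks absorb contingency.

    ```python
    def parse_header(line: str) -> tuple:
        """Liberal in what it accepts: tolerates stray whitespace, case
        variation, and missing values rather than rejecting the message."""
        name, _, value = line.partition(":")
        return name.strip().lower(), value.strip()

    # Deviant inputs are silently normalized rather than refused.
    print(parse_header("Content-Type : text/html"))  # ('content-type', 'text/html')
    print(parse_header("X-BROKEN"))                  # ('x-broken', '')
    ```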

    Jameson writes as much in The Seeds of Time when he says that it is easier to imagine the end of the earth and the end of nature than it is to imagine the ends of capitalism. Network pessimism, in other words, is really a kind of network defeatism in that it makes networks the alpha and omega of our world. It’s easier to imagine the end of that world than it is to discard the network metaphor and imagine a kind of non-world in which networks are no longer dominant.

    In sum, we shouldn’t give in to network pessimism. We shouldn’t subscribe to the strong claim that everything is a network. (Nor should we subscribe to the softer claim, that networks are merely the most common, popular, or natural architecture for today’s world.) Further, we shouldn’t think that networks are the best response to networks. Instead we must ask the hard questions. What is the political fate of networks? Did heterogeneity and systematicity survive the Twentieth Century? If so, at what cost? What would a non-net look like? And does thinking have a future without the network as guide?

    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2004), Gaming: Essays on Algorithmic Culture (University of Minnesota, 2006), The Interface Effect (Polity, 2012), and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here in 2014. Galloway has recently been writing brief notes on media and digital culture and theory at his blog, on which this post first appeared.

    _____

    Notes

    [1] Anna Conlan, “Representing Possibility: Mourning, Memorial, and Queer Museology,” in Gender, Sexuality and Museums, ed. Amy K. Levin (London: Routledge, 2010), 253-263.

  • Flat Theory

    Flat Theory

    By David M. Berry
    ~

    The world is flat.[1] Or perhaps better, the world is increasingly “layers.” Certainly the augmediated imaginaries of the major technology companies are now structured around a post-retina vision of mediation made possible and informed by the digital transformations ushered in by mobile technologies – whether smartphones, wearables, beacons or nearables – an internet of places and things. These imaginaries provide a sense of place, as well as sense for management, of the complex real-time streams of information and data broken into shards and fragments of narrative, visual culture, social media and messaging. Turned into software, they reorder and re-present information, decisions and judgment, amplifying the sense and senses of (neoliberal) individuality whilst reconfiguring what it means to be a node in the network of post digital capitalism. These new imaginaries serve as abstractions of abstractions, ideologies of ideologies, a prosthesis to create a sense of coherence and intelligibility in highly particulate computational capitalism (Berry 2014). To explore the experimentation of the programming industries in relation to this it is useful to explore the design thinking and material abstractions that are becoming hegemonic at the level of the interface.

    Two new competing computational interface paradigms are now deployed in the latest versions of Apple’s and Google’s operating systems, but more notably as regulatory structures to guide the design and strategy related to corporate policy. The first is “flat design”, which has been introduced by Apple through iOS 8 and OS X Yosemite as a refresh of the aging operating systems’ human computer interface guidelines, essentially stripping the operating system of historical baggage related to techniques of design that disguised the limitations of a previous generation of technology, in terms of both screen and processor capacity. It is important to note, however, that Apple avoids talking about “flat design” as its design methodology, preferring to talk through its platforms’ specificity, that is, about iOS’ design or OS X’s design. The second is “material design”, which was introduced by Google into its Android L, now Lollipop, operating system and which also sought to bring some sense of coherence to a multiplicity of Android devices, interfaces, OEMs and design strategies. More generally, “flat design” is “the term given to the style of design in which elements lose any type of stylistic characters that make them appear as though they lift off the page” (Turner 2014). As Apple argues, one should “reconsider visual indicators of physicality and realism” and think of the user interface as “play[ing] a supporting role”, that is, techniques of mediation through the user interface should aim to provide a new kind of computational realism that presents “content” as ontologically prior to, or separate from, its container in the interface (Apple 2014). This is in contrast to “rich design,” which has been described as “adding design ornaments such as bevels, reflections, drop shadows, and gradients” (Turner 2014).

    I want to explore these two main paradigms – and to a lesser extent the flat-design methodology represented in Windows 7 and the, since renamed, Metro interface – through a notion of a comprehensive attempt by both Apple and Google to produce a rich and diverse umwelt, or ecology, linked through what Apple calls “aesthetic integrity” (Apple 2014). This is both a response to their growing landscape of devices, platforms, systems, apps and policies, and an attempt to provide some sense of operational strategy in relation to computational imaginaries. Essentially, both approaches share an axiomatic approach to conceptualizing the building of a system of thought, in other words, a primitivist predisposition which draws from both a neo-Euclidean model of geons (for Apple) and a notion of intrinsic value or neo-materialist formulations of essential characteristics (for Google). That is, they encapsulate a version of what I am calling here flat theory. Both of these companies are trying to deal with the problematic of multiplicities in computation, and the requirement that multiple data streams, notifications and practices have to be combined and managed within the limited geography of the screen. In other words, both approaches attempt to create what we might call aggregate interfaces by combining techniques of layout, montage and collage onto computational surfaces (Berry 2014: 70).

    The “flat turn” has not happened in a vacuum, however, and is the result of a new generation of computational hardware, smart silicon design and retina screen technologies. This was driven in large part by the mobile device revolution which has not only transformed the taken-for-granted assumptions of historical computer interface design paradigms (e.g. WIMP) but also the subject position of the user, particularly structured through the Xerox/Apple notion of single-click functional design of the interface. Indeed, one of the striking features of the new paradigm of flat design is that it is a design philosophy about multiplicity and multi-event. The flat turn is therefore about modulation, not about enclosure as such; indeed it is a truly processual form that constantly shifts and changes, and in many ways acts as a signpost for the future interfaces of real-time algorithmic and adaptive surfaces and experiences. The structure of control for the flat design interfaces follows that of the control society: “short-term and [with] rapid rates of turnover, but also continuous and without limit” (Deleuze 1992). To paraphrase Deleuze: Humans are no longer in enclosures, certainly, but everywhere humans are in layers.

    Apple uses a series of concepts to link its notion of flat design, which include aesthetic integrity, consistency, direct manipulation, feedback, metaphors, and user control (Apple 2014). The haptic experience of this new flat user interface has been described as building on the experience of “touching glass” to develop the “first post-Retina (Display) UI (user interface)” (Cava 2013). This is the notion of layered transparency, or better, layers of glass upon which the interface elements are painted through a logical internal structure of Z-axis layers. This laminate structure enables meaning to be conveyed through the organization of the Z-axis, both in terms of content, but also to place it within a process or the user interface system itself.
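
    The laminate logic can be approximated with a toy painter’s-algorithm sketch in Python (the colors and alpha values are invented for illustration, not Apple’s actual rendering pipeline): layers are composited back-to-front along the Z-axis, each pane of “glass” letting the layers beneath show through.

    ```python
    def composite(layers):
        """Paint translucent 'sheets of glass' in z-order: each layer's
        alpha determines how much of the layers beneath shows through."""
        r = g = b = 0.0
        for (lr, lg, lb), alpha in layers:  # back-to-front along the Z-axis
            r = (1 - alpha) * r + alpha * lr
            g = (1 - alpha) * g + alpha * lg
            b = (1 - alpha) * b + alpha * lb
        return (round(r, 2), round(g, 2), round(b, 2))

    # An opaque background beneath two translucent panes:
    print(composite([((1.0, 1.0, 1.0), 1.0),    # white wallpaper
                     ((0.0, 0.5, 1.0), 0.3),    # tinted glass pane
                     ((1.0, 0.0, 0.0), 0.2)]))  # a pane above it -> (0.76, 0.68, 0.8)
    ```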

    Google, similarly, has reorganized its computational imaginary around a flattened layered paradigm of representation through the notion of material design. Matias Duarte, Google’s Vice President of Design and a Chilean computer interface designer, declared that this approach uses the notion that it “is a sufficiently advanced form of paper as to be indistinguishable from magic” (Bohn 2014). But magic which has constraints and affordances built into it, “if there were no constraints, it’s not design — it’s art” Google claims (see Interactive Material Design) (Bohn 2014). Indeed, Google argues that the “material metaphor is the unifying theory of a rationalized space and a system of motion”, further arguing:

    The fundamentals of light, surface, and movement are key to conveying how objects move, interact, and exist in space and in relation to each other. Realistic lighting shows seams, divides space, and indicates moving parts… Motion respects and reinforces the user as the prime mover… [and together] They create hierarchy, meaning, and focus (Google 2014).

    This notion of materiality is a weird materiality inasmuch as Google “steadfastly refuse to name the new fictional material, a decision that simultaneously gives them more flexibility and adds a level of metaphysical mysticism to the substance. That’s also important because while this material follows some physical rules, it doesn’t create the ‘trap’ of skeuomorphism. The material isn’t a one-to-one imitation of physical paper, but instead it’s ‘magical’” (Bohn 2014). Google emphasises this connection, arguing that “in material design, every pixel drawn by an application resides on a sheet of paper. Paper has a flat background color and can be sized to serve a variety of purposes. A typical layout is composed of multiple sheets of paper” (Google Layout, 2014). The stress on material affordances – paper for Google and glass for Apple – is crucial to understanding their respective stances in relation to flat design philosophy.[2]

    • Glass (Apple): Translucency, transparency, opaqueness, limpidity and pellucidity.
    • Paper (Google): Opaque, cards, slides, surfaces, tangibility, texture, lighted, casting shadows.
    Paradigmatic Substances for Materiality

    In contrast to the layers of glass that inform the logics of transparency, opaqueness and translucency of Apple’s flat design, Google uses the notion of remediated “paper” as a digital material, that is, this “material environment is a 3D space, which means all objects have x, y, and z dimensions. The z-axis is perpendicularly aligned to the plane of the display, with the positive z-axis extending towards the viewer. Every sheet of material occupies a single position along the z-axis and has a standard 1dp thickness” (Google 2014). One might think then of Apple as painting on layers of glass, and Google as placing thin paper objects (material) upon background paper. However, a key difference lies in the use of light and shadow in Google’s notion, which enables the light source, located in a similar position to the user of the interface, to cast shadows of the material objects onto the objects and sheets of paper that lie beneath them (see Jitkoff 2014). Nonetheless, a laminate structure is key to the representational grammar that constitutes both of these platforms.
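
    A toy sketch (the numeric mapping is invented, not Google’s specification) captures this grammar: each sheet occupies one position on the z-axis, and greater elevation casts a softer, larger shadow on the sheets below.

    ```python
    # Each 'sheet of paper' sits at one z-position; sheets higher in the
    # stack cast softer, larger shadows on those beneath them.
    def shadow(elevation_dp: int) -> dict:
        """Hypothetical mapping from elevation to a drop shadow, loosely
        imitating how material design conveys depth (values invented)."""
        return {
            "blur_radius": 2 * elevation_dp,  # higher sheets blur more
            "y_offset": elevation_dp,         # the light sits above the user
            "opacity": round(max(0.1, 0.4 - 0.03 * elevation_dp), 2),
        }

    for sheet, z in [("background", 0), ("card", 2), ("dialog", 24)]:
        print(sheet, shadow(z))
    ```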

    Armin Hofmann, head of the graphic design department at the Schule für Gestaltung Basel (Basel School of Design), was instrumental in developing the graphic design style known as the Swiss Style. Designs from 1958 and 1959.

    Interestingly, both design strategies emerge from an engagement with and reconfiguration of the principles of design that draw from the Swiss style (sometimes called the International Typographic Style) (Ashghar 2014, Turner 2014).[3] This approach emerged in the 1940s, and

    mainly focused on the use of grids, sans-serif typography, and clean hierarchy of content and layout. During the 40’s and 50’s, Swiss design often included a combination of a very large photograph with simple and minimal typography (Turner 2014).

    The design grammar of the Swiss style has been combined with minimalism and the principle of “responsive design”, that is, that the materiality and specificity of the device should be responsive to the interface and context being displayed. Minimalism is a “term used in the 20th century, in particular from the 1960s, to describe a style characterized by an impersonal austerity, plain geometric configurations and industrially processed materials” (MoMA 2014).

    Robert Morris: Untitled (Scatter Piece), 1968-69, felt, steel, lead, zinc, copper, aluminum, brass, dimensions variable; at Leo Castelli Gallery, New York. Photo Genevieve Hanson. All works © 2010 Robert Morris/Artists Rights Society (ARS), New York.

    Robert Morris, one of the principal artists of Minimalism, and author of the influential Notes on Sculpture, used “simple, regular and irregular polyhedrons. Influenced by theories in psychology and phenomenology”, which he argued “established in the mind of the beholder ‘strong gestalt sensation’, whereby form and shape could be grasped intuitively” (MoMA 2014).[4]

The implications of these two competing world-views are far-reaching, in that much of the world’s initial contact, or touch points, with data services, real-time streams and computational power is increasingly through the platforms controlled by these two companies. They are also deeply influential across the programming industries, and we see alternatives and multiple reconfigurations emerging in response to the challenge raised by the “flattened” design paradigms. That is, both represent, if only in potentia, a power relation, and through this an ideological veneer on computation more generally. Further, with the proliferation of computational devices – and the screenic imaginary associated with them in the contemporary computational condition – there appears a new logic which lies behind, justifies and legitimates these design methodologies.

It seems to me that these new flat design philosophies, in the broad sense, produce an order in precepts and concepts that gives meaning and purpose not only to interactions with computational platforms, but also, more widely, to everyday life. Flat design and material design are competing philosophies that offer alternative patterns of creation and interpretation, with implications not only for interface design but also for the ordering of concepts and ideas, and for the practices and experience of computational technologies broadly conceived. Another way to put this is to think of these moves as a computational founding: the generation of, or argument for, an axial framework for building, reconfiguration and preservation.

Indeed, flat design provides, and more importantly serves as, a translational or metaphorical heuristic for re-presenting the computational, but it also teaches consumers and users how to use and manipulate new complex computational systems and stacks. In other words, in a striking visual technique flat design communicates the vertical structure of the computational stack, on which the Stack corporations are themselves constituted. It also begins to move beyond the specificity of the device as the privileged site of a computational interface interaction from beginning to end. Interface techniques are abstracted away from the specificity of the device, for example through Apple’s “Handoff” continuity framework, which potentially changes reading and writing practices in interesting ways and opens new use-cases for wearables and nearables.

These new interface paradigms, introduced by the flat turn, open up very interesting possibilities for interface criticism, through unpacking and exploring the major trends and practices of the Stacks, that is, the major technology companies. Further than this, I think the notion of layers is instrumental in mediating the experience of an increasingly algorithmic society (think of dashboards, personal information systems, the quantified self, and so on), and as such provides an interpretative frame for a world of computational patterns but also a constituting grammar for building these systems in the first place. The notion of the postdigital may also be a useful way into thinking about the link between art, computation and design given here (see Berry and Dieter, forthcoming), as may the importance of notions of materiality for the conceptualizations deployed by designers working within both the flat design and material design paradigms – whether of paper, glass, or some other “material” substance.[5]
    _____

David M. Berry is Reader in the School of Media, Film and Music at the University of Sussex. He writes widely on computation and the digital and blogs at Stunlaw. He is the author of Critical Theory and the Digital, The Philosophy of Software: Code and Mediation in the Digital Age, and Copy, Rip, Burn: The Politics of Copyleft and Open Source, editor of Understanding Digital Humanities, and co-editor of Postdigital Aesthetics: Art, Computation And Design. He is also a Director of the Sussex Humanities Lab.

    Back to the essay
    _____

    Notes

    [1] Many thanks to Michael Dieter and Søren Pold for the discussion which inspired this post.

[2] The choice of paper and glass as the founding metaphors for the flat design philosophies of Google and Apple raises interesting questions about the way in which these companies articulate the remediation of other media forms, such as books, magazines, newspapers, music, television and film. Indeed, the very idea of “publication”, and the material carrier that the notion of publication implies, is informed by this materiality, even if only as a notional affordance given by the conceptualization. It would be interesting to see how the book is remediated through each of the design philosophies that inform the two companies, for example.

[3] One is struck by the posters produced in the Swiss style, which date to the 1950s and 60s but which today remind one of the mobile device screens of the 21st century.

[4] There are also some interesting links to be explored with Superflat, the postmodern art movement founded by the artist Takashi Murakami, which is influenced by manga and anime, both in terms of the aesthetic and in relation to the cultural moment in which “flatness” is linked to “shallow emptiness.”

[5] There is some interesting work to be done in thinking about the non-visual aspects of flat theory, such as the increasing use of APIs (for example, RESTful APIs), but also sound interfaces that use “flat” sound to indicate spatiality in interface or interaction design. There are also interesting implications for the design thinking implicit in the Apple Watch, and in the virtual reality and augmented reality platforms of Oculus Rift, Microsoft HoloLens, Meta and Magic Leap.

    Bibliography
  • The Reticular Fallacy

    The Reticular Fallacy

    By Alexander R. Galloway
    ~

We live in an age of heterogeneous anarchism. Contingency is king. Fluidity and flux win over solidity and stasis. Becoming has replaced being. Rhizomes are better than trees. To be political today, one must laud horizontality. Anti-essentialism and anti-foundationalism are the order of the day. Call it “vulgar ’68-ism.” The principles of social upheaval, so associated with the new social movements in and around 1968, have succeeded in becoming the very bedrock of society at the new millennium.

    But there’s a flaw in this narrative, or at least a part of the story that strategically remains untold. The “reticular fallacy” can be broken down into two key assumptions. The first is an assumption about the nature of sovereignty and power. The second is an assumption about history and historical change. Consider them both in turn.

(1) First, under the reticular fallacy, sovereignty and power are defined in terms of verticality, centralization, essence, foundation, or rigid creeds of whatever kind (viz. dogma, be it sacred or secular). Thus the sovereign is the one who is centralized, who stands at the top of a vertical order of command, who rests on an essentialist ideology in order to retain command, who asserts, dogmatically, unchangeable facts about his own essence and the essence of nature. This is the model of kings and queens, but also egos and individuals. It is what Barthes means by author in his influential essay “The Death of the Author,” or Foucault in his “What is an Author?” This is the model of the Prince, so often invoked in political theory, or the Father invoked in psycho-analytic theory. In Derrida, the model appears as logos, that is, the special way or order of word, speech, and reason. Likewise, arkhe: a term that means both beginning and command. The arkhe is the thing that begins, and in so doing issues an order or command to guide whatever issues from such a beginning. Or as Rancière so succinctly put it in his Hatred of Democracy, the arkhe is both “commandment and commencement.” These are some of the many aspects of sovereignty and power as defined in terms of verticality, centralization, essence, and foundation.

(2) The second assumption of the reticular fallacy is that, given the elimination of such dogmatic verticality, there will follow an elimination of sovereignty as such. In other words, if the aforementioned sovereign power should crumble or fall, for whatever reason, the very nature of command and organization will also vanish. Under this second assumption, the structure of sovereignty and the structure of organization become coterminous, superimposed in such a way that the shape of organization assumes the identical shape of sovereignty. Sovereign power is vertical, hence organization is vertical; sovereign power is centralized, hence organization is centralized; sovereign power is essentialist, hence organization is essentialist; and so on. Here we see the claims of, let’s call it, “naïve” anarchism (the non-arkhe, or non-foundation), which assumes that repressive force lies in the hands of the bosses, the rulers, or the hierarchy per se, and thus that after the elimination of such hierarchy, life will revert to a more direct form of social interaction. (I say this not to smear anarchism in general, and will often wish to defend a form of anarcho-syndicalism.) At the same time, consider the case of bourgeois liberalism, which asserts the rule of law and constitutional right as a way to mitigate the excesses of both royal fiat and popular caprice.

    reticular connective tissue
    source: imgkid.com

We name this the “reticular” fallacy because, during the late twentieth century and accelerating at the turn of the millennium with new media technologies, the chief agent driving the kind of historical change described in the above two assumptions was the network or rhizome, the structure of horizontal distribution described so well by Deleuze and Guattari. The change is evident in many different corners of society and culture. Consider mass media: the uni-directional broadcast media of the 1920s or ’30s gradually gave way to multi-directional distributed media of the 1990s. Or consider the mode of production, and the shift from a Fordist model rooted in massification, centralization, and standardization, to a post-Fordist model reliant more on horizontality, distribution, and heterogeneous customization. Consider even the changes in theories of the subject, shifting as they have from a more essentialist model of the integral ego, however fraught by the volatility of the unconscious, to an anti-essentialist model of the distributed subject, be it postmodernism’s “schizophrenic” subject or the kind of networked brain described by today’s most advanced medical researchers.

    Why is this a fallacy? What is wrong about the above scenario? The problem isn’t so much with the historical narrative. The problem lies in an unwillingness to derive an alternative form of sovereignty appropriate for the new rhizomatic societies. Opponents of the reticular fallacy claim, in other words, that horizontality, distributed networks, anti-essentialism, etc., have their own forms of organization and control, and indeed should be analyzed accordingly. In the past I’ve used the concept of “protocol” to describe such a scenario as it exists in digital media infrastructure. Others have used different concepts to describe it in different contexts. On the whole, though, opponents of the reticular fallacy have not effectively made their case, myself included. The notion that rhizomatic structures are corrosive of power and sovereignty is still the dominant narrative today, evident across both popular and academic discourses. From talk of the “Twitter revolution” during the Arab Spring, to the ideologies of “disruption” and “flexibility” common in corporate management speak, to the putative egalitarianism of blog-based journalism, to the growing popularity of the Deleuzian and Latourian schools in philosophy and theory: all of these reveal the contemporary assumption that networks are somehow different from sovereignty, organization, and control.

    To summarize, the reticular fallacy refers to the following argument: since power and organization are defined in terms of verticality, centralization, essence, and foundation, the elimination of such things will prompt a general mollification if not elimination of power and organization as such. Such an argument is false because it doesn’t take into account the fact that power and organization may inhabit any number of structural forms. Centralized verticality is only one form of organization. The distributed network is simply a different form of organization, one with its own special brand of management and control.

Consider the kind of methods and concepts still popular in critical theory today: contingency, heterogeneity, anti-essentialism, anti-foundationalism, anarchism, chaos, plasticity, flux, fluidity, horizontality, flexibility. Such concepts are often praised and deployed in theories of the subject, analyses of society and culture, even descriptions of ontology and metaphysics. The reticular fallacy does not invalidate such concepts. But it does put them in question. We cannot assume that such concepts are merely descriptive or neutrally empirical. Given the way in which horizontality, flexibility, and contingency are sewn into the mode of production, such “descriptive” claims are at best mirrors of the economic infrastructure and at worst ideologically suspect. At the same time, we cannot simply assume that such concepts are, by nature, politically or ethically desirable in themselves. Rather, we ought to reverse the line of inquiry. The many qualities of rhizomatic systems should be understood not as the pure and innocent laws of a newer and more just society, but as the basic tendencies and conventional rules of protocological control.


    _____

Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2006), Gaming: Essays in Algorithmic Culture (University of Minnesota, 2006), The Interface Effect (Polity, 2012), and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here earlier in 2014. Galloway has recently been writing brief notes on media and digital culture and theory at his blog, on which this post first appeared.

    Back to the essay

  • Program and Be Programmed

    Program and Be Programmed

a review of Wendy Chun, Programmed Visions: Software and Memory (MIT Press, 2013)
    by Zachary Loeb
    ~

    Type a letter on a keyboard and the letter appears on the screen, double-click on a program’s icon and it opens, use the mouse in an art program to draw a line and it appears. Yet knowing how to make a program work is not the same as knowing how or why it works. Even a level of skill approaching mastery of a complicated program does not necessarily mean that the user understands how the software works at a programmatic level. This is captured in the canonical distinctions between users and “power users,” on the one hand, and between users and programmers on the other. Whether being a power user or being a programmer gives one meaningful power over machines themselves should be a more open question than injunctions like Douglas Rushkoff’s “program or be programmed” or the general opinion that every child must learn to code appear to allow.

    Sophisticated computer programs give users a fantastical set of abilities and possibilities. But to what extent does this sense of empowerment depend on faith in the unseen and even unknown codes at work in a given program? We press a key on a keyboard and a letter appears on the screen—but do we really know why? These are some of the questions that Wendy Hui Kyong Chun poses in Programmed Visions: Software and Memory, which provides a useful history of early computing alongside a careful analysis of the ways in which computers are used—and use their users—today. Central to Chun’s analysis is her insistence “that a rigorous engagement with software makes new media studies more, rather than less, vapory” (21), and her book succeeds admirably in this regard.

    The central point of Chun’s argument is that computers (and media in general) rely upon a notion of programmability that has become part of the underlying societal logic of neoliberal capitalism. In a society where computers are tied ever more closely to power, Chun argues that canny manipulation of software restores a sense of control or sovereignty to individual users, even as their very reliance upon this software constitutes a type of disempowerment. Computers are the driving force and grounding metaphor behind an ideology that seeks to determine the future—a future that “can be bought and sold” and which “depends on programmable visions that extrapolate the future—or more precisely, a future—based on the past” (9).

Yet one of the pleasures of contemporary computer usage is that one need not fully understand much of what is going on to be able to enjoy the benefits of the computer. Though we may use computer technology to answer critical questions, this does not necessarily mean we are asking critical questions about computer technology. As Chun explains, echoing Michel Foucault, “software, free or not, is embodied and participates in structures of knowledge-power” (21); users become tangled in these structures once they start using a given device or program. Much of this “knowledge-power” is bound up in the layers of code which make software function; the code is what gives the machine its directions, ensuring that the tapping of the letter “r” on the keyboard leads to that letter appearing on the screen. Nevertheless, this code typically goes unseen, especially as it becomes source code, and winds up being buried ever deeper, even though this source code is what “embodies the power of the executive, the power of enforcement” (27). Importantly, the ability to write code, the programmer’s skill, does not in and of itself provide systematic power: computers impose “a set of rules that programmers must follow” (28). A sense of power over certain aspects of a computer still depends upon submitting to the control of other elements of the computer.
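A deliberately banal sketch may help make this palpable (the example is mine, not Chun’s, and the element id is hypothetical): even at the topmost, most legible layer of the stack, a browser script that echoes a keystroke to the screen, the letter’s journey passes through an event loop, layout engine, compositor, drivers and firmware that the typist never sees.

```typescript
// A trivial browser sketch (not from Chun): the topmost, legible layer of the
// journey from keystroke to glyph. Everything beneath this handler, from the
// event loop down to the firmware, remains unseen by the person typing.
document.addEventListener("keydown", (event: KeyboardEvent) => {
  const output = document.getElementById("output"); // hypothetical element id
  if (output && event.key.length === 1) {
    output.textContent += event.key; // the letter "appears on the screen"
  }
});
```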

Contemporary computers, and our many computer-esque devices (such as smartphones and tablets), are the primary sites in which most of us encounter the codes and programming about which Chun writes, but she goes to some lengths to introduce the reader to the history of programming. For it is against the historical backdrop of military research during the Second World War that one can clearly see the ways in which notions of control, the unquestioning following of orders, and hierarchies have long been at work within computation and programming. Beyond providing an enlightening aside into the vital role that women played in programming history, analyzing the early history of computing demonstrates how, as a means of cutting down on repetitive work, structured programming emerged, a style that “limits the logical procedures coders can use, and insists that the program consist of small modular units, which can be called from the main program” (36). Gradually this emphasis on structured programming allows more and more processes to be left to the machine, and thus processes and codes become hidden from view even as future programmers are taught to conform to the demands that will allow new programs to successfully make use of these earlier programs. Therefore the processes that were once a result of expertise come to be assumed aspects of the software—they become automated—and it is this very automation (“automatic programming”) that “allows the production of computer-enabled human-readable code” (41).
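As a schematic illustration of the structured style Chun describes (the sketch is my own, not an example from the book), a structured program confines the coder to small, single-purpose units that a main routine merely calls in sequence; everything the machine does to execute them recedes behind the call.

```typescript
// Illustrative only: structured programming in miniature. Each unit performs
// one small job; main() simply composes the modular units in order, and the
// machinery that executes them stays out of view.
function readInput(): number[] {
  return [3, 1, 4, 1, 5]; // stands in for an input routine
}

function average(values: number[]): number {
  const total = values.reduce((sum, v) => sum + v, 0);
  return total / values.length;
}

function report(result: number): void {
  console.log(`average = ${result}`);
}

function main(): void {
  report(average(readInput()));
}

main();
```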

    As the codes and programs become hidden by ever more layers of abstraction, the computer simultaneously and paradoxically appears to make more of itself visible (through graphic user interfaces, for example), while the code itself recedes ever further into the background. This transition is central to the computer’s rapid expansion into ever more societal spheres, and it is an expansion that Chun links to the influence of neoliberal ideology. The computer with its easy-to-use interfaces creates users who feel as though they are free and empowered to manipulate the machine even as they rely on the codes and programs that they do not see. Freedom to act becomes couched in code that predetermines the range and type of actions that the users are actually free to take. What transpires, as Chun writes, is that “interfaces and operating systems produce ‘users’—one and all” (67).

Without fully comprehending the codes that lead from a given action (a user presses a button) to a given result, the user is positioned to believe ever more in the power of the software/hardware hybrid, especially as increased storage capabilities allow computers to access vast informational troves. In the process, the technologically empowered user is conditioned to expect a programmable world akin to the programmed devices used to navigate that world; this has “fostered our belief in the world as neoliberal: as an economic game that follows certain rules” (92). And this takes place whether or not we understand who wrote those rules, or how they can be altered.

This logic of programmability may be linked to inorganic machines, but Chun also demonstrates the ways in which it has been applied to the organic world. In truth, the idea that the organic can be programmed predates the computer; as Chun explains, “breeding encapsulates an early logic of programmability… Eugenics, in other words, was not simply a factor driving the development of high-speed mass calculation at the level of content… but also at the level of operationality” (124). In considering the idea that the organic can be programmed, what emerges is a sense of the way that programming has long been associated with a certain will to exert control over things, be they organic or inorganic. Far from being a digression, Chun’s discussion of eugenics provides a fascinating historical comparison, given the way in which its decline in acceptance seems to dovetail with the steady ascendance of the programmable machine.

    The intersection of software and memory (or “software as memory”) is an essential matter to consider given the informational explosion that has occurred with the spread of computers. Yet, as Chun writes eloquently: “information is ‘undead’; neither alive nor dead, neither quite present nor absent” (134), since computers simultaneously promise to make ever more information available while making the future of much of this information precarious (insofar as access may rely upon software and hardware that no longer functions). Chun elucidates the ways in which the shift from analog to digital has permitted a wider number of users to enjoy the benefits of computers while this shift has likewise made much that goes on inside a computer (software and hardware) less transparent. While the machine’s memory may seem ephemeral and (to humans) illegible, accessing information in “storage” involves codes that read by re-writing elsewhere. This “battle of diligence between the passing and the repetitive” characterizing machine memory, Chun argues, “also characterizes content today” (170). Users rely upon a belief that the information they seek will be available and that they will be able to call upon it with a few simple actions, even though they do not see (and usually cannot see) the processes that make this information present and which do or do not allow it to be presented.

    When people make use of computers today they find themselves looking—quite literally—at what the software presents to them, yet in allowing this act of seeing the programming also has determined much of what the user does not see. Programmed Visions is an argument for recognizing that sometimes the power structures that most shape our lives go unseen—even if we are staring right at them.

    * * *

    With Programmed Visions, Chun has crafted a nuanced, insightful, and dense, if highly readable, contribution to discussions about technology, media, and the digital humanities. It is a book that demonstrates Chun’s impressive command of a variety of topics and the way in which she can engagingly shift from history to philosophy to explanations of a more technical sort. Throughout the book Chun deftly draws upon a range of classic and contemporary thinkers, whilst raising and framing new questions and lines of inquiry even as she seeks to provide answers on many other topics.

Though peppered with many wonderful turns of phrase, Programmed Visions remains a challenging book. All readers of Programmed Visions will come to it with their own background and knowledge of coding, programming, software, and so forth, but the simple truth is that Chun’s point (that many people do not understand software sufficiently) may make many a reader feel somewhat taken aback. For most computer users—even many programmers and many whose research involves the study of technology and media—are quite complicit in the situation that Chun describes. It is the sort of discomforting confrontation that is valuable precisely because of the anxiety it provokes. Most users take for granted that the software will work the way they expect it to—hence the frustration bordering on fury that many people experience when the machine suddenly does something other than what is expected, provoking a maddened outburst of “why aren’t you working!” What Chun helps demonstrate is that it is not so much that the machines betray us, but that we were mistaken in thinking that machines ever really obeyed us.

It will be easy for many readers to see themselves as the user that Chun describes—someone positioned to feel empowered by the devices they use, even as that power depends upon faith in forces the user cannot see, understand, or control. Even power users and programmers, on careful self-reflection, may identify with Chun’s relocation of the programmer from a position of authority to a role wherein they too must comply with the strictures of the code; this relocation presents an important argument for considerations of such labor. Furthermore, the way in which Chun links the power of the machine to the overarching ideology of neoliberalism makes her argument useful for discussions broader than those in media studies and the digital humanities. What makes these arguments particularly interesting is the way in which Chun locates them within thinking about software. As she writes towards the end of the second chapter, “this chapter is not a call to return to an age when one could see and comprehend the actions of our computers. Those days are long gone… Neither is this chapter an indictment of software or programming… It is, however, an argument against common-sense notions of software precisely because of their status as common sense” (92). Such a statement refuses to provide the anxious reader (who has come to see themselves as an uninformed user) with a clear answer, for it suggests that the “common-sense” clear answer is part of what has disempowered them.

The weaving together of historical details regarding computers during World War II and eugenics provides an excellent and challenging backdrop against which Chun’s arguments regarding programmability can develop. Chun lucidly describes the embodiment and materiality of information, and the obsolescence that serves as a major challenge confronting those who seek to manage and understand the massive informational flux that computer technology has enabled. The idea of information as “undead” is both amusing and evocative, as it provides a rich way of describing the “there but not there” quality of information, while simultaneously playing upon the slight horror and uneasiness that seems to be lurking below the surface in the confrontation with information.

As Chun sets herself the difficult task of exploring many areas, there are some topics where the reader may be left wanting more. The section on eugenics presents a troubling and fascinating argument—one which could likely have been a book in and of itself—especially when considered in the context of arguments about cyborg selves and post-humanity, and it is a section that almost seems to have been cut short. Likewise the discussion of race (“a thread that has been largely invisible yet central,” 179), which is brought to the fore in the epilogue, confronts the reader with something that seems as if it could be the introduction to another book. It leaves the reader with much to contemplate, though the fact that this thread was not truly “largely invisible” makes the reader, upon reaching the epilogue, wish that the book could have dealt with the matter at greater length. Yet these are fairly minor concerns; that Programmed Visions leaves its readers re-reading sections to process them in light of later points is a credit to the text.

Programmed Visions: Software and Memory is by turns a troubling, enlightening, and fascinating book. It allows its reader to look at software and hardware in a new way, with fresh insight about this act of sight. It is a book that plants a question (or perhaps subtly programs one into the reader’s mind): what are you not seeing, what power relations remain invisible, between the moment the “?” key is struck on the keyboard and the moment it appears on the screen?


    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, alternative forms of technology, and libraries as models of resistance. Using the moniker “The Luddbrarian” Loeb writes at the blog librarianshipwreck. He has previously reviewed The People’s Platform by Astra Taylor and Social Media: A Critical Introduction by Christian Fuchs for boundary2.org.

    Back to the essay