boundary 2

  • Jonathan Beller — The Computational Unconscious

    Jonathan Beller

    God made the sun so that animals could learn arithmetic – without the succession of days and nights, one supposes, we should not have thought of numbers. The sight of day and night, months and years, has created knowledge of number, and given us the conception of time, and hence came philosophy. This is the greatest boon we owe to sight.
    – Plato, Timaeus

    The term “computational capital” understands the rise of capitalism as the first digital culture with universalizing aspirations and capabilities, and recognizes contemporary culture, bound as it is to electronic digital computing, as something like Digital Culture 2.0. Rather than seeing this shift from Digital Culture 1.0 to Digital Culture 2.0 strictly as a break, we might consider it as one result of an overall intensification in the practices of quantification. Capitalism, says Nick Dyer-Witheford (2012), was already a digital computer, and shifts in the quantity of quantities lead to shifts in qualities. If capitalism was a digital computer from the get-go, then “the invisible hand”—as the non-subjective, social summation of the individualized practices of the pursuit of private (quantitative) gain thought to result in (often unknown and unintended) public good within capitalism—is an early, if incomplete, expression of the computational unconscious. With the broadening and deepening of the imperative toward quantification and rational calculus posited then presupposed during the early modern period by the expansionist program of Capital, the process of the assignation of a number to all qualitative variables—that is, the thinking in numbers (discernible in the commodity-form itself, whereby every use-value was also encoded as an exchange-value)—entered into our machines and our minds. This penetration of the digital, rendering early on the brutal and precise calculus of the dimensions of cargo-holds in slave ships and the sparse economic accounts of ship ledgers of the Middle Passage, double entry bookkeeping, the rationalization of production and wages in the assembly line, and more recently, cameras and modern computing, leaves no stone unturned.
Today, as could be well known from everyday observation if not necessarily from media theory, computational calculus arguably underpins nearly all productive activity and, particularly significant for this argument, those activities that together constitute the command-control apparatus of the world system and which stretch from writing to image-making and, therefore, to thought.[1] The contention here is not simply that capitalism is on a continuum with modern computation, but rather that computation, though characteristic of certain forms of thought, is also the unthought of modern thought. The content-indifferent calculus of computational capital ordains the material-symbolic and the psycho-social even in the absence of a conscious, subjective awareness of its operations. As the domain of the unthought that organizes thought, the computational unconscious is structured like a language, a computer language that is also and inexorably an economic calculus.

    The computational unconscious allows us to propose that much contemporary consciousness (aka “virtuosity” in post-Fordist parlance) is a computational effect—in short, a form of artificial intelligence. A large part of what “we” are has been conscripted, as thought and other allied metabolic processes are functionalized in the service of the iron clad movements of code. While “iron clad” is now a metaphor and “code” is less the factory code and more computer code, understanding that the logic of industrial machinery and the bureaucratic structures of the corporation and the state have been abstracted and absorbed by discrete state machines to the point where in some quarters “code is law” will allow us to pursue the surprising corollary that all the structural inequalities endemic to capitalist production—categories that often appear under variants of the analog signs of race, class, gender, sexuality, nation, etc.—are also deposited and thus operationally disappeared into our machines.

    Put simply, and, in deference to contemporary attention spans, too soon, our machines are racial formations. They are also technologies of gender and sexuality.[2] Computational capital is thus also racial capitalism, the longue durée digitization of racialization and, not in any way incidentally, of regimes of gender and sexuality. In other words inequality and structural violence inherent in capitalism also inhere in the logistics of computation and consequently in the real-time organization of semiosis, which is to say, our practices and our thought. The servility of consciousness, remunerated or not, aware of its underlying operating system or not, is organized in relation not just to sociality understood as interpersonal interaction, but to digital logics of capitalization and machine-technics. For this reason, the political analysis of postmodern and, indeed, posthuman inequality must examine the materiality of the computational unconscious. That, at least, is the hypothesis, for if it is the function of computers to automate thinking, and if dominant thought is the thought of domination, then what exactly has been automated?

    Already in the 1850s the worker appeared to Marx as a “conscious organ” in the “vast automaton” of the industrial machine, and by the time he wrote the first volume of Capital Marx was able to comment on the worker’s new labor of “watching the machine with his eyes and correcting its mistakes with his hands” (Marx 1867: 496, 502). Marx’s prescient observation with respect to the emergent role of visuality in capitalist production, along with his understanding that the operation of industrial machinery posits and presupposes the operation of other industrial machinery, suggests what was already implicit if not fully generalized in the analysis: that Dr. Ure’s notion, cited by Marx, of the machine as a “vast automaton,” was scalable—smaller machines, larger machines, entire factories could be thus conceived, and with the increasing scale and ubiquity of industrial machines, the notion could well describe the industrial complex as a whole. Historically considered, “watching the machine with his eyes and correcting the mistakes with his hands” thus appears as an early description of what information workers such as you and I do on our screens. To extrapolate: distributed computation and its integration with industrial process and the totality of social processes suggest that not only has society as a whole become a vast automaton profiting from the metabolism of its conscious organs, but further that the confrontation or interface with the machine at the local level (“where we are”) is an isolated and phenomenal experience that is not equivalent to the perspective of the automaton or, under capitalism, that of Capital. 
Given that here, while we might still be speaking about intelligence, we are not necessarily speaking about subjects in the strict sense, we might replace Althusser’s relation of S-s—Big Subject (God, the State, etc.) to small subject (“you” who are interpellated with and in ideology)—with AI-ai—Big Artificial Intelligence (the world system as organized by computational capital) and “you” Little Artificial Intelligence (as organized by the same). Here subjugation is not necessarily intersubjective, and does not require recognition. The AI does not speak your language even if it is your operating system. With this in mind we may at once understand that the space-time regimes of subjectivity (point-perspective, linear time, realism, individuality, discourse function, etc.) that once were part of the digital armature of “the human,” have been profitably shattered, and that the fragments have been multiplied and redeployed under the requisites of new management. We might wager that these outmoded templates or protocols may still also meaningfully refer to a register of meaning and conceptualization that can take the measure of historical change, if only for some kind of species remainder whose value is simultaneously immeasurable, unknown and hanging in the balance.

    Ironically perhaps, given the progress narratives attached to technical advances and the attendant advances in capital accumulation, Marx’s hypothesis in Capital Chapter 15, “Machinery and Large-Scale Industry,” that “it would be possible to write a whole history of the inventions made since 1830 for the purpose of providing capital with weapons against working class revolt” (1867, 563), casts an interesting light on the history of computing and its creation-imposition of new protocols. Not only have the incredible innovations of workers been abstracted and absorbed by machinery, but so also have their myriad antagonisms toward capitalist domination. Machinic perfection meant the imposition of continuity and the removal of “the hand of man” by fixed capital, in other words, both the absorption of know-how and the foreclosure of forms of disruption via automation (Marx 1867, 502).

    Dialectically understood, subjectivity, while a force of subjugation in some respects, also had its own arsenal of anti-capitalist sensibilities. As a way of talking about non-conformity, anti-sociality and the high price of conformity and its discontents, the unconscious still has its uses, despite its unavoidable and perhaps nostalgic invocation of a future that has itself been foreclosed. The conscious organ does not entirely grasp the cybernetic organism of which it is a part; nor does it fully grasp the rationale of its subjugation. If the unconscious was machinic, it is now computational, and if it is computational it is also locked in a struggle with capitalism. If what underlies perceptual and cognitive experience is the automaton, the vast AI, what I will be referring to as The Computer, which is the totalizing integration of global practice through informatic processes, then from the standpoint of production we constitute its unconscious. However, as we are ourselves unaware of our own constitution, the Unconscious of producers is their/our specific relation to what Paolo Virno acerbically calls, in what can only be a lamentation of history’s perverse irony, “the communism of capital” (2004, 110). If the revolution killed its father (Marx) and married its mother (Capitalism), it may be worth considering the revolutionary prospects of an analysis of this unconscious.

    Introduction: The Computational Unconscious

    Beginning with the insight that the rise of capitalism marks the onset of the first universalizing digital culture, this essay, and the book of which it is chapter one, develops the insights of The Cinematic Mode of Production (Beller 2006) in an effort to render the violent digital subsumption by computational racial capital that the (former) “humans” and their (excluded) ilk are collectively undergoing in a manner generative of sites of counter-power—of, let me just say it without explaining it, derivatives of counter-power, or, Derivative Communism. To this end, the following section offers a reformulation of Marx’s formula for capital, Money-Commodity-Money’ (M-C-M’), that accounts for distributed production in the social factory, and by doing so hopes to direct attention to zones where capitalist valorization might be prevented or refused. Prevented or refused not only to break a system which itself functions by breaking the bonds of solidarity and mutual trust that formerly were among the conditions that made a life worth living, but also to posit the redistribution of our own power towards ends that for me are still best described by the word communist (or perhaps meta-communist but that too is for another time). This thinking, political in intention, speculative in execution and concrete in its engagement, also proposes a revaluation of the aesthetic as an interface that sensualizes information. As such, the aesthetic is both programmed, and programming—a privileged site (and indeed mode) of confrontation in the digital apartheid of the contemporary.

    Along these lines, and similar to the analysis pursued in The Cinematic Mode of Production, I endeavor to de-fetishize a platform—computation itself—one that can only be properly understood when grasped as a means of production embedded in the bios. While computation is often thought of as being the thing accomplished by hardware churning through a program (the programmatic quantum movements of a discrete state machine), it is important to recognize that the universal Turing machine was (and remains) media-indifferent only in theory and is thus justly conceived of as an abstract machine in the realm of ideas and indeed of the ruling ideas. However, it is an abstract machine that, like all abstractions, evolves out of concrete circumstances and practices; which is to say that the universal Turing machine is itself an abstraction subject to historical-materialist critique. Furthermore, Turing machines iterate themselves on the living, on life, reorganizing its practices. One might situate the emergence and function of the universal Turing machine as perhaps among the most important abstract machines in the last century, save perhaps that of capital itself. However, both their ranking and even their separability are here what we seek to put into question.

    Without a doubt, the computational process, like the capitalist process, has a corrosive effect on ontological precepts, accomplishing a far-reaching liquidation of tradition that includes metaphysical assumptions regarding the character of essence, being, authenticity and presence. And without a doubt, computation has been built even as it has been discovered. The paradigm of computation marks an inflection point in human history that reaches along temporal and spatial axes: both into the future and back into the past, out to the cosmos and into the sub-atomic. At any known scale, from Planck time (10^-44 seconds) to yottaseconds (10^24 seconds), and from 10^-35 to 10^27 meters, computation, conceptualization and sense-making (sensation) have become inseparable. Computation is part of the historicity of the senses. Just ask that baby using an iPad.

    The slight displacement of the ontology of computation implicit in saying that it has been built as much as discovered (that computation has a history even if it now puts history itself at risk) allows us to glimpse, if only from what Laura Mulvey calls “the half-light of the imaginary” (1975, 7)—the general antagonism is feminized when the apparatus of capitalization has overcome the symbolic—that computation is not, so far as we can know, the way of the universe per se, but rather the way of the universe as it has become intelligible to us vis-à-vis our machines. The understanding, from a standpoint recognized as science, that computation has fully colonized the knowable cosmos (and is indeed one with knowing) is a humbling insight, significant in that it allows us to propose that seeing the universe as computation, as, in short, simulable, if not itself a simulation (the computational effect of an informatic universe), may be no more than the old anthropocentrism now automated by apparatuses. We see what we can see with the senses we have—autopoiesis. The universe as it appears to us is figured by—that is, it is a figuration of—computation. That’s what our computers tell us. We build machines that discern that the universe functions in accord with their self-same logic. The recursivity effects the God trick.

    Parametrically translating this account of cosmic emergence into the domain of history reveals a disturbing allegiance of computational consciousness, organized by the computational unconscious, to what Silvia Federici calls the system of global apartheid. Historicizing computational emergence pits its colonial logic directly against what Fred Moten and Stefano Harney identify as “the general antagonism” (2013, 10) (itself the reparative antithesis, or better perhaps the reverse subsumption of the general intellect as subsumed by capital). The procedural universalization of computation is a cosmology that attributes and indeed enforces a sovereignty tantamount to divinity and externalities be damned. Dissident, fugitive planning and black study – a studied refusal of optimization, a refusal of computational colonialism — may offer a way out of the current geo-(post-)political and its computational orthodoxy.

    Computational Idolatry and Multiversality

    In the new idolatry cathected to inexorable computational emergence, the universe is itself currently imagined as a computer. Here’s the seductive sound of the current theology from a conference sponsored by the sovereign state of NYU:

    As computers become progressively faster and more powerful, they’ve gained the impressive capacity to simulate increasingly realistic environments. Which raises a question familiar to aficionados of The Matrix—might life and the world as we know it be a simulation on a super advanced computer? “Digital physicists” have developed this idea well beyond the sci-fi possibilities, suggesting a new scientific paradigm in which computation is not just a tool for approximating reality but is also the basis of reality itself. In place of elementary particles, think bits; in place of fundamental laws of physics, think computer algorithms. (Scientific American 2011)

    Science fiction, in the form of “the Matrix,” is here used to figure a “reality” organized by simulation, but then this reality is quickly dismissed as something science has moved well beyond. However, it would not be illogical here to propose that “reality” is itself a science fiction—a fiction whose current author is no longer the novel or Hollywood but science. It is in a way no surprise that, consistent with “digital physics,” MIT physicist Max Tegmark claims that consciousness is a state of matter: Consciousness, as a phenomenon of information storage and retrieval, is a property of matter described by the term “computronium.” Humans represent a rather low level of complexity. In the neo-Hegelian narrative in which the philosopher-scientist reveals the working out of world—or, rather, cosmic—spirit, one might say that it is as science fiction—one of the persistent fictions licensed by science—that “reality itself” exists at all. We should emphasize that the trouble here is not so much with “reality,” the trouble here is with “itself.” To the extent that we recognize that poesis (making) has been extended to our machines and it is through our machines that we think and perceive, we may recognize that reality is itself a product of their operations. The world begins to look very much like the tools we use to perceive it to the point that Reality itself is thus a simulation, as are we—a conclusion that concurs with the notion of a computational universe, but that seems to (conveniently) elide the immediate (colonial) history of its emergence. The emergence of the tools of perception is taken as universal, or, in the language of a quantum astrophysics that posits four levels of multiverses: multiversal.
In brief, the total enclosure by computation of observer and observed is either reality itself becoming self-aware, or tautological, waxing ideological, liquidating as it does historical agency by means of the suddenly a priori stochastic processes of cosmic automation.

    Well! If total cosmic automation, then no mistakes, so we may as well take our time-bound chances and wager on fugitive negation in the precise form of a rejection of informatic totalitarianism. Let us sound the sedimented dead labor inherent in the world-system, its emergent computational armature and its iconic self-representations. Let us not forget that those machines are made out of embodied participation in capitalist digitization, no matter how disappeared those bodies may now seem. Marx says, “Consciousness is… from the very beginning a social product and remains so for as long as men exist at all” (Tucker 1978, 178). The inescapable sociality and historicity of knowledge, in short, its political ontology, follows from this—at least so long as humans “exist at all.”

    The notion of a computational cosmos, though not universally or even widely consented to by scientific consciousness, suggests that we respire in an aporetic space—in the null set (itself a sign) found precisely at the intersection of a conclusion reached by Gödel in mathematics (Hofstadter 1979)—that no sufficiently powerful logical system is internally closed: within any such system, statements can be formulated that can neither be proved nor disproved—and a different conclusion reached by Maturana and Varela (1992), and also Niklas Luhmann (1989), that a system’s self-knowing, its autopoiesis, knows no outside; it can know only in its own terms and thus knows only itself. In Gödel’s view, systems are ineluctably open, there is no closure, complete self-knowledge is impossible and thus there is always an outside or a beyond, while in the latter group’s view, our philosophy, our politics and apparently our fate is wedded to a system that can know no outside since it may only render an outside in its own terms, unless, or perhaps, even if/as that encounter is catastrophic.

    Let’s observe the following: 1) there must be an outside or a beyond (Gödel); 2) we cannot know it (Maturana and Varela); 3) and yet…. In short, we don’t know ourselves and all we know is ourselves. One way out of this aporia is to say that we cannot know the outside and remain what we are. Enter history: Multiversal Cosmic Knowledge, circa 2017, despite its awesome power, turns out to be pretty local. If we embrace the two admittedly humbling insights regarding epistemic limits—on the one hand, that even at the limits of computationally informed knowledge (our autopoiesis) all we can know is ourselves, along with Gödel’s insight that any “ourselves” whatsoever that is identified with what we can know is systemically excluded from being All—then it is axiomatic that nothing (in all valences of that term) fully escapes computation—for us. Nothing is excluded from what we can know except that which is beyond the horizon of our knowledge, which for us is precisely nothing. This is tantamount to saying that rational epistemology is no longer fully separable from the history of computing—at least for any of us who are, willingly or not, participant in contemporary abstraction. I am going to skip a rather lengthy digression about fugitive nothing as precisely that bivalent point of inflection that escapes the computational models of consciousness and the cosmos, and just offer its conclusion as the next step in my discussion: We may think we think—algorithmically, computationally, autonomously, or howsoever—but the historically materialized digital infrastructure of the socius thinks in and through us as well. Or, as Marx put it, “The real subject remains outside the mind and independent of it—that is to say, so long as the mind adopts a purely speculative, purely theoretical attitude. Hence the subject, society, must always be envisaged as the premises of conception even when the theoretical method is employed” (Marx: vol. 28, 38-39).[3]

    This “subject, society” in Marx’s terms, is present even in its purported absence—it is inextricable from and indeed overdetermines theory and, thus, thought: in other words, language, narrative, textuality, ideology, digitality, cosmic consciousness. This absent structure informs Althusser’s Lacanian-Marxist analysis of Ideology (and of “the ideology of no ideology,” 1977) as the ideological moment par excellence (an analog way of saying “reality” is simulation), as well as his beguiling (because at once necessary and self-negating) possibility of a subjectless scientific discourse. This non-narrative, unsymbolizable absent structure akin to the Lacanian “Real” also informs Jameson’s concept of the political unconscious as the black-boxed formal processor of said absent structure, indicated in his work by the term “History” with a capital “H” (1981). We will take up Althusser and Jameson in due time (but not in this paper). For now, however, for the purposes of our mediological investigation, it is important to pursue the thought that precisely this functional overdetermination, which already informed Marx’s analysis of the historicity of the senses in the 1844 manuscripts, extends into the development of the senses and the psyche. As Jameson put it in The Political Unconscious thirty-five years ago: “That the structure of the psyche is historical and has a history, is… as difficult for us to grasp as that the senses are not themselves natural organs but rather the result of a long process of differentiation even within human history” (1981, 62).

    The evidence for the accuracy of this claim, built from Marx’s notion that “the forming of the five senses requires the history of the world down to the present” has been increasing. There is a host of work on the inseparability of technics and the so-called human (from Mauss to Simondon, Deleuze and Guattari, and Bernard Stiegler) that increasingly makes it possible to understand and even believe that the human, along with consciousness, the psyche, the senses and, consequently, the unconscious are historical formations. My own essay “The Unconscious of the Unconscious” from The Cinematic Mode of Production traces Lacan’s use of “montage,” “the cut,” the gap, objet a, photography and other optical tropes and argues (a bit too insistently perhaps) that the unconscious of the unconscious is cinema, and that a scrambling of linguistic functions by the intensifying instrumental circulation of ambient images (images that I now understand as derivatives of a larger calculus) instantiates the presumably organic but actually equally technical cinematic black box known as the unconscious.[4] Psychoanalysis is the institutionalization of a managerial technique for emergent linguistic dysfunction (think literary modernism) precipitated by the onslaught of the visible.

    More recently, and in a way that suggests that the computational aspects of historical materialist critique are not as distant from the Lacanian Real as one might think, Lydia Liu’s The Freudian Robot (2010) shows convincingly that Lacan modeled the theory of the unconscious from information theory and cybernetic theory. Liu understands that Lacan’s emphasis on the importance of structure and the compulsion to repeat is explicitly addressed to “the exigencies of chance, randomness, and stochastic processes in general” (2010, 176). She combs Lacan’s writings for evidence that they are informed by information theory and provides us with some smoking guns including the following:

    By itself, the play of the symbol represents and organizes, independently of the peculiarities of its human support, this something which is called the subject. The human subject doesn’t foment this game, he takes his place in it, and plays the role of the little pluses and minuses in it. He himself is an element in the chain which, as soon as it is unwound, organizes itself in accordance with laws. Hence the subject is always on several levels, caught up in the crisscrossing of networks. (quoted in Liu 2010, 176)

    Liu argues that “the crisscrossing of networks” alludes not so much to linguistic networks but to communication networks, and precisely references the information theory that Lacan read, particularly that of Georges Guilbaud, the author of What is Cybernetics? She writes that, “For Lacan, ‘the primordial couple of plus and minus’ or the game of even and odd should precede linguistic considerations and is what enables the symbolic order.”

    “You can play heads or tails by yourself,” says Lacan, “but from the point of view of speech, you aren’t playing by yourself – there is already the articulation of three signs comprising a win or a loss and this articulation prefigures the very meaning of the result. In other words, if there is no question, there is no game, if there is no structure, there is no question. The question is constituted, organized by the structure” (quoted in Liu 2010, 179). Liu comments that “[t]his notion of symbolic structure, consistent with game theory, [has] important bearings on Lacan’s paradoxically non-linguistic view of language and the symbolic order.”

    Let us not distract ourselves here with the question of whether or not game theory and statistical analysis represent discovery or invention. Heisenberg, Schrödinger, and information theory formalized the statistical basis that one way or another became a global (if not also multiversal) episteme. Norbert Wiener, another father, this time of cybernetics, defined statistics as “the science of distribution” (Wiener 1989, 8). We should pause here to reflect that, given that cybernetic research in the West was driven by military and, later, industrial applications, that is, applications deemed essential for the development of capitalism and the capitalist way of life, such a statement calls for a properly dialectical analysis. Distribution is inseparable from production under capitalism, and statistics is the science of this distribution. Indeed, we would want to make such a thesis resonate with the analysis of logistics recently undertaken by Moten and Harney and, following them, link the analysis of instrumental distribution to the Middle Passage, as the signal early modern consequence of the convergence of rationalization and containerization—precisely the “science” of distribution worked out in the French slave ship Adelaide or the British ship Brookes. For the moment, we underscore the historicity of the “science of distribution” and thus its historical emergence as a socio-symbolic system of organization and control. Keeping this emergence clearly in mind helps us to understand that mathematical models quite literally inform the articulation of History and the unconscious—not only homologously as paradigms in intellectual history, but materially, as ways of organizing social production in all domains. Whether logistical, optical or informatic, the technics of mathematical concepts, which is to say programs, orchestrate meaning and constitute the unconscious.

    Perhaps more elusive even than this historicity of the unconscious grasped in terms of a digitally encoded matrix of materiality and epistemology that constitutes the unthought of subjective emergence, may be the notion that the “subject, society” extends into our machines. Vilém Flusser, in Towards a Philosophy of Photography, tells us,

    Apparatuses were invented to simulate specific thought processes. Only now (following the invention of the computer), and as it were in hindsight, it is becoming clear what kind of thought processes we are dealing with in the case of all apparatuses. That is: thinking expressed in numbers. All apparatuses (not just computers) are calculating machines and in this sense “artificial intelligences,” the camera included, even if their inventors were not able to account for this. In all apparatuses (including the camera) thinking in numbers overrides linear, historical thinking. (Flusser 2000, 31)

    This process of thinking in numbers, and indeed the generalized conversion of multiple forms of thought and practice to an increasingly unified systems language of numeric processing (by capital markets, by apparatuses, by digital computers), requires further investigation. And now that the edifice of computation—the fixed capital dedicated to computation that either recognizes itself as such or may be recognized as such—has achieved a consolidated sedimentation of human labor at least equivalent to that required to build a large nation (a superpower) from the ground up, we are in a position to ask: in what way have capital-logic and the logic of private property, which as Marx points out is not the cause but the effect of alienated wage- (and thus quantified) labor, structured computational paradigms? In what way has that “subject, society” unconsciously structured not just thought, but machine-thought? Thinking, expressed in numbers, materialized first by means of commodities and then in apparatuses capable of automating this thought. Is computation what we’ve been up to all along without knowing it? Flusser suggests as much through his notion that 1) the camera is a black box that is a programme, and 2) that the photograph or technical image produces a “magical” relation to the world in as much as people understand the photograph as a window rather than as information organized by concepts. This amounts to the technical image as itself a program for the bios and suggests that the world has long been unconsciously organized by computation vis-à-vis the camera. As Flusser has it, cameras have organized society in a feedback loop that works towards the perfection of cameras.
If the computational processes inherent in photography are themselves an extension of capital logic’s universal digitization (an argument I made in The Cinematic Mode of Production and extended in The Message is Murder), then that calculus has been doing its work in the visual reorganization of everyday life for almost two centuries.

    Put another way, thinking expressed in numbers (the principles of optics and chemistry) materialized in machines automates thought (thinking expressed in numbers) as program. The program of, say, the camera, functions as a historically produced version of what Katherine Hayles has recently called “nonconscious cognition” (Hayles 2016). Though locally perhaps no more self-aware than the sediment-sorting process of a riverbed (another of Hayles’s computational examples), the camera nonetheless affects purportedly conscious beings from the domain known as the unconscious, as, to give but one shining example, feminist film theory clearly shows: The function of the camera’s program organizes the psycho-dynamics of the spectator in a way that at once structures film form through market feedback, gratifies the (white-identified) male ego and normalizes the violence of heteropatriarchy, and does so at a profit. Now that so much human time has gone into developing cameras, computer hardware and programming, such that hardware and programming are inextricable from the day-to-day and indeed nanosecond-to-nanosecond organization of life on planet Earth (and not only in the form of cameras), we can ask, very pointedly, which aspects of computer function, from any to all, can be said to be conditioned not only by sexual difference but, more generally still, by structural inequality and the logistics of racialization? Which computational functions perpetuate and enforce these historically worked-up, highly ramified social differences?
Structural and now infra-structural inequalities include social injustices—what could be thought of as, and in a certain sense are, algorithmic racism, sexism and homophobia, and also programmatically unequal access to the many things that sustain life, and legitimize murder (both long and short forms, executed by, for example, carceral societies, settler colonialism, police brutality and drone strikes), and catastrophes both unnatural (toxic mine-tailings, coltan wars) and purportedly natural (hurricanes, droughts, famines, ambient environmental toxicity). The urgency of such questions, resulting from the near automation of geo-political emergence along with a vast conscription of agents, is only exacerbated as we recognize that we are obliged to rent or otherwise pay tribute (in the form of attention, subscription, student debt) to the rentier capitalists of the infrastructure of the algorithm in order to access portions of the general intellect from its proprietors whenever we want to participate in thinking.

    For it must never be assumed that technology (even the abstract machine) is value-neutral, that it merely exists in some disinterested ideal place and is then utilized either for good or for ill by free men (it would be “men” in such a discourse). Rather, the machine, like Ariella Azoulay’s understanding of photography, has a political ontology—it is a social relation, and an ongoing one whose meaning is, as Azoulay says of the photograph, never at an end (2012, 25). Now that representation has been subsumed by machines, has become machinic (overcoded, as Deleuze and Guattari would say), everything that appears, appears in and through the machine, as a machine. For the present (and as Plato already recognized by putting it at the center of the Republic), even the Sun is political. Going back to my opening, the cosmos is merely a collection of billions of suns—an infinite politics.

    But really, this political ontology of knowledge, machines, consciousness, praxis should be obvious. How could technology, which of course includes the technologies of knowledge, be anything other than social and historical, the product of social relations? How could these be other than the accumulation, objectification and sedimentation of subjectivities that are themselves an historical product? The historicity of knowledge and perception seems inescapable, if not fully intelligible, particularly now, when it is increasingly clear that it is the programmatic automation of thought itself that has been embedded in our apparatuses. The programming and overdetermination of “choice,” of options, by a rationality that was itself embedded in the interested circumstances of life and continuously “learns” vis-à-vis the feedback life provides has become ubiquitous and indeed inexorable (I dismiss “Object Oriented Ontology” and its desperate effort to erase white-boy subjectivity thusly: there are no ontological objects, only instrumental epistemic horizons). To universalize contemporary subjectivity by erasing its conditions of possibility is to naturalize history; it is therefore to depoliticize it and therefore to recapitulate its violence in the present.

    The short answer then regarding digital universality is that technology (and thus perception, thought and knowledge) can only be separated from the social and historical—that is, from racial capitalism—by eliminating both the social and historical (society and history) through its own operations. While computers, if taken as a separate constituency along with a few of their biotic avatars, and then pressed for an answer, might once have agreed with Margaret Thatcher’s view that “there is no such thing as society,” one would be hard-pressed to claim that this post-sociological (and post-Birmingham) “discovery” is a neutral result. Thatcher’s observation, that “the problem with socialism is that you eventually run out of other people’s money,” while admittedly pithy, if condescending, classist and deadly, subordinates social needs to existing property-relations and their financial calculus at the ontological level. She smugly valorizes the status quo by positing capitalism as an untranscendable horizon since the social product is by definition always already “other people’s money.” But neoliberalism has required some revisioning of late (which is a polite way of saying that fascism has needed some updating): the newish but by now firmly-established term “social media” tells us something more about the parasitic relation that the cold calculus of this mathematical universe of numbers has to the bios. To preserve global digital apartheid requires social media, the process(ing) of society itself cybernetically-interfaced with the logistics of racial-capitalist computation. This relation, a means of digital expropriation aimed to profitably exploit an equally significant global aspiration towards planetary communicativity and democratization, has become the preeminent engine of capitalist growth.
Society, at first seemingly negated by computation and capitalism, is now directly posited as a source of wealth, for what is now explicitly computational capital and actually computational racial capital. The attention economy, immaterial labor, neuropower, semio-capitalism: all of these terms, despite their differences, mean in effect that society, as a deterritorialized factory, is no longer disappeared as an economic object; it disappears only as a full beneficiary of the dominant economy which is now parasitical on its metabolism. The social revolution in planetary communicativity is being farmed and harvested by computational capitalism.

    Dialectics of the Human-Machine

    For biologists it has become au courant when speaking of humans to speak also of the second genome—one must consider not just the 23 pairs of chromosomes of the human genome that replicate what was thought of as the human being as an autonomous life-form, but the genetic information and epigenetic functionality of all the symbiotic bacteria and other organisms without which there are no humans. Pursuant to this thought, we might ascribe ourselves a third genome: information. No good scientist today believes that human beings are free-standing forms, even if most (or really almost all) do not make the critique of humanity or even individuality through a framework that understands these categories as historically emergent interfaces of capitalist exchange. However, to avoid naturalizing the laws of capitalism as simply an expression of the higher (Hegelian) laws of energetics and informatics (in which, for example, ATP can be thought to function as “capital”), this sense of “our” embeddedness in the ecosystem of the bios must be extended to that of the materiality of our historical societies, and particularly to their systems of mediation and representational practices of knowledge formation—including the operations of textuality, visuality, data visualization and money—which, with convergence today, means precisely, computation.

    If we want to understand the emergence of computation (and of the anthropocene), we must attend to the transformations and disappearances of life forms—of forms of life in the largest sense. And we must do so in spite of the fact that the sedimentation of the history of computation would neutralize certain aspects of human aspiration and of humanity—including, ultimately, even the referent of that latter sign—by means of law, culture, walls, drones, derivatives, what have you. The biosynthetic process of computation and human being gives rise to post-humanism only to reveal that there were never any humans here in the first place: We have never been human—we know this now. “Humanity,” as a protracted example of méconnaissance—as a problem of what could be called the humanizing-machine or, better perhaps, the human-machine—is on the wane.

    Naming the human-machine is of course a way of talking about the conquest, about colonialism, slavery, imperialism, and the racializing, sex-gender norm-enforcing regimes of the last 500 years of capitalism that created the ideological legitimation of its unprecedented violence in the so-called humanistic values it spat out. Aimé Césaire said it very clearly when he posed the scathing question in Discourse on Colonialism: “Civilization and Colonization?” (1972). “The human-machine” names precisely the mechanics of a humanism that at once resulted from and was deployed to do the work of humanizing planet Earth for the quantitative accountings of capital while at the same time divesting a large part of the planetary population of any claims to the human. Following David Golumbia, in The Cultural Logic of Computation (2009), we might look to Hobbes, automata and the component parts of the Leviathan for “human” emergence as a formation of capital. For so many, humanism was in effect more than just another name for violence, oppression, rape, enslavement and genocide—it was precisely a means to violence. “Humanity” as symptom of The Invisible Hand, AI’s avatar. Thus it is possible to see the end of humanism as a result of decolonization struggles, a kind of triumph. The colonized have outlasted the humans. But so have the capitalists.

    This is another place where recalling the dialectic is particularly useful. Enlightenment Humanism was a platform for the linear time of industrialization and the French revolution with “the human” as an operating system, a meta-ISA emerging in historical movement, one that developed a set of ontological claims which functioned in accord with the early period of capitalist digitality. The period was characterized by the institutionalization of relative equality (Cedric Robinson does not hesitate to point out that the precondition of the French Revolution was colonial slavery), privacy, property. Not only were its achievements and horrors inseparable from the imposition of logics of numerical equivalence, they were powered by the labor of the peoples of Earth, by the labor-power of disparate peoples, imported as sugar and spices, stolen as slaves, music and art, owned as objective wealth in the form of lands, armies, edifices and capital, and owned again as subjective wealth in the form of cultural refinement, aesthetic sensibility, bourgeois interiority—in short, colonial labor, enclosed by accountants and the whip, was expatriated as profit, while industrial labor, also expropriated, was itself sustained by these endeavors. The accumulation of the wealth of the world and of self-possession for some was organized and legitimated by humanism, even as those worlded by the growth of this wealth struggled passionately, desultorily, existentially, partially and at times absolutely against its oppressive powers of objectification and quantification. Humanism was colonial software, and the colonized were the outsourced content providers—the first content providers—recruited to support the platform of so-called universal man. This platform humanism is not so much a metaphor; rather it is the tendency that is unveiled by the present platform post-humanism of computational racial capital.
The anatomy of man is the key to the anatomy of the ape, as Marx so eloquently put the telos of man. Is the anatomy of computation the key to the anatomy of “man”?

    So the end of humanism, which in a narrow (white, Euro-American, technocratic) view seems to arrive as a result of the rise of cyber-technologies, must also be seen as having been long willed and indeed brought about by the decolonizing struggles against humanism’s self-contradictory and, from the point of view of its own self-proclaimed values, specious organization. Making this claim is consistent with Césaire’s insight that people of the third world built the European metropoles. Today’s disappearance of the human might mean, for the colonizers who invested so heavily in their humanisms, that Dr. Moreau’s vivisectioned cyber-chickens are coming home to roost. Fatally, it seems, since Global North immigration policy, internment centers, border walls, police forces give the lie to any pretense of humanism. It might be gleaned that the revolution against the humans has also been impacted by our machines. However, the POTUSian defeat of the so-called humans is double-edged to say the least. The dialectic of posthuman abundance on the one hand and the posthuman abundance of dispossession on the other has no truck with humanity. Today’s mainstream futurologists mostly see “the singularity” and apocalypse. Critics of the posthuman with commitments to anti-racist world-making have clearly understood the dominant discourse on the posthuman not as the end of the white liberal human subject but precisely, when in the hands of those not committed to an anti-racist and decolonial project, as a means for its perpetuation—a way of extending the unmarked, transcendental, sovereign subject (of Hobbes, Descartes, C.B. Macpherson)—effectively the white male sovereign who was in possession of a body rather than forced to be a body.
Sovereignty itself must change (in order, as Giuseppe Lampedusa taught us, to remain the same), for if one sees production and innovation on the side of labor, then capital’s need to contain labor’s increasing self-organization has driven it into a position where the human has become an impediment to its continued expansion. Human rights, though at times also a means to further expropriation, are today in the way.

    Let’s say that it is global labor that is shaking off the yoke of the human from without, as much as it is the digital machines that are devouring it from within. The dialectic of computational racial capital devours the human as a way of revolutionizing the productive forces. Weapon-makers, states, and banks, along with Hollywood and student debt, invoke the human only as a skeuomorph—an allusion to an old technology that helps facilitate adoption of the new. Put another way, the human has become a barrier to production; it is no longer a sustainable form. The human, and those (human and otherwise) falling under the paradigm’s dominion, must be stripped, cut, bundled, reconfigured in derivative forms. All hail the dividual. Again, female and racialized bodies and subjects have long endured this now universal fragmentation and forced recomposition, and very likely dividuality may also describe a precapitalist, pre-colonial interface with the social. However we are obliged to point out that this, the current dissolution of the human into the infrastructure of the world-system, is double-edged, neither fully positive, nor fully negative—the result of the dialectics of struggles for liberation distributed around the planet. As a sign of the times, posthumanism may be, as has been remarked about capitalism itself, among those simultaneously best and worst things to ever happen in history. On the one hand, the disappearance of presumably ontological protections and legitimating status for some (including the promise of rights never granted to most); on the other, the disappearance of a modality of dehumanization and exclusion that legitimated and normalized white supremacist patriarchy by allowing its values to masquerade as universals.
However, it is difficult to maintain optimism of the will when we see that that which is coming, that which is already upon us, may also be as bad or worse, and in absolute numbers is already worse, for unprecedented billions of concrete individuals. Frankly, in a world where the cognitive-linguistic functions of the species have themselves been captured by the ambient capitalist computation of social media and indeed of capitalized computational social relations, of what use is a theory of dispossession to the dispossessed?

    For those of us who may consider ourselves thinkers, it is our burden—in a real sense, our debt, living and ancestral—to make theory relevant to those who haunt it. Anything less is betrayal. The emergence of the universal value form (as money, the general form of wealth) with its human face (as white-maleness, the general form of humanity) clearly inveighs against the possibility of extrinsic valuation since the very notion of universal valuation is posited from within this economy. What Cedric Robinson shows in his extraordinary Black Marxism (1983) is that capitalism itself is a white mythology. The histories of racialization and capitalization are inseparable, and the treatment of capital as a pure abstraction deracinates its origins and functions—both its conditions of possibility as well as its operations—including those of the internal critique of capitalism that has been the basis of much of the Marxist tradition. Both capitalism and its negation as Marxism have proceeded through a disavowal of racialization. Quantitative exchanges of equivalents, circulating as exchange values without qualities, are the real abstractions that give rise to philosophy, science, and white liberal humanism wedded to the notion of the objective. Therefore, when it comes to values, there is no degree zero, only perhaps nodal points of bounded equilibrium. To claim neutrality for an early digital machine, say, money—that is, to argue that money as a medium is value-neutral because it embodies what has (in many respects correctly, but in a qualified way) been termed “the universal value form”—would be to miss the entire system of leveraged exploitation that sustains the money-system.
In an isolated instance, money as the product of capital might be used for good (building shelters for the homeless) or for ill (purchasing Caterpillar bulldozers) or both (building shelters using Caterpillar machines), but not to see that the capitalist system sustains itself through militarized and policed expropriation and large-scale, long-term universal degradation is to engage in mere delusional utopianism and self-interested (might one even say psychotic?) naysaying.

    Will the apologists calmly bear witness to the sacrifice of billions of human beings so that the invisible hand may placidly unfurl its/their abstractions in Kubrickian sublimity? 2001’s (Kubrick 1968) cold long shot of the species’ lifespan as an instance of a cosmic program is not so distant from the endemic violence of the postmodern—and, indeed, post-human—fascism Kubrick depicted in A Clockwork Orange (Kubrick 1971). Arguably, 2001 rendered the cosmology of early Posthuman Fascism while A Clockwork Orange portrayed its psychology. Both films explored the aesthetics of programming. For the individual and for the species, what we beheld in these two films was the annihilation of our agency—and it was eerily seductive, Benjamin’s self-destruction as an aesthetic pleasure of the highest order taken to cosmic proportions and raised to the level of Art (1969).

    So what of the remainders of those who may remain? Here, in the face of the annihilation of remaindered life (to borrow a powerfully dialectical term from Neferti Tadiar, 2016) by various iterations of techné, we are posing the following question: how are computers and digital computing, as universals, themselves an iteration of long-standing historical inequality, violence, and murder, and what are the entry points for an understanding of computation-society in which our currently pre-historic (in Marx’s sense of the term) conditions of computation might be assessed and overcome? This question of technical overdetermination is not a matter of a Kittlerian-style anti-humanism in which “media determine our situation,” nor is it a matter of the post-Kittlerian, seemingly user-friendly repurposing of dialectical materialism which in the beer-drinking tradition of “good-German” idealism, offers us the poorly historicized, neo-liberal idea of “cultural techniques” courtesy of Cornelia Vismann and Bernhard Siegert (Vismann 2013, 83-93; Siegert 2013, 48-65). This latter is a conveniently deracinated way of conceptualizing the distributed agency of everything techno-human without having to register the abiding fundamental antagonisms, the life and death struggle, in anything. Rather, the question I want to pose about computing is one capable of both foregrounding and interrogating violence, assigning responsibility, making changes, and demanding reparations. The challenge upon us is to decolonize computing. Has the waning not just of affect (of a certain type) but of history itself brought us into a supposedly post-historical space? Can we see that what we once called history, and is now no longer, really has been pre-history, stages of pre-history? 
What would it mean to say in earnest “What’s past is prologue?”[6] If the human has never been and should never be, if there has been this accumulation of negative entropy first via linear time and then via its disruption, then what? Postmodernism, posthumanism, Flusser’s post-historical, and Berardi’s After the Future notwithstanding, can we take the measure of history?

    Figure 1. Hollerith punch card (image source: Library of Congress, http://memory.loc.gov/mss/mcc/023/0008.jpg)

    Techno-Humanist Dehumanization

    I would like to conclude this essay with a few examples of techno-humanist dehumanization. In 1889, Herman Hollerith patented the punch card system and mechanical tabulator that was used in the 1890 United States census and subsequently in censuses in Germany, England, Italy, Russia, Austria, Canada, France, Norway, Puerto Rico, Cuba, and the Philippines. A national census, which normally took eight to ten years to tabulate, could now be completed in a single year. The subsequent invention of the plugboard control panel in 1906 allowed tabulators to perform multiple sorts in whatever sequence was selected without having to be rebuilt—an early form of programming. Hollerith’s Tabulating Machine Company merged with three other companies in 1911 to become the Computing Tabulating Recording Company, which renamed itself IBM in 1924.
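    The plugboard’s significance as “an early form of programming” can be glossed with a toy sketch (a hypothetical modern analogue, not a reconstruction of Hollerith’s actual machinery): if each census record is a card of punched fields, then the “plugboard” is simply the sequence of fields chosen as sort keys, and re-wiring it changes the tabulation without rebuilding anything.

```python
# Illustrative sketch only: a modern analogue of plugboard-configurable
# sorting, not a reconstruction of Hollerith's tabulator. Each "card" is
# a record of punched fields; the "plugboard" is just the sequence of
# field names wired up as sort keys.

def tabulate(cards, plugboard):
    """Sort cards by the field sequence 'wired' on the plugboard."""
    return sorted(cards, key=lambda card: tuple(card[field] for field in plugboard))

cards = [
    {"district": 2, "occupation": "clerk", "age": 34},
    {"district": 1, "occupation": "farmer", "age": 51},
    {"district": 1, "occupation": "clerk", "age": 29},
]

# Re-wiring the plugboard changes the sort sequence without rebuilding
# the machine.
by_district = tabulate(cards, ["district", "age"])
by_occupation = tabulate(cards, ["occupation", "district"])
```

    The point of the 1906 panel, on this reading, is that it externalized the sort order as a configurable parameter: an early separation of program from machine.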

    While the census opens a rich field of inquiry that includes questions of statistics, computing, and state power that are increasingly relevant today (particularly taking into account the ever-presence of the NSA), for now I only want to extract two points: 1) humans became the fodder for statistical machines, and 2) as Vince Rafael has shown regarding the Philippine census and as Edwin Black has shown with respect to the holocaust, the development of this technology was inseparable from racialization and genocide (Rafael 2000; Black 2001).

    Rafael shows that, coupled to photographic techniques, the census at once “discerned” and imposed a racializing schema that welded historical “progress” to ever-whiter waves of colonization, from Malay migration to Spanish Colonialism to U.S. Imperialism (2000). Racial fantasy meets white mythology meets World Spirit. For his part, Edwin Black (2001) writes:

    Only after Jews were identified—a massive and complex task that Hitler wanted done immediately—could they be targeted for efficient asset confiscation, ghettoization, deportation, enslaved labor, and, ultimately, annihilation. It was a cross-tabulation and organizational challenge so monumental, it called for a computer. Of course, in the 1930s no computer existed.

    But IBM’s Hollerith punch card technology did exist. Aided by the company’s custom-designed and constantly updated Hollerith systems, Hitler was able to automate his persecution of the Jews. Historians have always been amazed at the speed and accuracy with which the Nazis were able to identify and locate European Jewry. Until now, the pieces of this puzzle have never been fully assembled. The fact is, IBM technology was used to organize nearly everything in Germany and then Nazi Europe, from the identification of the Jews in censuses, registrations, and ancestral tracing programs to the running of railroads and organizing of concentration camp slave labor.

    IBM and its German subsidiary custom-designed complex solutions, one by one, anticipating the Reich’s needs. They did not merely sell the machines and walk away. Instead, IBM leased these machines for high fees and became the sole source of the billions of punch cards Hitler needed (Black 2001).

    The sorting of populations and individuals by forms of social difference including “race,” ability and sexual preference (Jews, Roma, homosexuals, people deemed mentally or physically handicapped) for the purposes of sending people who failed to meet Nazi eugenic criteria off to concentration camps to be dispossessed, humiliated, tortured and killed, means that some aspects of computer technology—here, the Search Engine—emerged from this particular social necessity sometimes called Nazism (Black 2001). The Philippine-American War, in which Americans killed between 1/10th and 1/6th of the population of the Philippines, and the Nazi-administered holocaust are but two world historical events that are part of the meaning of early computational automation. We might say that computers bear the legacy of imperialism and fascism—it is inscribed in their operating systems.

    The mechanisms, as well as the social meaning of computation, were refined in its concrete applications. The process of abstraction hid the violence of abstraction, even as it integrated the result with economic and political protocols and directly effected certain behaviors. It is a well-known fact that Claude Shannon’s landmark paper, “A Mathematical Theory of Communication,” proposed a general theory of communication that was content-indifferent (1948, 379-423). This seminal work created a statistical, mathematical model of communication while simultaneously consigning any and all specific content to irrelevance as regards the transmission method itself. Like use-value under the management of the commodity form, the message became only a supplement to the exchange value of the code. Elsewhere I have more to say about the fact that some of the statistical information Shannon derived about letter frequency in English used as its ur-text, Jefferson The Virginian (1948), the first volume of Dumas Malone’s monumental six-volume study of Jefferson, famously interrogated by Annette Gordon-Reed in her Thomas Jefferson and Sally Hemings: An American Controversy for its suppression of information regarding Jefferson’s relation to slavery (1997).[7] My point here is that the rules for content indifference were themselves derived from a particular content and that the language used as a standard referent was a specific deployment of language. The representative linguistic sample did not represent the whole of language, but language that belongs to a particular mode of sociality and racialized enfranchisement.
Shannon’s deprivileging of the referent of the logos as referent, and his attention only to the signifiers, was an intensification of the slippage of signifier from signified (“We, the people…”) already noted in linguistics and functionally operative in the elision of slavery in Jefferson’s biography, to say nothing of the same text’s elision of slave-narrative and African-American speech. Shannon brilliantly and successfully developed a re-conceptualization of language as code (sign system) and now as mathematical code (numerical system) that no doubt found another of its logical (and material) conclusions (at least with respect to metaphysics) in post-structuralist theory and deconstruction, with the placing of the referent under erasure. This recession of the real (of being, the subject, and experience—in short, the signified) from codification allowed Shannon’s mathematical abstraction of rules for the transmission of any message whatsoever to become the industry standard even as they also meant, quite literally, the dehumanization of communication—its severance from a people’s history.
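    The corpus-dependence of Shannon’s statistics can be made concrete with a small sketch (the sample strings here are placeholders, not Shannon’s actual sources): the letter-frequency table, and hence the first-order entropy estimate that fixes a code’s “efficiency,” changes with the text chosen as the standard.

```python
# Illustrative sketch: first-order letter entropy in the manner of
# Shannon's frequency estimates. The sample strings are placeholders,
# not Shannon's corpus; the point is only that "the statistics of
# English" depend on which English is sampled.
import math
from collections import Counter

def letter_entropy(text):
    """First-order entropy (bits per letter) of a text's letter distribution."""
    letters = [ch for ch in text.lower() if ch.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

sample_a = "we hold these truths to be self evident"
sample_b = "a different corpus yields different statistics"

# Different source texts yield different frequency tables, hence
# different entropy estimates and different "optimal" codes.
h_a = letter_entropy(sample_a)
h_b = letter_entropy(sample_b)
```

    Whichever text is plugged in becomes the “representative” English against which transmission is optimized.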

    In a 1987 interview, Shannon was quoted as saying “I can visualize a time in the future when we will be to robots as dogs are to humans…. I’m rooting for the machines!” (1987). If humans are the robot’s companion species, they (or is it we?) need a manifesto. The difficulty is that the labor of our “being,” such as it is/was, is encrypted in their function. And “we” have never been “one.”

    Tara McPherson has brilliantly argued that the modularity achieved in the development of UNIX has its analogue in racial segregation. Modularity and encapsulation, necessary to the writing of the UNIX code that still underpins contemporary operating systems, were emergent general socio-technical forms, what we might call technologies, abstract machines, or real abstractions. “I am not arguing that programmers creating UNIX at Bell Labs and at Berkeley were consciously encoding new modes of racism and racial understanding into digital systems,” McPherson argues, “The emergence of covert racism and its rhetoric of colorblindness are not so much intentional as systemic. Computation is a primary delivery method of these new systems and it seems at best naïve to imagine that cultural and computational operating systems don’t mutually infect one another.” (in Nakamura 2012, 30-31; italics in original)
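    McPherson’s technical premise, that modularity hides internal operation behind a clean interface, can be illustrated with a minimal sketch (a generic illustration of encapsulation, not actual UNIX code): the criteria doing the work are invisible at the call site.

```python
# Minimal illustration of modularity/encapsulation as a general design
# principle (not actual UNIX code): the caller sees only a clean
# interface, while the rules doing the work stay hidden inside the
# module boundary.

class Scorer:
    """Public interface: score(record). The criteria are encapsulated."""

    def __init__(self):
        # Hidden internals: this weighting is invisible at the call site.
        self._weights = {"zip_code": 0.7, "income": 0.3}

    def score(self, record):
        return sum(self._weights[k] * record[k] for k in self._weights)

applicant = {"zip_code": 0.2, "income": 0.9}
result = Scorer().score(applicant)  # the caller never inspects the weights
```

    Encapsulation of this kind is what makes a system’s operation “covert” in McPherson’s sense: the logic is carried in hidden internals rather than displayed at the interface.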

    This is the computational unconscious at work—the dialectical inscription and re-inscription of sociality and machine architecture that then becomes the substrate for the next generation of consciousness, ad infinitum. In a recent unpublished paper entitled “The Lorem Ipsum Project,” Alana Ramjit (2014) examines industry standards for the now-digital imaging of speech and graphic images. These include Kodak’s “Shirley cards” for standard skin tone (white), the Harvard Sentences for standard audio (white), the “Indian Head Test Pattern” for standard broadcast image (white fetishism), and “Lenna,” an image of Lena Soderberg taken from Playboy magazine (white patriarchal unconscious) that has become the reference standard image for the development of graphics processing. Each of these examples testifies to an absorption of the socio-historical at every step of mediological and computational refinement.

    More recently, as Chris Vitale brought out in a powerful presentation on machine learning and neural networks given at Pratt Institute in 2016, Facebook’s machine has produced “Deep Face,” an image of the minimally recognizable human face. However, this ur-human face, purported to be the minimally recognizable form of the human face, turns out to be a white guy. This is a case in point of the extension of colonial relations into machine function. Given the racialization of poverty in the system of global apartheid (Federici 2012), we have on our hands (or, rather, in our machines) a new modality of automated genocide. Fascism and genocide have new mediations and may not just have adapted to new media but may have merged with them. Of course, the terms and names of genocidal regimes change, but the consequences persist. Just yesterday it was called neo-liberal democracy. Today it’s called the end of neo-liberalism. The current world-wide crisis in migration is one of the symptoms of the genocidal tendencies of the most recent coalescence of the “practically” automated logistics of race, nation and class. Today racism is at once a symptom of the computational unconscious, an operation of non-conscious cognition, and still just the garden-variety self-serving murderous stupidity that is the legacy of slavery, settler colonialism and colonialism.

    Thus we may observe that the statistical methods utilized by IBM to find Jews in the shtetl are operative in Wiener’s anti-aircraft cybernetics as well as in Israel’s Iron Dome missile defense system. But the prevailing view, even if it is not one of pure mathematical abstraction, in which computational process has its essence without reference to any concrete whatever, can be found in what follows. As an article entitled “Traces of Israel’s Iron Dome Can Be Found in Tech Startups,” written for Bloomberg News, almost giddily reports:

    The Israeli-engineered Iron Dome is a complex tapestry of machinery, software and computer algorithms capable of intercepting and destroying rockets midair. An offshoot of the missile-defense technology can also be used to sell you furniture. (Coppola 2014)[8]

    Not only is war good computer business, it’s good for computerized business. It is ironic that the Iron Dome is likened to a tapestry and now used to sell textiles – almost as if it were haunted by Lisa Nakamura’s recent findings regarding the (forgotten) role of Navajo women weavers in the making of early transistors for Fairchild, the legendary Silicon Valley company whose founders emerged from the laboratory of founding father and infamous eugenicist William Shockley.[9] The article goes on to confess that the latest consumer spin-offs that facilitate the real-time imaging of couches in your living room, capable of driving sales on the domestic front, exist thanks to U.S. financial support for Zionism and its militarized settler colonialism in Palestine. “We have American-backed apartheid and genocide to thank for being able to visualize a green moderne couch in our very own living room before we click ‘Buy now.’” (Okay, this is not really a quotation, but it could have been.)

    Census, statistics, informatics, cryptography, war machines, industry standards, markets—all management techniques for the organization of otherwise unruly humans, sub-humans, posthumans and nonhumans by capitalist society. The ethos of content indifference, along with the encryption of social difference as both mode and means of systemic functionality, is sustainable only so long as derivative human beings are themselves rendered as content providers, body and soul. But it is not only tech spinoffs from the racist war dividends that we should be tracking. Wendy Chun (2004, 26-51) has shown in utterly convincing ways that the gendered history of the development of computer programming at ENIAC, in which male mathematicians instructed female programmers to physically make the electronic connections (and remove any bugs), echoes into the present experiences of sovereignty enjoyed by users who have, in many respects, become programmers (even if most of us have little or no idea how programming works, or even that we are programming).

    Chun notes that “during World War II almost all computers were young women with some background in mathematics. Not only were women available for work then, they were also considered to be better, more conscientious computers, presumably because they were better at repetitious, clerical tasks” (Chun 2004, 33). One could say that programming became programming and software became software when commands shifted from commanding a “girl” to commanding a machine. Clearly this puts the gender of the commander in question.

    Chun suggests that the augmentation of our power through the command-control functions of computation is a result of what she calls the “Yes sir” of the feminized operator—that is, of servile labor (2004). Indeed, in the ENIAC and other early machines the execution of the operator’s order was to be carried out by the “wren” or the “slave.” For the desensitized, this information may seem incidental, a mere development or advance beyond the instrumentum vocale (the “speaking tool,” a Roman term for “slave”) in which even the communicative capacities of the slave are totally subordinated to the master. Here we must struggle to pose the larger question: what are the implications of this gendered and racialized form of power exercised in the interface? What is its relation to gender oppression, to slavery? Is this mode of command-control over bodies, now extended to the machine, a universal form of empowerment, one to which all (posthuman) bodies might aspire, or is it a mode of subjectification built in the footprint of domination in such a way that it replicates the beliefs, practices and consequences of “prior” orders of whiteness and masculinity in unconscious but nonetheless murderous ways?[10] Is the computer the realization of the power of a transcendental subject, or of the subject whose transcendence was built upon a historically developed version of racial masculinity based upon slavery and gender violence?

    Andrew Norman Wilson’s scandalizing film Workers Leaving the Googleplex (2011), the making of which got him fired from Google, depicts lower-class, mostly of-color workers leaving the Google Mountain View campus during off hours. These workers, the book scanners, shared neither spaces nor perks with Google’s white-collar workers; they had different parking lots and entrances, and drove a different class of vehicles. Wilson has also curated and developed a set of images that show the condom-clad fingers (black, brown, female) of workers next to partially scanned book pages. He considers these mis-scans new forms of documentary evidence. While digitization and computation may seem to have transcended certain humanistic questions, it is imperative that we understand that their posthumanism is also radically untranscendent, grounded as it is on the living legacies of oppression and, in the last instance, on the radical dispossession of billions. These billions are disappeared, literally utilized as a surface of inscription for everyday transmissions. The dispossessed are the substrate of the codification process by the sovereign operators commanding their screens. The digitized, rewritable screen pixels are just the visible top-side (virtualized surface) of bodies dispossessed by capital’s digital algorithms on the bottom-side where, arguably, other metaphysics still pertain. Not Hegel’s world spirit—whether in the form of Kurzweil’s singularity or Tegmark’s computronium—but rather Marx’s imperative towards a ruthless critique of everything existing can begin to explain how and why the current computational eco-system is co-functional with the unprecedented dispossession wrought by racial computational capitalism and its system of global apartheid. Racial capitalism’s programs continue to function on the backs of those consigned to servitude.
Data-visualization, whether in the form of selfie, global map, digitized classic or downloadable sound of the Big Bang, is powered by this elision. It is, shall we say, inescapably local to planet earth, fundamentally historical in relation to species emergence, inexorably complicit with the deferral of justice.

    The Global South, with its now world-wide distribution, is endemic to the geopolitics of computational racial capital—it is one of its extraordinary products. The computronics that organize the flow of capital through its materials and signs also organize the consciousness of capital and with it the cosmological erasure of the Global South. Thus the computational unconscious names a vast aspect of global function that still requires analysis. And thus we sneak up on the two principal meanings of the concept of the computational unconscious. On the one hand, we have the problematic residue of amortized consciousness (and the praxis thereof) that has gone into the making of contemporary infrastructure—meaning to say, the structural repression and forgetting that is endemic to the very essence of our technological buildout. On the other hand, we have the organization of everyday life taking place on the basis of this amortization, that is, on the basis of a dehistoricized, deracinated relation to both concrete and abstract machines that function by virtue of the fact that intelligible history has been shorn off of them and its legibility purged from their operating systems. Put simply, we have forgetting, the radical disappearance and expunging from memory of the historical conditions of possibility of what is. As a consequence, we have the organization of social practice and futurity (or lack thereof) on the basis of this encoded absence. The capture of the general intellect means also the management of the general antagonism. Never has it been truer that memory requires forgetting – the exponential growth in memory storage means also an exponential growth in systematic forgetting – the withering away of the analogue.
As a thought experiment, one might imagine a vast and empty vestibule, a James Ingo Freed global holocaust memorial of unprecedented scale, containing all the oceans and lands real and virtual, and dedicated to all the forgotten names of the colonized, the enslaved, the encamped, the statisticized, the read, written and rendered, in the history of computational calculus—of computer memory. These too, and the anthropocene itself, are the sedimented traces that remain among the constituents of the computational unconscious.

    _____

    Jonathan Beller is Professor of Humanities and Media Studies and Director of the Graduate Program in Media Studies at Pratt Institute. His books include The Cinematic Mode of Production: Attention Economy and the Society of the Spectacle (2006); Acquiring Eyes: Philippine Visuality, Nationalist Struggle, and the World-Media System (2006); and The Message Is Murder: Substrates of Computational Capital (2017). He is a member of the Social Text editorial collective.


    _____

    Notes

    [1] A reviewer of this essay for b2o: An Online Journal notes, “the phrase ‘digital computer’ suggests something like the Turing machine, part of which is characterized by a second-order process of symbolization—the marks on Turing’s tape can stand for anything, & the machine processing the tape does not ‘know’ what the marks ‘mean.’” It is precisely such content-indifferent processing that the term “exchange value,” severed as it is from all qualities, indicates.

    [2] It should be noted that the reverse is also true: that race and gender can be considered and/as technologies. See Chun (2012), de Lauretis (1987).

    [3] To insist on first causes or a priori consciousness in the form of God or Truth or Reality is to confront Marx’s earlier acerbic statement against a form of abstraction that eliminates the moment of knowing from the known in The Economic and Philosophic Manuscripts of 1844,

    Who begot the first man and nature as a whole? I can only answer you: Your question is itself a product of abstraction. Ask yourself how you arrived at that question. Ask yourself if that question is not posed from a standpoint to which I cannot reply, because it is a perverse one. Ask yourself whether such a progression exists for a rational mind. When you ask about the creation of nature and man you are abstracting in so doing from man and nature. You postulate them as non-existent and yet you want me to prove them to you as existing. Now I say give up your abstraction and you will give up your question. Or, if you want to hold onto your abstraction, then be consistent, and if you think of man and nature as non-existent, then think of yourself as non-existent, for you too are surely man and nature. Don’t think, don’t ask me, for as soon as you think and ask, your abstraction from the existence of nature and man has no meaning. Or are you such an egoist that you postulate everything as nothing and yet want yourself to be? (Tucker 1978, 92)

    [4] If one takes the derivative of computational process at a particular point in space-time one gets an image. If one integrates the images over the variables of space and time, one gets a calculated exploit, a pathway for value-extraction. The image is a moment in this process, the summation of images is the movement of the process.

    [5] See Harney and Moten (2013). See also Browne (2015), especially 43-50.

    [6] In practical terms, the Alternative Informatics Association, in the announcement for their Internet Ungovernance Forum puts things as follows:

    We think that Internet’s problems do not originate from technology alone, that none of these problems are independent of the political, social and economic contexts within which Internet and other digital infrastructures are integrated. We want to re-structure Internet as the basic infrastructure of our society, cities, education, healthcare, business, media, communication, culture and daily activities. This is the purpose for which we organize this forum.

    The significance of creating solidarity networks for a free and equal Internet has also emerged in the process of the event’s organization. Pioneered by Alternative Informatics Association, the event has gained support from many prestigious organizations worldwide in the field. In this two-day event, fundamental topics are decided to be ‘Surveillance, Censorship and Freedom of Expression, Alternative Media, Net Neutrality, Digital Divide, governance and technical solutions’. Draft of the event’s schedule can be reached at https://iuf.alternatifbilisim.org/index-tr.html#program (Fidaner, 2014).

    [7] See Beller (2016, 2017).

    [8] Coppola writes that “Israel owes much of its technological prowess to the country’s near-constant state of war. The nation spent $15.2 billion, or roughly 6 percent of gross domestic product, on defense last year, according to data from the International Institute of Strategic Studies, a U.K. think-tank. That’s double the proportion of defense spending to GDP for the U.S., a longtime Israeli ally. If there’s one thing the U.S. Congress can agree on these days, it’s continued support for Israel’s defense technology. Legislators approved $225 million in emergency spending for Iron Dome on Aug. 1, and President Barack Obama signed it into law three days later.”

    [9] Nakamura (2014).

    [10] For more on this, see Eglash (2007).

    _____

    Works Cited

    • Althusser, Louis. 1977. Lenin and Philosophy. London: NLB.
    • Azoulay, Ariella. 2012. Civil Imagination: A Political Ontology of Photography. London: Verso.
    • Beller, Jonathan. 2006. The Cinematic Mode of Production: Attention Economy and the Society of the Spectacle. Hanover, NH: Dartmouth College Press.
    • Beller, Jonathan. 2016. “Fragment from The Message is Murder.” Social Text 34:3. 137-152.
    • Beller, Jonathan. 2017. The Message is Murder: Substrates of Computational Capital. London: Pluto Press.
    • Benjamin, Walter. 1969. Illuminations. New York: Schocken Books.
    • Black, Edwin. 2001. IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America’s Most Powerful Corporation. New York: Crown Publishers.
    • Browne, Simone. 2015. Dark Matters: On the Surveillance of Blackness. Durham: Duke University Press.
    • Césaire, Aimé. 1972. Discourse on Colonialism. New York: Monthly Review Press.
    • Coppola, Gabrielle. 2014. “Traces of Israel’s Iron Dome Can Be Found in Tech Startups.” Bloomberg News (Aug 11).
    • Chun, Wendy Hui Kyong. 2004. “On Software, or the Persistence of Visual Knowledge,” Grey Room 18, Winter: 26-51.
    • Chun, Wendy Hui Kyong. 2012. In Nakamura and Chow-White (2012). 38-69.
    • De Lauretis, Teresa. 1987. Technologies of Gender: Essays on Theory, Film, and Fiction. Bloomington, IN: Indiana University Press.
    • Dyer-Witheford, Nick. 2012. “Red Plenty Platforms.” Culture Machine 14.
    • Eglash, Ron. 2007. “Broken Metaphor: The Master-Slave Analogy in Technical Literature.” Technology and Culture 48:3. 1-9.
    • Federici, Silvia. 2012. Revolution at Point Zero: Housework, Reproduction, and Feminist Struggle. Oakland, CA: PM Press.
    • Fidaner, Işık Barış. 2014. Email broadcast on ICTs and Society listserv (Aug 29).
    • Flusser, Vilém. 2000. Towards a Philosophy of Photography. London: Reaktion Books.
    • Golumbia, David. 2009. The Cultural Logic of Computation. Cambridge: Harvard University Press.
    • Gordon-Reed, Annette. 1998. Thomas Jefferson and Sally Hemings: An American Controversy. Charlottesville: University of Virginia Press.
    • Harney, Stefano and Fred Moten. 2013. The Undercommons: Fugitive Planning and Black Study. Brooklyn: Autonomedia.
    • Hayles, Katherine N. 2016. “The Cognitive NonConscious.” Critical Inquiry 42:4. 783-808.
    • Hofstadter, Douglas. 1979. Gödel, Escher, Bach: An Eternal Golden Braid. New York: Penguin Books.
    • Jameson, Fredric. 1981. The Political Unconscious: Narrative as a Socially Symbolic Act. Ithaca: Cornell University Press.
    • Kubrick, Stanley, dir. 1968. 2001: A Space Odyssey. Film.
    • Kubrick, Stanley, dir. 1971. A Clockwork Orange. Film.
    • Liu, Lydia He. 2010. The Freudian Robot: Digital Media and the Future of the Unconscious. Chicago: University of Chicago Press.
    • Luhmann, Niklas. 1989. Ecological Communication. Chicago: University of Chicago Press.
    • McPherson, Tara. 2012. “U.S. Operating Systems at Mid-Century: The Intertwining of Race and UNIX.” In Nakamura and Chow-White (2012). 21-37.
    • Malone, Dumas. 1948. Jefferson the Virginian. Boston: Little, Brown and Company.
    • Marx, Karl and Frederick Engels. 1978. The German Ideology. In The Marx-Engels Reader, 2nd edition. Edited by Robert C. Tucker. New York: Norton.
    • Marx, Karl and Frederick Engels. 1986. Collected Works, Vol. 28. New York: International Publishers.
    • Maturana, Humberto and Francisco Varela. 1992. The Tree of Knowledge: The Biological Roots of Human Understanding. Boston: Shambhala.
    • Mulvey, Laura. 1975. “Visual Pleasure and Narrative Cinema.” Screen 16:3. 6-18.
    • Nakamura, Lisa. 2014. “Indigenous Circuits.” Computer History Museum (Jan 2).
    • Nakamura, Lisa and Peter A. Chow-White, eds. 2012. Race After the Internet. New York: Routledge.
    • Rafael, Vicente. 2000. White Love: And Other Events in Filipino History. Durham: Duke University Press.
    • Ramjit, Alana. 2014. “The Lorem Ipsum Project.” Unpublished manuscript.
    • “Rebooting the Cosmos: Is the Universe the Ultimate Computer? [Replay].” 2011. In-Depth Report: The World Science Festival 2011: Encore Presentations and More. Scientific American.
    • Robinson, Cedric. 1983. Black Marxism: The Making of the Black Radical Tradition. Chapel Hill: The University of North Carolina Press.
    • Shannon, Claude. 1948. “A Mathematical Theory of Communication.” The Bell System Technical Journal. July: 379-423; October: 623-656.
    • Shannon, Claude and Warren Weaver. 1971. The Mathematical Theory of Communication. Urbana: University of Illinois Press.
    • Siegert, Bernhard. 2013. “Cultural Techniques: Or the End of the Intellectual Postwar Era in German Media Theory.” Theory, Culture and Society 30. 48-65.
    • Tadiar, Neferti. 2016. “City Everywhere.” Theory, Culture and Society 33:7-8. 57-83.
    • Virno, Paolo. 2004. A Grammar of the Multitude. New York: Semiotext(e).
    • Vismann, Cornelia. 2013. “Cultural Techniques and Sovereignty.” Theory, Culture and Society 30. 83-93.
    • Wiener, Norbert. 1989. The Human Use of Human Beings: Cybernetics and Society. London: Free Association Books.
    • Wilson, Andrew Norman, dir. 2011. “Workers Leaving the Googleplex.” Video.

     

  • Gavin Mueller — Digital Proudhonism


    Gavin Mueller

    In a passage from his 2014 book Information Doesn’t Want to Be Free, author and copyright reformer Cory Doctorow sounds a familiar note against strict copyright: “Creators and investors lose control of their business—they become commodity suppliers for a distribution channel that calls all the shots. Anti-circumvention [laws such as the Digital Millennium Copyright Act, which prohibits subverting controls on the intended use of digital objects] isn’t copyright protection, it’s middleman protection” (50).

    This is the specter haunting the digital cultural economy, according to many of the most influential voices arguing to reform or disrupt it: the specter of the middleman, the monopolist, the distortionist of markets. Rather than an insurgency, this specter emanates from economic incumbency: these middlemen are the culture industries themselves. With the dual revolutions of personal computer and internet connection, record labels, book publishers, and movie studios could maintain their control and their profits only by asserting and strengthening intellectual property protections and squelching the new technologies that subverted them. Thus, these “monopolies” of cultural production threatened to prevent individual creators from using technology to reach their audiences independently.

    Such a critique became conventional wisdom among a rising tide of people who had become accustomed to using the powers of digital technology to copy and paste in order to produce and consume cultural texts, beginning with music. It was most comprehensively articulated in a body of arguments, largely produced by technology evangelists and tech-aligned legal professionals, hailing from the Free Culture movement spearheaded by Lawrence Lessig. The critique’s practical form was the host of piratical activities and peer-to-peer technologies that, in addition to obviating traditional distribution chains, dedicated themselves to attacking culture industries, as well as their trade organizations such as the Recording Industry Association of America (RIAA) and the Motion Picture Association of America (MPAA).

    Connected to this critique is an alternate vision of the digital economy, one that leverages new technological commons, peer production and network effects to empower creators. This vision has variations, and travels under a number of different political banners, from anarchist to libertarian to liberal, and many more who prefer no label.[1] It tells a compelling story (one Doctorow has adapted into novels for young people): against corporate monopolists and state regulation, a multitude, empowered by the democratizing effects bequeathed to society by networked personal computers, and other technologies springing from them, is poised to revolutionize the production of media and information, and, therefore, the political and economic structure as a whole. Work will be small-scale and independent, but, bereft of corporate behemoths, more lucrative than in the past.

    This paper traces the contours of the critique put forth by Doctorow and other revolutionaries of networked digital production in light of a nineteenth-century thinker who espoused remarkably similar arguments over a century ago: the French anarchist Pierre-Joseph Proudhon. Few of these writers are evident readers of Proudhon or explicitly subscribe to his views, though some, such as the Center for a Stateless Society, do. Rather than a formal doctrine, what I call “Digital Proudhonism” is better understood as what Raymond Williams (1977) calls a “structure of feeling”: a kind of “practical consciousness” that identifies “meanings and values as they are actively lived and felt” (132), in this case related to specific experiences of networked computer use. In the case under discussion these “affective elements of consciousness and relationships” are often articulated in a political, or at least polemical, register, with real effects on the political self-understanding of networked subjects, the projects they pursue, and their relationship to existing law, policy and institutions. Because of this, I seek to do more than identify currents of contemporary Digital Proudhonism. I maintain that the influence of this set of practices and ideas over the politics of digital production necessitates a critique. Here I argue that a return to Marx’s critique of Proudhon will aid us in piercing through the Digital Proudhonist mystifications of the Internet’s effects on politics and industry, and in reformulating both a theory of cultural production under digital capitalism and a radical politics of work and technology for the 21st century.

    From the Californian Ideology to Digital Proudhonism

    What I am calling Digital Proudhonism has precedent in the social critique of techno-utopian beliefs surrounding the internet. It echoes Langdon Winner’s (1997) diagnosis of “cyberlibertarianism” in the Progress and Freedom Foundation’s 1994 manifesto “Magna Carta for the Knowledge Age,” where “the wedding of digital technology and the free market” manages to “realize the most extravagant ideals of classical communitarian anarchism” (15). Above all, it bears a marked resemblance to Barbrook and Cameron’s (1996) landmark analysis of the “Californian Ideology,” that “bizarre mish-mash of hippie anarchism and economic liberalism beefed up with lots of technological determinism” emerging from the Wired magazine corners of the rise of networked computing, which claims that digital technology is the key to realizing freedom and autonomy (56). As the authors put it, “the Californian Ideology promiscuously combines the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies. This amalgamation of opposites has been achieved through a profound faith in the emancipatory potential of new information technologies” (45).

    My contribution will follow the argument of Barbrook and Cameron’s exemplary study. As good Marxists, they recognized that ideology was not merely an abstract belief system, but “offers a way of understanding the lived reality” (50) of a specific social base: “digital artisans” of programmers, software developers, hackers and other skilled technology workers who “not only tend to be well-paid, but also have considerable autonomy over their pace of work and place of employment” (49). Barbrook and Cameron located the antecedents of the Californian Ideology in Thomas Jefferson’s belief that democracy was best secured by self-sufficient individual farmers, a kind of freedom that, as the authors trenchantly note, “was based upon slavery for black people” (59).

    Thomas Jefferson is an oft-cited figure among the digital revolutionaries associated with copyright reform. Law professor James Boyle (2008) drafts Jefferson into the Free Culture movement as a fellow traveler who articulated “a skeptical recognition that intellectual property rights might be necessary, a careful explanation that they should not be treated as natural rights, and a warning of the monopolistic dangers that they pose” (21). Lawrence Lessig cites Jefferson’s remarks on intellectual property approvingly in Free Culture (2004, 84). “Thomas Jefferson and the other Founding Fathers were thoughtful, and got it right,” states Kembrew McLeod (2005) in his discussion of the U.S. Constitution’s clauses on patent and copyright (9).

    There is a deeper political and economic resonance between Jefferson and internet activists beyond his views on intellectual property. Jefferson’s ideal productive arrangement of society was small individual landowners and petty producers: the yeoman farmer. Jefferson believed that individual self-sufficiency guaranteed a democratic society. The abundance of land in the New World and the willingness to expropriate it from the indigenous peoples living there gave his fantasy a plausibility and attraction many Americans still feel today. It was this vision of America as a frontier, an empty space waiting to be filled by new social formations, that makes his philosophy resonate with the techno-adept described by Barbrook and Cameron, who viewed the Internet in a similar way. One of these Californians, John Perry Barlow (1996), who famously declared to “governments of the Industrial World” that “cyberspace does not lie within your borders,” even co-founded an organization dedicated to a deregulated internet called the “Electronic Frontier Foundation.”

    However, not everything online lent itself to the metaphor of a frontier. Particularly in the realm of music and video, artisans dealt with a field crowded with existing content, as well as thickets of intellectual property laws that attempted to regulate how that content was created and distributed. There could be no illusion of a blank canvas on which to project one’s ideal society: in fact, these artisans were noteworthy, not for producing work independently out of whole cloth, but for refashioning existing works through remix. Lawrence Lessig (2004) quotes mashup artist Girl Talk: “We’re living in this remix culture. This appropriation time where any grade-school kid has a copy of Photoshop and can download a picture of George Bush and manipulate his face how they want and send it to their friends” (14). The project of Lessig and others was not to create the conditions for erecting a new society upon a frontier, as a yeoman farmer might, but to politicize this class of artisans in order to challenge larger industrial concerns, such as record labels and film studios, who used copyright to protect their incumbent position. This very different terrain requires a different perspective from Jefferson’s.

    Thomas Jefferson’s vision is not the only expression of the fantasy of a society built on the basis of petty producers. In nineteenth-century Europe, where most land had long been tied up in hereditary estates, large and small, the yeoman farmer ideal held far less influence. Without a belief in abundant land, there could be no illusion of a blank canvas on which a new society could be created: some kind of revolutionary change would have to occur within and against the old one. And so a similar, yet distinct, political philosophy sprang up in France among a similar social base of artisans and craftsmen—those who tended to control their own work process and own their own tools—who made up a significant part of the French economy. As they were used to an individualized mode of production, they too believed that self-sufficiency guaranteed liberty and prosperity. The belief that society should be organized along the lines of petty individual commodity producers, without interference from the state—a belief remarkably consonant with a variety of digital utopians—found its most powerful expression in the ideas of Pierre-Joseph Proudhon. It is to his ideas that I now turn.

    What was Proudhonism?

    An anarchist whose ideas were influential in the International Workingmen’s Association, of which Karl Marx was also a part, Proudhon was especially popular in his native France, where the economy was rooted far more deeply in small-scale artisanal production than in the industrial-scale capitalism Marx experienced in Britain. His first major work, What Is Property? ([1840] 2011) (Proudhon’s pithy answer: property is theft), caught the attention of Marx, who admired the work’s thrust and style even while he criticized its grasp of the science of political economy. After attempting to win over Proudhon by teaching him political economy and Hegelian dialectics, Marx became a vehement critic of Proudhon’s ideas, which held more sway over the First International than Marx’s own.

    Proudhon was critical of the capitalism of his day, but made his criticisms, along with his ideas for a better society, from the perspective of a specific class. Rather than analyze, as Marx did, the contradictions of capitalism through the figure of the proletarian, who possesses nothing but their own capacity to work, Proudhon understood capitalism from the perspective of an artisanal small producer, who owns and labors with their own small-scale means of production. In David McNally’s (1993) survey of eighteenth- and nineteenth-century radical political economy, he summarizes Proudhon’s beliefs. Proudhon “envisages a society [of] small independent producers—peasants and artisans—who own the products of their personal labour, and then enter into a series of equal market exchanges. Such a society will, he insists, eliminate profit and property, and ‘pauperism, luxury, oppression, vice, crime and hunger will disappear from our midst’” (140).

    For Proudhon, the massive property accumulation of large firms and the accompanying state collusion distort these market exchanges. Under the prevailing system, he asserts in The Philosophy of Poverty, “there is irregularity and dishonesty in exchange” ([1847] 2012, 124), a problem exemplified by monopoly and its perversion of “all notions of commutative justice” (297). Monopoly permits unjust property extraction: Proudhon states in General Idea of the Revolution in the Nineteenth Century ([1851] 2003) that “the price of things is not proportionate to their VALUE: it is larger or smaller according to an influence which justice condemns, but the existing economic chaos excuses” (228). Exploitation thereby becomes a consequence of market disequilibria—the upward and downward deviations of price from value. It is a faulty market, warped by state intervention and too-powerful entrenched interests, that is the cause of injustice. The Philosophy of Poverty details all manner of economic disaster caused by monopoly: “the interminable hours, disease, deformity, degradation, debasement, and all the signs of industrial slavery: all these calamities are born of monopoly” (290).

    As McNally’s (1993) work shows, blaming economic woes on “monopolists” and “middlemen” ran rife in popular critiques of political economy during the seventeenth and eighteenth centuries, leading many radicals to call for free trade as a solution to widespread poverty. Proudhon’s anarchism was part of this general tendency. In General Idea of the Revolution in the Nineteenth Century ([1851] 2003), he railed against “middlemen, commission dealers, promoters, capitalists, etc., who, in the old order of things, stand in the way of producer and consumer” (90). The exploiters worked by obstructing and manipulating the exchange of goods and services on the market.

    Proudhon’s particular view of economic injustice begets its own prescription for how best to remedy it. His revolutionary vision centers on ending monopolies and reforming currency, targeting two of the ways that “monopolists” intervened in the smooth functioning of the market. He remained dedicated to the belief that the ills of capitalism arose from concentrations of ownership creating unjust political power that could further distort the functioning of the market, and envisioned a market-based society where “political functions have been reduced to industrial functions, and that social order arises from nothing but transactions and exchanges” (1979, 11).

    Proudhon evinced a technological optimism that Marx would later criticize. From his petty producer standpoint, he believed technology would empower workers by overcoming the division of labor:

    Every machine may be defined as a summary of several operations, a simplification of powers, a condensation of labor, a reduction of costs. In all these respects machinery is the counterpart of division. Therefore through machinery will come a restoration of the parcellaire laborer, a decrease of toil for the workman, a fall in the price of his product, a movement in the relation of values, progress towards new discoveries, advancement of the general welfare. ([1847] 2012, 167)

    While Proudhon recognized some of the dynamics by which machinery could immiserate workers through deskilling and automating their work, he remained strongly skeptical of organized measures to ameliorate this condition. He rejected compensating the unemployed through taxation because it would “visit ostracism upon new inventions and establish communism by means of the bayonet” ([1847] 2012, 207); he also criticized employing out-of-work laborers in public works programs. Technological development should remain unregulated, leading to eventual positive outcomes: “The guarantee of our liberty lies in the progress of our torture” (209).

    Marx’s Critique of Proudhon

    Marx, after attempting to influence Proudhon, became one of his most vehement critics, attacking his rival’s arguments, both major and marginal. Marx had a very different understanding of the new industrial society of the nineteenth century. Marx ([1865] 2016) diagnosed his rival’s misrepresentations of capitalism as derived from a particular class basis. Proudhon’s theories emanated “from the standpoint and with the eyes of a French small-holding peasant (later petit bourgeois)” rather than the proletarian, who possesses nothing but labor-power, which must be exchanged for a wage from the capitalist.

    Since small producers own their own tools and depend largely on their own labor, they do not perceive any conflict between ownership of the means of production and labor: analysis from this standpoint, such as Proudhon’s, tends to collapse these categories together. Marx’s theorization of capitalism centered an emergent class of industrial proletarians, who, unlike small producers, owned nothing but their ability to sell their labor-power for a wage. Without any other means of survival, the proletarian could experience the “labor market” not as a meeting of equals arriving at a mutually beneficial exchange of commodities, but only as an abstraction from the concrete truth that working for whatever wage was offered was compulsory rather than voluntary. Further, it was this very market for labor-power that, in the guise of equal exchange of commodities, helped to obscure that capitalist profit depended on extracting value from workers beyond what their wages compensated. This surplus value emerged in the production process, not, as Proudhon argued, at a later point when the goods produced were bought and sold. Without a conception of a contradiction between ownership and labor, the petty producer standpoint cannot see exploitation occurring in production.

    Instead, Proudhon saw exploitation occurring after production, during exchanges on the market distorted by unfair monopolies held intact through state intervention, with which petty producers could not compete. However, Marx ([1867] 1992) demonstrated that “monopolies” were simply the outcome of the concentration of capital due to competition: in his memorable wording from Capital, “One capitalist always strikes down many others” (929). As producers compete and more and more producers fail and are proletarianized, capital is held in fewer and fewer hands. In other words, monopolies are a feature, not a bug, of market economies.

    Proudhon’s misplaced emphasis on villainous monopolies is part of a greater error in diagnosing the momentous changes in the nineteenth-century economy: a neglect of the centrality of massive industrial-scale production to mature capitalism. In the first volume of Capital, Marx ([1867] 1992) argues that petty production was a historical phenomenon that would give way to capitalist production: “Private property which is personally earned, i.e., which is based, as it were, on the fusing together of the isolated, independent working individual with the conditions of his labour, is supplanted by capitalist private property, which rests on exploitation of alien, but formally free labour” (928). As producers compete and more and more producers fail and are proletarianized, capital—and with it, labor—concentrates.

    However, petty production persisted alongside industrial capitalism in ways that masked how the continued existence of the former relies on the latter. Under capitalism, the commodification of labor-power through the wage relationship transforms concrete acts of labor into labor in the abstract within a system of industrial production for exchange. This abstract labor, the basis of surplus value, is for Marx the “specific social form of labour” in capitalism (Murray 2016, 124). Without understanding abstract labor, Proudhon could not perceive how capitalism functioned not simply as a means of producing profit, but as a system of structuring all labor in society.

    The importance of abstract labor to capitalism also meant that Proudhon’s plans to reform currency by pegging it to labor-time would fail. As Marx ([1847] 1973) puts it in his book-length critique of Proudhon, “in large-scale industry, Peter is not free to fix for himself the time of his labor, for Peter’s labor is nothing without the co-operation of all the Peters and all the Pauls who make up the workshop” (77). In other words, because commodities under capitalism are manufactured through a complex division of labor, with different workers exercising differing levels of labor productivity, it is impossible to apportion specific quantities of time to specific labors on individual commodities. Without an understanding of the role of abstract labor in capitalist production, Proudhon simply could not grapple with the actual mechanisms of capitalism’s structuring of labor in society, and so could not develop plans to overcome it. This overcoming could only occur through a political intervention that sought to organize production from the point of view of its socialization, not, as Proudhon believed, through reforming elements of the exchange system to preserve individual producers.

    The Roots of Digital Proudhonism

    Many of Proudhon’s arguments were revived among digital radicals and reformers during the battles over copyright precipitated by networked digital technologies during the 1990s, of which Napster is the exemplary case. The techno-optimistic belief that the Internet would provide radical democratic change in cultural production took on a highly Proudhonian cast. The internet would “empower creators” by eliminating “middlemen” and “gatekeepers” such as record labels and distributors, who were the ultimate source of exploitation, and allowing exchange to happen on a “peer-to-peer” basis. By subverting the “monopoly” granted by copyright protections, radical change would happen on the basis of increased potential for voluntary market exchange, not political or social revolution.

    Siva Vaidhyanathan’s The Anarchist in the Library (2005) is a representative example of this argument, one made with explicit appeals to anarchist philosophy. According to Vaidhyanathan, “the new [peer-to-peer] technology evades the professional gatekeepers, flattening the production and distribution pyramid…. Digitization and networking have democratized the production of music” (48). This democratization by peer-to-peer distribution threatens “oligarchic forces such as global entertainment conglomerates” even as it works to “empower artists in new ways and connect communities of fans” (102).

    The seeds of Digital Proudhonism were planted earlier than Napster, in the beliefs and practices of the Free Software movement. Threatened by intellectual property protections that signaled the corporatization of software development, the academics and amateurs of the Free Software movement developed alternative licenses that would keep software code “open,” and thus able to be shared and built upon by any interested coder. This successfully protected the autonomous and collaborative working practices of the group. The movement’s major success was the Linux operating system, collaboratively built by a distributed team of mostly volunteer programmers who created a free alternative to the proprietary systems of Microsoft and Apple.

    Linux indicated to those examining the front lines of technological development that, far from being just a software development model, Free Software could actually be an alternative mode of production, and even a harbinger of democratic revolution. The triumph of an unpaid network-based community of programmers creating a free and open product in the face of an IP-dependent monopoly like Microsoft seemed to realize one of Marx’s ([1859] 1911) technologically determinist prophecies from A Contribution to the Critique of Political Economy:

    At a certain stage of their development, the material forces of production in society come into conflict with the existing relations of production or—what is but a legal expression of the same thing—with the property relations within which they had been at work before. From forms of development of the forces of production these relations turn into their fetters. Then comes the era of social revolution. (12)

    The Free Software movement provoked a wave of political initiatives and accompanying theorizations of a new digital economy based on what Yochai Benkler (2006) called “commons-based peer production.” With networked personal computers so widely distributed, “[t]he material requirements for effective information production and communication are now owned by numbers of individuals several orders of magnitude larger than the number of owners of the basic means of information production and exchange a mere two decades ago” (4). Suddenly, and almost as if by accident, the means of production were in the hands, not of corporations or states, but of individuals: a perfect encapsulation of the petty producer economy.

    The classification of file sharing technologies such as Napster as “peer-to-peer” solidified this view. Napster’s design allowed users to exchange MP3 files by linking “peers” to one another, without storing files on Napster’s own servers. This performed two useful functions. It dispersed the server load for hosting and exchanging files among the computers and connections of Napster’s user base, alleviating what would have been massive bandwidth expenses. It also provided Napster with a defense against charges of infringement, as its own servers were not involved in copying files. This design might have offered it protection from the charges that had doomed the site MP3.com, which had hosted user files.

    While Napster’s suggestion that corporate structures for the distribution of culture could be supplanted by a voluntary federation of “peers” was important, it was ultimately a mystification. Not only did the courts find Napster liable for facilitating infringement, but the flat, “decentralized” topology of Napster still relied on the company’s central listing service to connect peers. Yet the ideological impact was profound. A law review article by Raymond Ku (2002), the then-director of the Institute of Law, Science & Technology at Seton Hall University School of Law, is illustrative of both the nature of the arguments and how widespread and respectable they became in the post-Napster era: “the argument for copyright is primarily an argument for protecting content distributors in a world in which middlemen are obsolete. Copyright is no longer needed to encourage distribution because consumers themselves build and fund the distribution channels for digital content” (263). Clay Shirky’s (2008) paeans to “the mass amateurization of efforts previously reserved for media professionals” sound a similar note (55), presenting a technologically functionalist explanation for the existence of “gatekeeper” media industries: “It used to be hard to move words, images, and sounds from creator to consumer… The commercial viability of most media businesses involves providing those solutions, so preservation of the original problems became an economic imperative. Now, though, the problems of production, reproduction, and distribution are much less serious” (59). This narrative has persisted years after the brief flourishing of Napster: “the rise of peer-to-peer distribution systems… make middlemen hard to identify, if not cutting them out of the process altogether” (Kernfeld 2011, 217).

    This situation was given an emancipatory political valence by intellectuals associated with copyright reform. Eager to protect an emerging sector of cultural production founded on sampling, remixing, and file sharing, they described the accumulation of digital information and media online as a “commons,” which could be treated differently from other forms of private property. Because digital goods are non-rival (Benkler 2006, 36), users do not deplete the common stock, and so should benefit from a laxer approach to property rights. Law professor Lawrence Lessig (2004) started an initiative, Creative Commons, dedicated to establishing new licenses that would “build a layer of reasonable copyright on top of the extremes that now reign” (282). Part of Lessig’s argument for Creative Commons classifies media production and distribution, such as making music videos or mashups, as a “form of speech.” Copyright therefore acts as unjust government regulation, and so must be resisted. “It is always a bad deal for the government to get into the business of regulating speech markets,” Lessig argues, even going so far as to raise the specter of communist authoritarianism: “It is the Soviet Union under Brezhnev” (128). Here Lessig performs a delicate rhetorical sleight of hand: by positioning cultural production as speech, he reifies a vision of such production as emanating from a solitary, individual producer who must remain unencumbered when bringing that speech to market.

    Cory Doctorow (2014), a poster child of achievement in the new peer-to-peer world (in Free Culture, Lessig boasts of Doctorow’s successful promotional strategy of giving away electronic copies of his books for free), argues from a pro-market position against middlemen in his latest book: “copyright exists to protect middlemen, retailers, and distributors from being out-negotiated by creators and their investors” (48). While the argument remains the same, some targets have shifted: “investors” are “publishers, studios, record labels” while “intermediaries” are the platforms of distribution: “a distributor, a website like YouTube, a retailer, an e-commerce site like Amazon, a cinema owner, a cable operator, a TV station or network” (27).

    While the thrust of these critiques of copyright focuses on egregious overreach by the culture industries and their assault upon all manner of benign noncommercial activity, the critiques also reveal a vision of an alternative cultural economy of independent producers who, while not necessarily anti-capitalist, can escape the clutches of massive centralized corporations through networked digital technologies. This escape promises both economic and political freedom via independence from control and regulation, along with maximum opportunities on the market. “By giving artists the tools and technologies to take charge of their own production, marketing, and distribution, digitization underscored the disequilibrium of traditional record contracts and offered what for many is a preferable alternative” (Sinnreich 2013, 124). As it so often does, the fusion of ownership and labor characteristic of the petty producer standpoint, the structure of feeling of the independent artisan, articulates itself through the mantra of “Do It Yourself.”

    These analyses and polemics reproduce the Proudhonist vision of an alternative to existing digital capitalism. Individual independent creators will achieve political autonomy and economic benefit through the embrace of digital network technologies, as long as these creators are allowed to compete fairly with incumbents. Rather than insist on collective regulation of production, Digital Proudhonism seeks forms of deregulation, such as copyright reform, that will chip away at the “monopoly” power of existing media corporations that fetters the market chances of these digital artisans.

    Digital Proudhonism Today

    Rooted in emergent digital methods of cultural production, the first wave of Digital Proudhonism shored up its petty producer standpoint through a rhetoric that centered the figure of the artist or “creator.” The contemporary term is the more expansive “the creative,” which lionizes a larger share of knowledge workers of the digital economy. As Sarah Brouillette (2009) notes, thinkers from management gurus such as Richard Florida to radical autonomist Marxist theorists such as Paolo Virno “broadly agree that over the past few decades more work has become comparable to artists’ work.” As a kind of practical consciousness, Digital Proudhonism easily spreads through the channels of the so-called “creative class,” its politics and worldview traveling under a host of other endeavors. These initiatives self-consciously seek to realize the ideals of Proudhonism in fields beyond the confines of music and film, with impact in manufacturing, social organization, and finance.

    The maker movement is one prominent translation of Digital Proudhonism into a challenge to the contemporary organization of production, with allegedly radical effects on politics and economics. With the advent of new production technologies, such as 3D printers and digital design tools, “makers” can take the democratizing promise of the digital commons into the physical world. Just as digital technology supposedly distributes the means of production of culture across a wider segment of the population, so too will it spread manufacturing blueprints, blowing apart the restrictions of patents the same way Napster tore copyright asunder. “The process of making physical stuff has started to look more like the process of making digital stuff,” claims Chris Anderson (2012), author of Makers: The New Industrial Revolution (25). This has a radical effect: a realization of the goals of socialism via the unfolding of technology and the granting of access. “If Karl Marx were here today, his jaw would be on the floor. Talk about ‘controlling the tools of production’: you (you!) can now set factories into motion with a mouse click” (26). The key to this revolution is the ability of open-source methods to lower costs, thereby fusing the roles of inventor and entrepreneur (27).

    Anderson’s “new industrial revolution” is one of a distinctly Proudhonian cast. Digital design tools are “extending manufacturing to a hugely expanded population of producers—the existing manufacturers plus a lot of regular folk who are becoming entrepreneurs” (41). The analogy to the rise of remix culture and amateur production lionized by Lessig is deliberate: “Sound familiar? It’s exactly what happened with the Web” (41). Anderson envisions the maker movement as akin to the nineteenth-century petty producers represented by Proudhon’s views: cottage industries “were closer to what a Maker-driven New Industrial Revolution might be than are the big factories we normally associate with manufacturing” (49). Anderson’s preference for the small producer over the large factory echoes Proudhon. The subject of this revolution is not the proletarian at work in the large factory, but the artisan who owns their own tools.

    A more explicitly radical perspective comes from the avowedly Proudhonist Center for a Stateless Society (C4SS), a “left market anarchist think tank and media center” deeply conversant in libertarian and so-called anarcho-capitalist economic theory. As with Anderson, C4SS subscribes to the techno-utopian potential of a new arrangement of production driven by digital technology, which promises to reduce the prices of goods, bringing them within the reach of anyone (once again, music piracy is held up as a precursor). However, this potential has not been realized because “economic ruling classes are able to enclose the increased efficiencies from new technology as a source of rents mainly through artificial scarcities, artificial property rights, and entry barriers enforced by the state” (Carson 2015a). Monopolies, enforced by the state, have “artificially” distorted free market transactions.

    These monopolies, in the form of intellectual property rights, are preventing a proper Proudhonian revolution in which everyone would control their own individual production process. “The main source of continued corporate control of the production process is all those artificial property rights such as patents, trademarks, and business licenses, that give corporations a monopoly on the conditions under which the new technologies can be used” (Carson 2015a). However, once these artificial monopolies are removed, corporations will lose their power and we can have a world of “small neighborhood cooperative shops manufacturing for local barter-exchange networks in return for the output of other shops, of home microbakeries and microbreweries, surplus garden produce, babysitting and barbering, and the like” (Carson 2015a).

    This revolution is a quiet one, requiring no strikes or other confrontations with capitalists. Instead, the answer is to create this new economy within the larger one, and hollow it out from the inside:

    Seizing an old-style factory and holding it against the forces of the capitalist state is a lot harder than producing knockoffs in a garage factory serving the members of a neighborhood credit-clearing network, or manufacturing open-source spare parts to keep appliances running. As the scale of production shifts from dozens of giant factories owned by three or four manufacturing firms, to hundreds of thousands of independent neighborhood garage factories, patent law will become unenforceable. (Carson 2015b)

    As Marx pointed out long ago, such petty producer fantasies of individually owned and operated manufacturing ironically rely upon the massive amounts of surplus generated from proletarians working in large-scale factories. The devices and infrastructures of the internet itself, as described by Nick Dyer-Witheford (2015) in his appropriately titled Cyber-Proletariat, are an obvious example. But proletarian labor also appears in the Digital Proudhonists’ own utopian fantasies. Anderson, illustrating the change in innovation wrought by the internet, describes how his grandfather’s invention of a sprinkler system would have gone differently today: “When it came time to make more than a handful of his designs, he wouldn’t have begged some manufacturer to license his ideas, he would have done it himself. He would have uploaded his design files to companies that could make anything from tens to tens of thousands for him, even drop-shipping them directly to customers” (15). These “companies,” of course, are staffed by workers very different from “makers,” who work in facilities of mass production. Their labor is obscured by an influential ideology of artisans who believe themselves reliant on nothing but a personal computer and their own creativity.

    A recent Guardian column by Paul Mason, the anti-capitalist journalist and author of the techno-optimistic Postcapitalism, serves as a further example. Mason (2016) argues, similarly to the C4SS, that intellectual property is the glue holding together massive corporations, and the key to their power over production. Simply by giving up on patents, as recommended by Anderson, Proudhonists will outflank capitalism on the market. His example is the “revolutionary” business model of the craft brewery chain BrewDog, which “open-sourced its recipe collection” by releasing the information publicly, unlike its larger corporate competitors. For Mason, this is an astonishing act of economic democracy: armed with BrewDog’s recipes, “All you would need to convert them from homebrew approximations to the actual stuff is a factory, a skilled workforce, some raw materials and a sheaf of legal certifications.” In other words, all that is needed to achieve postcapitalism is capitalism precisely as Marx described it.

    The pirate fantasies of subverting monopolies extend beyond the initiatives of makers. The Digital Proudhonist belief in revolutionary change rooted in individual control of production and exchange on markets liberated from incumbents such as corporations and the state drives much of the innovation on the margins of tech. A recent treatise on the digital currency Bitcoin lauds Napster’s ability to “cut out the middlemen,” likening the currency to the file sharing technology (Kelly 2014, 11). “It is a quantum leap in the peer-to-peer network phenomenon. Bitcoin is to value transfer what Napster was to music” (33). Much like the advocates of digital currencies, Proudhon believed that state control of money was an unfair manipulation of the market, and sought to develop alternative currencies and banks rooted in labor-time, a belief that Marx criticized for its misunderstanding of the role of abstract labor in production.

    In this way, Proudhon and his beliefs fit naturally into the dominant ideologies surrounding Bitcoin and other cryptocurrencies: that economic problems stem from the conspiratorial manipulation of “fiat” currency by national governments and financial organizations such as the Federal Reserve. In light of recent analyses suggesting that Bitcoin functions less as a means of exchange than as a sociotechnical formation to which an array of faulty right-wing beliefs about economics adheres (Golumbia 2016), and the revelation that contemporary fascist groups rely on Bitcoin and other cryptocurrencies to fund their activities (Ebner 2018), it is clear that Digital Proudhonism exists comfortably beside the most reactionary ideologies. Historically, this was true of Proudhon’s own work as well. As Zeev Sternhell (1996) describes, the members of the early twentieth-century French political organization the Cercle Proudhon were captivated by Proudhon’s opposition to Marxism, his distaste for democracy, and his anti-Semitism. According to Sternhell, the group was an influential source of French proto-fascist thought.

    Alternatives

    The goal of this paper is not to question the creativity of remix culture or the maker movement, to indict their potentials for artistic expression, or to negate all their criticisms of intellectual property. What I wish to criticize are the outsized economic and political claims made on their behalf. These claims have an impact on policy, such as Obama’s “Nation of Makers” initiative (The White House Office of the Press Secretary 2016), which draws upon numerous federal agencies, hundreds of schools, and educational product companies to spark “a renaissance of American manufacturing and hardware innovation.” Further, like Marx, I think not only that Proudhonism rests on incorrect analyses of cultural labor, but also that such ideas lead to bad politics. As Astra Taylor (2014) extensively documents in The People’s Platform, for all the exclamations of new opportunities with the end of middlemen and gatekeepers, the creative economy is as difficult as it ever was for artists to navigate; she notes that writers like Lessig have replaced the critique of the commodification of culture with arguments about state and corporate control (26-7). Meanwhile, many of the fruits of this disintermediation have been plucked by an exploitative “sharing economy” whose platforms use “peer-to-peer” to subvert all manner of regulations; at least one commentator has invoked Napster’s storied ability to “cut out the middlemen” to describe AirBnB and Uber (Karabel 2014).

    Digital Proudhonism and its vision of federations of independent individual producers and creators (perhaps now augmented with the latest cryptographic tools) dominates the imagination of a radical challenge to digital capitalism. Its critiques of the corporate internet have become common sense. What kind of alternative radical vision is possible? Here I believe it is useful to return to the core of Marx’s critique of Proudhon.

    Marx saw that in the unromantic labor of proletarians, which combines varying levels of individual productivity within the factory through machines that are themselves the product of social labor, capitalism’s dynamics create a historically novel form of production—social production—along with new forms of culture and social relations. For Marx ([1867] 1992), this was potentially the basis for an economy beyond capitalism. To attempt to move “back” to individual production was reactionary: “As soon as the workers are turned into proletarians, and their means of labour into capital, as soon as the capitalist mode of production stands on its own feet, then the further socialization of labour and further transformation of the soil and other means of production into socially exploited and, therefore, communal means of production takes on a new form” (928).

    The socialization of production under the development of the means of production—the necessity of greater collaboration and the reliance on past labors in the form of machines—gives rise to a radical redefinition of the relationship to one’s output. No one can claim a product was made by them alone; rather, production demands to be recognized as social. Describing the socialization of labor through industrialization in Socialism: Utopian and Scientific, Engels ([1880] 2008) states, “The yarn, the cloth, the metal articles that now came out of the factory were the joint product of many workers, through whose hands they had successively to pass before they were ready. No one person could say of them: ‘I made that; this is my product’” (56). To put it in the language of cultural production, there can be no author. Or, in another implicit recognition that the work of today relies on the work of many others, past and present: everything is a remix.

    Or instead of a remix, a “vortex,” to use the language of Nick Dyer-Witheford (2015), whose Cyber-Proletariat reminds us that the often-romanticized labor of digital creators and makers is but one stratum among the many that make up digital culture. The creative economy is a relatively privileged sector in an immense global “factory” made up of layers of formal and informal workers operating at the point of production, distribution and consumption, from tantalum mining to device manufacture to call center work to app development. The romance of “DIY” obscures the reality that nothing digital is done by oneself: it is always already a component of a larger formation of socialized labor.

    The labor of digital creatives and innovators, sutured as it is to a technical apparatus fashioned from dead labor and meant for producing commodities for profit, is therefore already socialized. While some of this socialization is apparent in peer production, much of it is mystified through the real abstraction of commodity fetishism, which masks socialization under wage relations and contracts. Rather than further rely on these contracts to better benefit digital artisans, a Marxist politics of digital culture would begin from the fact of socialization, and as Radhika Desai (2011) argues, take seriously Marx’s call for “a general organization of labour in society” via political organizations such as unions and labor parties (212). Creative workers could align with others in the production chain as a class of laborers rather than as an assortment of individual producers, and form the kinds of organizations, such as unions, that have been the vehicles of class politics, with the aim of controlling society’s means of production, not simply one’s “own” tools or products. These would be bonds of solidarity, not bonds of market transactions. Then the apparatus of digital cultural production might be controlled democratically, rather than by the despotism of markets and private profit.

    _____

    Gavin Mueller holds a PhD in Cultural Studies from George Mason University. He teaches in the New Media and Digital Culture program at the University of Amsterdam.

    Back to the essay

    _____

    Notes

    [1] The Pirate Bay, the largest and most antagonistic site of the peer-to-peer movement, has founders who identified as libertarian, socialist, and apolitical, respectively, and acquired funding from Carl Lundström, an entrepreneur associated with far-right movements (Schwartz 2014, 142).

    _____

    Works Cited

    • Anderson, Chris. 2012. Makers: The New Industrial Revolution. New York: Crown Business.
    • Barbrook, Richard and Andy Cameron. 1996. “The Californian Ideology.” Science as Culture 6:1. 44-72.
    • Barlow, John Perry. 1996. “A Declaration of the Independence of Cyberspace.” Electronic Frontier Foundation.
    • Benkler, Yochai. 2006. The Wealth of Networks. New Haven, CT: Yale University Press.
    • Boyle, James. 2008. Public Domain: Enclosing the Commons of the Mind. New Haven, CT: Yale University Press.
    • Brouillette, Sarah. 2009. “Creative Labor.” Mediations: Journal of the Marxist Literary Group 24:2. 140-149.
    • Carson, Kevin. 2015a. “Nothing to Fear from New Technologies if the Market is Free.” Center for a Stateless Society.
    • Carson, Kevin. 2015b. “Paul Mason and His Critics (Such As They Are).” Center for a Stateless Society.
    • Desai, Radhika. 2011. “The New Communists of the Commons: Twenty-First-Century Proudhonists.” International Critical Thought 1:2. 204-223.
    • Doctorow, Cory. 2014. Information Doesn’t Want to Be Free: Laws for the Internet Age. San Francisco: McSweeney’s.
    • Dyer-Witheford, Nick. 2015. Cyber-proletariat: Global Labour in the Digital Vortex. London: Pluto Press.
    • Ebner, Julia. 2018. “The Currency of the Far-Right: Why Neo-Nazis Love Bitcoin.” The Guardian (Jan 24).
    • Engels, Friedrich. (1880) 2008. Socialism: Utopian and Scientific. Translated by Edward Aveling. New York: Cosimo Books.
    • Golumbia, David. 2016. The Politics of Bitcoin: Software as Right-Wing Extremism. Minneapolis: University of Minnesota Press.
    • Karabel, Zachary. 2014. “Requiem for the Middleman.” Slate (Apr 25).
    • Kelly, Brian. 2014. The Bitcoin Big Bang: How Alternative Currencies Are About to Change the World. Hoboken: Wiley.
    • Kernfeld, Barry. 2011. Pop Song Piracy: Disobedient Music Distribution Since 1929. Chicago: University of Chicago Press.
    • Ku, Raymond Shih Ray. 2002. “The Creative Destruction of Copyright: Napster and the New Economics of Digital Technology.” The University of Chicago Law Review 69, no. 1: 263-324.
    • Lessig, Lawrence. 2004. Free Culture: The Nature and Future of Creativity. New York: Penguin Books.
    • Lessig, Lawrence. 2008. Remix: Making Art and Commerce Thrive in the New Economy. New York: Penguin.
    • Marx, Karl. (1847) 1973. The Poverty of Philosophy. New York: International Publishers.
    • Marx, Karl. (1859) 1911. A Contribution to the Critique of Political Economy. Translated by N.I. Stone. Chicago: Charles H. Kerr and Co.
    • Marx, Karl. (1865) 2016. “On Proudhon.” Marxists Internet Archive.
    • Marx, Karl. (1867) 1992. Capital: A Critique of Political Economy, Volume 1. Translated by Ben Fowkes. London: Penguin Books.
    • Mason, Paul. 2016. “BrewDog’s Open-Source Revolution is at the Vanguard of Postcapitalism.” The Guardian (Feb 29).
    • McLeod, Kembrew. 2005. Freedom of Expression: Overzealous Copyright Bozos and Other Enemies of Creativity. New York: Doubleday.
    • McNally, David. 1993. Against the Market: Political Economy, Market Socialism and the Marxist Critique. London: Verso.
    • Murray, Patrick. 2016. The Mismeasure of Wealth: Essays on Marx and Social Form. Leiden: Brill.
    • The White House Office of the Press Secretary. 2016. “New Commitments in Support of the President’s Nation of Makers Initiative to Kick Off 2016 National Week of Making.” June 17.
    • Proudhon, Pierre-Joseph. (1840) 2011. “What is Property?” In Property is Theft! A Pierre-Joseph Proudhon Reader, edited by Iain McKay. Translated by Benjamin R. Tucker. Edinburgh: AK Press.
    • Proudhon, Pierre-Joseph. (1847) 2012. The Philosophy of Poverty: The System of Economic Contradictions. Translated by Benjamin R. Tucker. Floating Press.
    • Proudhon, Pierre-Joseph. (1851) 2003. General Idea of the Revolution in the Nineteenth Century. Translated by John Beverly Robinson. Mineola, NY: Dover Publications, Inc.
    • Proudhon, Pierre-Joseph. (1863) 1979. The Principle of Federation. Translated by Richard Jordan. Toronto: University of Toronto Press.
    • Schwartz, Jonas Andersson. 2014. Online File Sharing: Innovations in Media Consumption. New York: Routledge.
    • Sinnreich, Aram. 2013. The Piracy Crusade: How the Music Industry’s War on Sharing Destroys Markets and Erodes Civil Liberties. Amherst, MA: University of Massachusetts Press.
    • Shirky, Clay. 2008. Here Comes Everybody: The Power of Organizing Without Organizations. New York: Penguin.
    • Sternhell, Zeev. 1996. Neither Right Nor Left: Fascist Ideology in France. Princeton, NJ: Princeton University Press.
    • Taylor, Astra. 2014. The People’s Platform: Taking Back Power and Culture in a Digital Age. New York: Metropolitan Books.
    • Vaidhyanathan, Siva. 2005. The Anarchist in the Library: How the Clash Between Freedom and Control is Hacking the Real World and Crashing the System. New York: Basic Books.
    • Williams, Raymond. 1977. Marxism and Literature. Oxford: Oxford University Press.
    • Winner, Langdon. 1997. “Cyberlibertarian Myths and The Prospects For Community.” Computers and Society 27:3. 14-19.

     

  • Jürgen Geuter — Liberty, an iPhone, and the Refusal to Think Politically

    Jürgen Geuter — Liberty, an iPhone, and the Refusal to Think Politically

    By Jürgen Geuter
    ~

    The relationship of government and governed has always been complicated. Questions of power, legitimacy, structural and institutional violence, of rights and rules and restrictions keep evading any ultimate solution, chaining societies to constant struggles over shifting balances between different positions and extremes, or to the search for entirely new perspectives that might shake off the often-perceived stalemate. Politics.

    Politics is a simple word but one with a lot of history. Coming from the ancient Greek term for “city” (as in city-state), the word pretty much shows what it is about: establishing the structures that a community can thrive on. Policy is infrastructure. Not made of wire or asphalt but of ideas and ways of connecting them, while giving the structure ways of enforcing its own integrity.

    But while the processes of negotiation and discourse that define politics will never stop while intelligent beings exist, recent years have seen the emergence of technology as a replacement for politics. From Lawrence Lessig’s “Code is Law” to Marc Andreessen’s “Software Is Eating the World”: a small elite of people building the tools and technologies that we use to run our lives have in a way started emancipating themselves from politics as an idea. Because where politics – especially in democratic societies – involves potentially more people than just a small elite, technologism and its high priests pull off a fascinating trick: defining policy and politics while claiming not to be political.

    This is useful for a bunch of reasons. It allows these elites to effectively sidestep certain existing institutions and structures, avoiding friction and loss of forward momentum. “Move fast and break things” was Facebook’s internal motto until only very recently. It also makes it easy to shed certain responsibilities that we expect political entities of power to fulfill. Claiming “not to be political” allows you to have mobs of people hunting others on your service without really having to do anything about it until it becomes a PR problem. Finally, evading the label of politics grants a lot more freedom when it comes to wielding the powers that political structures have given you: it’s no coincidence that many Internet platforms declare “free speech” a fundamental and absolute right, a necessary truth of the universe, unless it’s about showing a woman breastfeeding or talking about the abuse free-speech extremists have thrown at feminists.

    Yesterday, news about a very interesting case directly at the contact point of politics and technologism hit mainstream media: Apple refused – in a big and well-written open letter to its customers – to fulfill an order by the District Court of California to help the FBI unlock an iPhone 5c that belonged to one of the shooters in last year’s San Bernardino shooting, in which 14 people were killed and 22 more were injured.

    Apple’s argument is simple and ticks all the boxes of established technical truths about cryptography: Apple’s CEO Tim Cook points out that adding a back door to its iPhones would endanger all of Apple’s customers because nobody can make sure that such a back door would only be used by law enforcement. Some hacker could find that hole and use it to steal information such as pictures, credit card details or personal data from people’s iPhones or make these little pocket computers do illegal things. The dangers Apple correctly outlines are immense. The beautifully crafted letter ends with the following statements:

    Opposing this order is not something we take lightly. We feel we must speak up in the face of what we see as an overreach by the U.S. government.

    We are challenging the FBI’s demands with the deepest respect for American democracy and a love of our country. We believe it would be in the best interest of everyone to step back and consider the implications.

    While we believe the FBI’s intentions are good, it would be wrong for the government to force us to build a backdoor into our products. And ultimately, we fear that this demand would undermine the very freedoms and liberty our government is meant to protect.

    Nothing in that defense is new: the debate about government backdoors has been going on for decades, with companies, software makers and government officials exchanging basically the same bullet points every few years. Government: “We need access. For security.” Software people: “Yeah, but then nobody’s system is secure anymore.” Rinse and repeat. That whole debate hasn’t even changed through Edward Snowden’s leaks: while the positions were presented in an increasingly shrill tone, the positions themselves stayed monolithic and unmoved. Two immovable objects yelling at each other to get out of the way.

    Apple’s open letter was received with high praise all through the tech-savvy elites, from the cypherpunks to journalists and technologists. One tweet really stood out for me because it illustrates a lot of what we have so far talked about:

    Read that again. Tim Cook/Apple are clearly separated from politics and politicians when it comes to – and here’s the kicker – the political concept of individual liberty. A deeply political debate, the one about where the limits of individual liberty might be is ripped out of the realm of politicians (and us, but we’ll come to that later). Sing the praises of the new Guardian of the Digital Universe.

    But is the court order really exactly the fundamental danger for everybody’s individual liberty that Apple presents? The actual text paints a different picture. The court orders Apple to help the FBI access one specific, identified iPhone. The court order lists the actual serial number of the device. What “help” means in this context is also specified in great detail:

    1. Apple is supposed to disable features of the iPhone automatically deleting all user data stored on the device which are usually in place to prevent device thieves from accessing the data the owners of the device stored on it.
    2. Apple will also give the FBI some way to send passcodes (guesses of the PIN that was used to lock the phone) to the device. This sounds strange but will make sense later.
    3. Apple will disable all software features that introduce delays for entering more passcodes. You know the drill: You type the wrong passcode and the device just waits for a few seconds before you can try a new one.

    Apple is compelled to write a little piece of software that runs only on the specified iPhone (the text is very clear on that) and that disables the two security features explained in points 1 and 3. Because the court actually recognizes the dangers of having that kind of software in the wild, it explicitly allows Apple to do all of this within its own facilities: the phone would be sent to an Apple facility and the software loaded into the RAM of the device. This is where point 2 comes in: when the device has been modified by loading the Apple-signed software into its RAM, the FBI needs a way to send PIN code guesses to the device. The court order even explicitly states that Apple’s new software package is only supposed to go to RAM and not change the device in other ways. Potentially dangerous software would never leave Apple’s premises, Apple doesn’t have to introduce or weaken the security of all its devices, and if Apple can fulfill the tasks described in some other way, the court is totally fine with it. The government, any government, doesn’t get a generic backdoor to all iPhones or all Apple products. In a more technical article than this one, Dan Guido outlines that what the court order asks for would work on the iPhone in question but not on most newer ones.
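    Taken together, points 1 and 3 explain why the FBI wants point 2: once the auto-erase and retry-delay protections are off, a four-digit passcode falls to plain exhaustive search in at most 10,000 guesses. A minimal sketch of that arithmetic follows; the unlock "oracle" here is entirely hypothetical, since the real passcode-submission interface was never made public:

    ```python
    # Hypothetical illustration of why the disabled protections matter.
    # With no escalating delays and no ten-try wipe, a 4-digit passcode
    # can be found by trying every combination from 0000 to 9999.

    from itertools import product

    def brute_force_pin(try_passcode):
        """Try every 4-digit passcode against a device-unlock oracle."""
        for digits in product("0123456789", repeat=4):
            guess = "".join(digits)
            if try_passcode(guess):  # stand-in for the electronic submission channel
                return guess
        return None  # no 4-digit passcode unlocked the device

    # Demo with a made-up secret; a real attempt would submit guesses
    # to the modified device instead of comparing strings.
    SECRET = "7391"
    unlocked = brute_force_pin(lambda guess: guess == SECRET)
    print(unlocked)  # → 7391
    ```

    Even at a conservative one guess per second, the whole keyspace takes under three hours, which is why the delay and wipe features, not the PIN itself, are the real security boundary the order asks Apple to remove.
    
    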

    So while Apple’s PR evokes the threat of big government’s boots marching on to step on everybody’s individual freedoms, the text of the court order and the technical facts make the case ultra-specific: Apple isn’t supposed to build a back door for iPhones but to help law enforcement open up one specific phone in its possession, connected not to a theoretical future crime but to the actual murder of 14 people.

    We could just attribute it all to Apple effectively taking a PR opportunity to strengthen the image it has been developing after realizing that they just couldn’t really do data and services, the image of the protector of privacy and liberty. An image that they kicked into overdrive post-Snowden. But that would be too simple because the questions here are a lot more fundamental.

    How do we – as globally networked individuals living in digitally connected and mutually overlaying societies – define the relationship of transnational corporations and the rules and laws we created?

    Cause here’s the fact: Apple was ordered by a democratically legitimate court to help in the investigation of a horrible, capital crime leading to the murder of 14 people by giving it a way to potentially access one specific phone of the more than 700 million phones Apple has made. And Apple refuses.

    Which – don’t get me wrong – is their right as an entity in the political system of the US: they can fight the court order using the law. They can also just refuse and see what the government, what law enforcement, will do to make them comply. Sometimes the costs of breaking that kind of resistance overshadow the potential value, so the request gets dropped. But where do we as individuals stand, whose liberty is supposedly at stake? Where is our voice?

    One of the main functions of political systems is generating legitimacy for power. While some less-than-desirable systems might generate legitimacy by being the strongest, in modern times less physical legitimizations of power were established: a king for example often is supposed to rule because one or more god(s) say so. Which generates legitimacy especially if you share the same belief. In democracies legitimacy is generated by elections or votes: by giving people the right to speak their mind, elect representatives and be elected the power (and structural violence) that a government exerts is supposedly legitimized.

    Some people dispute the legitimacy of even democratically distributed power, and it’s not like they have no point, but let’s not dive into the teachings of anarchism here. The more mainstream position is that there is a rule of law and that the institutions of the United States as a democracy are legitimized as the representation of US citizens. They represent every US citizen; they are each supposed to keep the political structure, the laws and rules and rights that come with being a US citizen (or living there), intact. And when that system speaks to a company it’s supposed to govern, and the company just gives it the finger (but in a really nice letter), how does the public react? They celebrate.

    But what’s to celebrate? This is not some clandestine spy network gathering everybody’s every waking move to calculate who might commit a crime in 10 years and assassinate them. This is a concrete case, a request confirmed by a court in complete accordance with the existing practices in many other domains. If somebody runs around and kills people, the police can look into their mail, enter their home. That doesn’t abolish the protections of the integrity of your mail or home but it’s an attempt to balance the rights and liberties of the individual as well as the rights and needs of all others and the social system they form.

    Rights hardly ever are absolute, some might even argue that no right whatsoever is absolute: you have the right to move around freely. But I can still lock you out of my home and given certain crimes you might be locked up in prison. You have the right to express yourself but when you start threatening others, limits kick in. This balancing act that I also started this essay with has been going on publicly for ages and it will go on for a lot longer. Because the world changes. New needs might emerge, technology might create whole new domains of life that force us to rethink how we interact and which restrictions we apply. But that’s nothing that one company just decides.

    In unconditionally celebrating Cook’s letter a dangerous “apolitical” understanding of politics shows its ugly face: An ideology so obsessed with individual liberty that it happily embraces its new unelected overlords. Code is Law? More like “Cook is Law”.

    This isn’t saying that Apple (or any other company in that situation) just has to automatically do everything a government tells them to. It’s quite obvious that many of the big tech companies are not happy about the idea of establishing precedent in helping government authorities. Today it’s the FBI but what if some agency from some dictatorship wants the data from some dissident’s phone? Is a company just supposed to pick and choose?

    The world might not grow closer together, but it gets connected a lot more, and that leads to collisions between inconsistent laws, regulations, political ideologies and so on. And so far we as mankind have no idea how to deal with it. Facebook gets criticized in Europe for applying very puritanical standards when it comes to nudity, but it does, as a US company, follow established US traditions. Should it apply German traditions, which are a lot more open when it comes to depictions of nudity, as well? What about the rules of other countries? Does Facebook need to follow all of them? Some? If so, which ones?

    While this creates tough problems for international lawmakers, governments and us more mortal people, it concerns companies very little: when push comes to shove, they can just move their base of operations somewhere else. Which they already do to “optimize” (that is, avoid) taxes; Cook recently dismissed US government requirements on this front as “total political crap.” Is that also a cause for all of us across the political spectrum to celebrate Apple’s protection of individual liberty? I wonder how the open letter would have looked if Ireland, a tax haven many technology companies love to use, had asked for the same thing California did.

    This is not specifically about Apple. Or Facebook. Or Google. Or Volkswagen. Or Nestle. This is about all of them and all of us. If we uncritically accept that transnational corporations decide when and how to follow the rules we as societies established, just because right now their (PR) interests and ours might superficially align, how can we later criticize when the same companies don’t pay taxes or decide not to follow data protection laws? Especially as a kind of global digital society (albeit one of a very small elite), we have, between cat GIFs and shaking our fists at all the evil that governments do (and there’s lots of it), dropped the ball on forming reasonable and consistent models for integrating all our different, inconsistent rules and laws, and for gaining any sort of politically legitimized control over corporations, governments and other entities of power.

    Tim Cook’s letter starts with the following words:

    This moment calls for public discussion, and we want our customers and people around the country to understand what is at stake.

    On that he and I completely agree.


    _____

    Jürgen Geuter (@tante) is a political computer scientist living in Germany. For about 10 years he has been speaking and writing about technology, digitalization, digital culture and the way these influence mainstream society. His writing has been featured in Der Spiegel, Wired Germany and other publications as well as his own blog Nodes in a Social Network, on which an earlier version of this post first appeared.

    Back to the essay

  • The Reticular Fallacy

    The Reticular Fallacy

    By Alexander R. Galloway
    ~

    We live in an age of heterogeneous anarchism. Contingency is king. Fluidity and flux win over solidity and stasis. Becoming has replaced being. Rhizomes are better than trees. To be political today, one must laud horizontality. Anti-essentialism and anti-foundationalism are the order of the day. Call it “vulgar ’68-ism.” The principles of social upheaval, so associated with the new social movements in and around 1968, have succeeded in becoming the very bedrock of society at the new millennium.

    But there’s a flaw in this narrative, or at least a part of the story that strategically remains untold. The “reticular fallacy” can be broken down into two key assumptions. The first is an assumption about the nature of sovereignty and power. The second is an assumption about history and historical change. Consider them both in turn.

    (1) First, under the reticular fallacy, sovereignty and power are defined in terms of verticality, centralization, essence, foundation, or rigid creeds of whatever kind (viz. dogma, be it sacred or secular). Thus the sovereign is the one who is centralized, who stands at the top of a vertical order of command, who rests on an essentialist ideology in order to retain command, who asserts, dogmatically, unchangeable facts about his own essence and the essence of nature. This is the model of kings and queens, but also egos and individuals. It is what Barthes means by author in his influential essay “Death of the Author,” or Foucault in his “What is an Author?” This is the model of the Prince, so often invoked in political theory, or the Father invoked in psycho-analytic theory. In Derrida, the model appears as logos, that is, the special way or order of word, speech, and reason. Likewise, arkhe: a term that means both beginning and command. The arkhe is the thing that begins, and in so doing issues an order or command to guide whatever issues from such a beginning. Or as Rancière so succinctly put it in his Hatred of Democracy, the arkhe is both “commandment and commencement.” These are some of the many aspects of sovereignty and power as defined in terms of verticality, centralization, essence, and foundation.

    (2) The second assumption of the reticular fallacy is that, given the elimination of such dogmatic verticality, there will follow an elimination of sovereignty as such. In other words, if the aforementioned sovereign power should crumble or fall, for whatever reason, the very nature of command and organization will also vanish. Under this second assumption, the structure of sovereignty and the structure of organization become coterminous, superimposed in such a way that the shape of organization assumes the identical shape of sovereignty. Sovereign power is vertical, hence organization is vertical; sovereign power is centralized, hence organization is centralized; sovereign power is essentialist, hence organization, and so on. Here we see the claims of, let’s call it, “naïve” anarchism (the non-arkhe, or non-foundation), which assumes that repressive force lies in the hands of the bosses, the rulers, or the hierarchy per se, and thus after the elimination of such hierarchy, life will revert to a more direct form of social interaction. (I say this not to smear anarchism in general, and will often wish to defend a form of anarcho-syndicalism.) At the same time, consider the case of bourgeois liberalism, which asserts the rule of law and constitutional right as a way to mitigate the excesses of both royal fiat and popular caprice.

    reticular connective tissue
    source: imgkid.com

    We name this the “reticular” fallacy because, during the late twentieth century and accelerating at the turn of the millennium with new media technologies, the chief agent driving the kind of historical change described in the above two assumptions was the network or rhizome, the structure of horizontal distribution described so well by Deleuze and Guattari. The change is evident in many different corners of society and culture. Consider mass media: the uni-directional broadcast media of the 1920s or ’30s gradually gave way to multi-directional distributed media of the 1990s. Or consider the mode of production, and the shift from a Fordist model rooted in massification, centralization, and standardization, to a post-Fordist model reliant more on horizontality, distribution, and heterogeneous customization. Consider even the changes in theories of the subject, shifting as they have from a more essentialist model of the integral ego, however fraught by the volatility of the unconscious, to an anti-essentialist model of the distributed subject, be it postmodernism’s “schizophrenic” subject or the kind of networked brain described by today’s most advanced medical researchers.

    Why is this a fallacy? What is wrong about the above scenario? The problem isn’t so much with the historical narrative. The problem lies in an unwillingness to derive an alternative form of sovereignty appropriate for the new rhizomatic societies. Opponents of the reticular fallacy claim, in other words, that horizontality, distributed networks, anti-essentialism, etc., have their own forms of organization and control, and indeed should be analyzed accordingly. In the past I’ve used the concept of “protocol” to describe such a scenario as it exists in digital media infrastructure. Others have used different concepts to describe it in different contexts. On the whole, though, opponents of the reticular fallacy have not effectively made their case, myself included. The notion that rhizomatic structures are corrosive of power and sovereignty is still the dominant narrative today, evident across both popular and academic discourses. From talk of the “Twitter revolution” during the Arab Spring, to the ideologies of “disruption” and “flexibility” common in corporate management speak, to the putative egalitarianism of blog-based journalism, to the growing popularity of the Deleuzian and Latourian schools in philosophy and theory: all of these reveal the contemporary assumption that networks are somehow different from sovereignty, organization, and control.

    To summarize, the reticular fallacy refers to the following argument: since power and organization are defined in terms of verticality, centralization, essence, and foundation, the elimination of such things will prompt a general mollification if not elimination of power and organization as such. Such an argument is false because it doesn’t take into account the fact that power and organization may inhabit any number of structural forms. Centralized verticality is only one form of organization. The distributed network is simply a different form of organization, one with its own special brand of management and control.

    Consider the kind of methods and concepts still popular in critical theory today: contingency, heterogeneity, anti-essentialism, anti-foundationalism, anarchism, chaos, plasticity, flux, fluidity, horizontality, flexibility. Such concepts are often praised and deployed in theories of the subject, analyses of society and culture, even descriptions of ontology and metaphysics. The reticular fallacy does not invalidate such concepts. But it does put them in question. We cannot assume that such concepts are merely descriptive or neutrally empirical. Given the way in which horizontality, flexibility, and contingency are sewn into the mode of production, such “descriptive” claims are at best mirrors of the economic infrastructure and at worst ideologically suspect. At the same time, we cannot simply assume that such concepts are, by nature, politically or ethically desirable in themselves. Rather, we ought to reverse the line of inquiry. The many qualities of rhizomatic systems should be understood not as the pure and innocent laws of a newer and more just society, but as the basic tendencies and conventional rules of protocological control.


    _____

    Alexander R. Galloway is a writer and computer programer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2006), Gaming: Essays in Algorithmic Culture (University of Minnesota, 2006); The Interface Effect (Polity, 2012), and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here earlier in 2014. Galloway has recently been writing brief notes on media and digital culture and theory at his blog, on which this post first appeared.

    Back to the essay