b2o: boundary 2 online

    Jonathan Beller — The Computational Unconscious

    Jonathan Beller

    God made the sun so that animals could learn arithmetic – without the succession of days and nights, one supposes, we should not have thought of numbers. The sight of day and night, months and years, has created knowledge of number, and given us the conception of time, and hence came philosophy. This is the greatest boon we owe to sight.
    – Plato, Timaeus

    The term “computational capital” understands the rise of capitalism as the first digital culture with universalizing aspirations and capabilities, and recognizes contemporary culture, bound as it is to electronic digital computing, as something like Digital Culture 2.0. Rather than seeing this shift from Digital Culture 1.0 to Digital Culture 2.0 strictly as a break, we might consider it as one result of an overall intensification in the practices of quantification. Capitalism, says Nick Dyer-Witheford (2012), was already a digital computer, and shifts in the quantity of quantities lead to shifts in qualities. If capitalism was a digital computer from the get-go, then “the invisible hand”—as the non-subjective, social summation of the individualized practices of the pursuit of private (quantitative) gain thought to result in (often unknown and unintended) public good within capitalism—is an early, if incomplete, expression of the computational unconscious. With the broadening and deepening of the imperative toward quantification and rational calculus posited then presupposed during the early modern period by the expansionist program of Capital, the process of the assignation of a number to all qualitative variables—that is, the thinking in numbers (discernible in the commodity-form itself, whereby every use-value was also encoded as an exchange-value)—entered into our machines and our minds. This penetration of the digital, rendering early on the brutal and precise calculus of the dimensions of cargo-holds in slave ships and the sparse economic accounts of ship ledgers of the Middle Passage, double entry bookkeeping, the rationalization of production and wages in the assembly line, and more recently, cameras and modern computing, leaves no stone unturned. Today, as could be well known from everyday observation if not necessarily from media theory, computational calculus arguably underpins nearly all productive activity and, particularly significant for this argument, those activities that together constitute the command-control apparatus of the world system and which stretch from writing to image-making and, therefore, to thought.[1] The contention here is not simply that capitalism is on a continuum with modern computation, but rather that computation, though characteristic of certain forms of thought, is also the unthought of modern thought. The content-indifferent calculus of computational capital ordains the material-symbolic and the psycho-social even in the absence of a conscious, subjective awareness of its operations. As the domain of the unthought that organizes thought, the computational unconscious is structured like a language, a computer language that is also and inexorably an economic calculus.

    The computational unconscious allows us to propose that much contemporary consciousness (aka “virtuosity” in post-Fordist parlance) is a computational effect—in short, a form of artificial intelligence. A large part of what “we” are has been conscripted, as thought and other allied metabolic processes are functionalized in the service of the iron clad movements of code. While “iron clad” is now a metaphor and “code” is less the factory code and more computer code, understanding that the logic of industrial machinery and the bureaucratic structures of the corporation and the state have been abstracted and absorbed by discrete state machines to the point where in some quarters “code is law” will allow us to pursue the surprising corollary that all the structural inequalities endemic to capitalist production—categories that often appear under variants of the analog signs of race, class, gender, sexuality, nation, etc.—are also deposited and thus operationally disappeared into our machines.

    Put simply, and, in deference to contemporary attention spans, too soon, our machines are racial formations. They are also technologies of gender and sexuality.[2] Computational capital is thus also racial capitalism, the longue durée digitization of racialization and, not in any way incidentally, of regimes of gender and sexuality. In other words inequality and structural violence inherent in capitalism also inhere in the logistics of computation and consequently in the real-time organization of semiosis, which is to say, our practices and our thought. The servility of consciousness, remunerated or not, aware of its underlying operating system or not, is organized in relation not just to sociality understood as interpersonal interaction, but to digital logics of capitalization and machine-technics. For this reason, the political analysis of postmodern and, indeed, posthuman inequality must examine the materiality of the computational unconscious. That, at least, is the hypothesis, for if it is the function of computers to automate thinking, and if dominant thought is the thought of domination, then what exactly has been automated?

    Already in the 1850s the worker appeared to Marx as a “conscious organ” in the “vast automaton” of the industrial machine, and by the time he wrote the first volume of Capital Marx was able to comment on the worker’s new labor of “watching the machine with his eyes and correcting its mistakes with his hands” (Marx 1867: 496, 502). Marx’s prescient observation with respect to the emergent role of visuality in capitalist production, along with his understanding that the operation of industrial machinery posits and presupposes the operation of other industrial machinery, suggests what was already implicit if not fully generalized in the analysis: that Dr. Ure’s notion, cited by Marx, of the machine as a “vast automaton,” was scalable—smaller machines, larger machines, entire factories could be thus conceived, and with the increasing scale and ubiquity of industrial machines, the notion could well describe the industrial complex as a whole. Historically considered, “watching the machine with his eyes and correcting the mistakes with his hands” thus appears as an early description of what information workers such as you and I do on our screens. To extrapolate: distributed computation and its integration with industrial process and the totality of social processes suggest that not only has society as a whole become a vast automaton profiting from the metabolism of its conscious organs, but further that the confrontation or interface with the machine at the local level (“where we are”) is an isolated and phenomenal experience that is not equivalent to the perspective of the automaton or, under capitalism, that of Capital. Given that here, while we might still be speaking about intelligence, we are not necessarily speaking about subjects in the strict sense, we might replace Althusser’s relation of S-s—Big Subject (God, the State, etc.) to small subject (“you” who are interpellated with and in ideology)—with AI-ai—Big Artificial Intelligence (the world system as organized by computational capital) and “you” Little Artificial Intelligence (as organized by the same). Here subjugation is not necessarily intersubjective, and does not require recognition. The AI does not speak your language even if it is your operating system. With this in mind we may at once understand that the space-time regimes of subjectivity (point-perspective, linear time, realism, individuality, discourse function, etc.) that once were part of the digital armature of “the human,” have been profitably shattered, and that the fragments have been multiplied and redeployed under the requisites of new management. We might wager that these outmoded templates or protocols may still also meaningfully refer to a register of meaning and conceptualization that can take the measure of historical change, if only for some kind of species remainder whose value is simultaneously immeasurable, unknown and hanging in the balance.

    Ironically perhaps, given the progress narratives attached to technical advances and the attendant advances in capital accumulation, Marx’s hypothesis in Capital Chapter 15, “Machinery and Large-Scale Industry,” that “it would be possible to write a whole history of the inventions made since 1830 for the purpose of providing capital with weapons against working class revolt” (1867, 563), casts an interesting light on the history of computing and its creation-imposition of new protocols. Not only have the incredible innovations of workers been abstracted and absorbed by machinery, but so also have their myriad antagonisms toward capitalist domination. Machinic perfection meant the imposition of continuity and the removal of “the hand of man” by fixed capital, in other words, both the absorption of know-how and the foreclosure of forms of disruption via automation (Marx 1867, 502).

    Dialectically understood, subjectivity, while a force of subjugation in some respects, also had its own arsenal of anti-capitalist sensibilities. As a way of talking about non-conformity, anti-sociality and the high price of conformity and its discontents, the unconscious still has its uses, despite its unavoidable and perhaps nostalgic invocation of a future that has itself been foreclosed. The conscious organ does not entirely grasp the cybernetic organism of which it is a part; nor does it fully grasp the rationale of its subjugation. If the unconscious was machinic, it is now computational, and if it is computational it is also locked in a struggle with capitalism. If what underlies perceptual and cognitive experience is the automaton, the vast AI, what I will be referring to as The Computer, which is the totalizing integration of global practice through informatic processes, then from the standpoint of production we constitute its unconscious. However, as we are ourselves unaware of our own constitution, the Unconscious of producers is their/our specific relation to what Paolo Virno acerbically calls, in what can only be a lamentation of history’s perverse irony, “the communism of capital” (2004, 110). If the revolution killed its father (Marx) and married its mother (Capitalism), it may be worth considering the revolutionary prospects of an analysis of this unconscious.

    Introduction: The Computational Unconscious

    Beginning with the insight that the rise of capitalism marks the onset of the first universalizing digital culture, this essay, and the book of which it is chapter one, develops the insights of The Cinematic Mode of Production (Beller 2006) in an effort to render the violent digital subsumption by computational racial capital that the (former) “humans” and their (excluded) ilk are collectively undergoing in a manner generative of sites of counter-power—of, let me just say it without explaining it, derivatives of counter-power, or, Derivative Communism. To this end, the following section offers a reformulation of Marx’s formula for capital, Money-Commodity-Money’ (M-C-M’), that accounts for distributed production in the social factory, and by doing so hopes to direct attention to zones where capitalist valorization might be prevented or refused. Prevented or refused not only to break a system which itself functions by breaking the bonds of solidarity and mutual trust that formerly were among the conditions that made a life worth living, but also to posit the redistribution of our own power towards ends that for me are still best described by the word communist (or perhaps meta-communist but that too is for another time). This thinking, political in intention, speculative in execution and concrete in its engagement, also proposes a revaluation of the aesthetic as an interface that sensualizes information. As such, the aesthetic is both programmed, and programming—a privileged site (and indeed mode) of confrontation in the digital apartheid of the contemporary.

    Along these lines, and similar to the analysis pursued in The Cinematic Mode of Production, I endeavor to de-fetishize a platform—computation itself—one that can only be properly understood when grasped as a means of production embedded in the bios. While computation is often thought of as being the thing accomplished by hardware churning through a program (the programmatic quantum movements of a discrete state machine), it is important to recognize that the universal Turing machine was (and remains) media indifferent only in theory and is thus justly conceived of as an abstract machine in the realm of ideas and indeed of the ruling ideas. However, it is an abstract machine that, like all abstractions, evolves out of concrete circumstances and practices; which is to say that the universal Turing Machine is itself an abstraction subject to historical-materialist critique. Furthermore, Turing Machines iterate themselves on the living, on life, reorganizing its practices. One might situate the emergence and function of the universal Turing machine as perhaps among the most important abstract machines in the last century, save perhaps that of capital itself. However, both their ranking and even their separability is here what we seek to put into question.

    Without a doubt, the computational process, like the capitalist process, has a corrosive effect on ontological precepts, accomplishing a far-reaching liquidation of tradition that includes metaphysical assumptions regarding the character of essence, being, authenticity and presence. And without a doubt, computation has been built even as it has been discovered. The paradigm of computation marks an inflection point in human history that reaches along temporal and spatial axes: both into the future and back into the past, out to the cosmos and into the sub-atomic. At any known scale, from Planck time (10^-44 seconds) to yottaseconds (10^24 seconds), and from 10^-35 to 10^27 meters, computation, conceptualization and sense-making (sensation) have become inseparable. Computation is part of the historicity of the senses. Just ask that baby using an iPad.

    The slight displacement of the ontology of computation implicit in saying that it has been built as much as discovered (that computation has a history even if it now puts history itself at risk) allows us to glimpse, if only from what Laura Mulvey calls “the half-light of the imaginary” (1975, 7)—the general antagonism is feminized when the apparatus of capitalization has overcome the symbolic—that computation is not, so far as we can know, the way of the universe per se, but rather the way of the universe as it has become intelligible to us vis-à-vis our machines. The understanding, from a standpoint recognized as science, that computation has fully colonized the knowable cosmos (and is indeed one with knowing) is a humbling insight, significant in that it allows us to propose that seeing the universe as computation, as, in short, simulable, if not itself a simulation (the computational effect of an informatic universe), may be no more than the old anthropocentrism now automated by apparatuses. We see what we can see with the senses we have—autopoesis. The universe as it appears to us is figured by—that is, it is a figuration of—computation. That’s what our computers tell us. We build machines that discern that the universe functions in accord with their self-same logic. The recursivity effects the God trick.

    Parametrically translating this account of cosmic emergence into the domain of history reveals a disturbing allegiance of computational consciousness, organized by the computational unconscious, to what Silvia Federici calls the system of global apartheid. Historicizing computational emergence pits its colonial logic directly against what Fred Moten and Stefano Harney identify as “the general antagonism” (2013, 10) (itself the reparative antithesis, or better perhaps the reverse subsumption of the general intellect as subsumed by capital). The procedural universalization of computation is a cosmology that attributes and indeed enforces a sovereignty tantamount to divinity, externalities be damned. Dissident, fugitive planning and black study—a studied refusal of optimization, a refusal of computational colonialism—may offer a way out of the current geo-(post-)political and its computational orthodoxy.

    Computational Idolatry and Multiversality

    In the new idolatry cathected to inexorable computational emergence, the universe is itself currently imagined as a computer. Here’s the seductive sound of the current theology from a conference sponsored by the sovereign state of NYU:

    As computers become progressively faster and more powerful, they’ve gained the impressive capacity to simulate increasingly realistic environments. Which raises a question familiar to aficionados of The Matrix—might life and the world as we know it be a simulation on a super advanced computer? “Digital physicists” have developed this idea well beyond the sci-fi possibilities, suggesting a new scientific paradigm in which computation is not just a tool for approximating reality but is also the basis of reality itself. In place of elementary particles, think bits; in place of fundamental laws of physics, think computer algorithms. (Scientific American 2011)

    Science fiction, in the form of “the Matrix,” is here used to figure a “reality” organized by simulation, but then this reality is quickly dismissed as something science has moved well beyond. However, it would not be illogical here to propose that “reality” is itself a science fiction—a fiction whose current author is no longer the novel or Hollywood but science. It is in a way no surprise that, consistent with “digital physics,” MIT physicist Max Tegmark claims that consciousness is a state of matter: consciousness, as a phenomenon of information storage and retrieval, is a property of matter described by the term “computronium.” Humans represent a rather low level of complexity. In the neo-Hegelian narrative in which the philosopher-scientist reveals the working out of world—or, rather, cosmic—spirit, one might say that it is as science fiction—one of the persistent fictions licensed by science—that “reality itself” exists at all. We should emphasize that the trouble here is not so much with “reality,” the trouble here is with “itself.” To the extent that we recognize that poesis (making) has been extended to our machines and it is through our machines that we think and perceive, we may recognize that reality is itself a product of their operations. The world begins to look very much like the tools we use to perceive it, to the point that Reality itself is thus a simulation, as are we—a conclusion that concurs with the notion of a computational universe, but that seems to (conveniently) elide the immediate (colonial) history of its emergence. The emergence of the tools of perception is taken as universal, or, in the language of a quantum astrophysics that posits four levels of multiverses: multiversal. In brief, the total enclosure by computation of observer and observed is either reality itself becoming self-aware, or tautological, waxing ideological, liquidating as it does historical agency by means of the suddenly a priori stochastic processes of cosmic automation.

    Well! If total cosmic automation, then no mistakes, so we may as well take our time-bound chances and wager on fugitive negation in the precise form of a rejection of informatic totalitarianism. Let us sound the sedimented dead labor inherent in the world-system, its emergent computational armature and its iconic self-representations. Let us not forget that those machines are made out of embodied participation in capitalist digitization, no matter how disappeared those bodies may now seem. Marx says, “Consciousness is… from the very beginning a social product and remains so for as long as men exist at all” (Tucker 1978, 178). The inescapable sociality and historicity of knowledge, in short, its political ontology, follows from this—at least so long as humans “exist at all.”

    The notion of a computational cosmos, though not universally or even widely consented to by scientific consciousness, suggests that we respire in an aporetic space—in the null set (itself a sign) found precisely at the intersection of a conclusion reached by Gödel in mathematics (Hofstadter 1979)—that no sufficiently powerful logical system is internally closed, since within any such system statements can be formulated that can neither be proved nor disproved—and a different conclusion reached by Maturana and Varela (1992), and also Niklas Luhmann (1989), that a system’s self-knowing, its autopoesis, knows no outside; it can know only in its own terms and thus knows only itself. In Gödel’s view, systems are ineluctably open, there is no closure, complete self-knowledge is impossible and thus there is always an outside or a beyond, while in the latter group’s view, our philosophy, our politics and apparently our fate is wedded to a system that can know no outside since it may only render an outside in its own terms, unless, or perhaps, even if/as that encounter is catastrophic.
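    For reference, the Gödel side of this aporia can be stated compactly. The formulation below is the standard textbook one (the Gödel–Rosser form of the first incompleteness theorem), not Beller’s or Hofstadter’s wording:

        For any consistent, effectively axiomatizable theory $T$ that interprets elementary arithmetic,
        $$\exists\, G_T \;:\; T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T.$$

    That is, some sentence $G_T$ of the system can neither be proved nor refuted within the system, which is the sense in which such systems remain “ineluctably open.”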

    Let’s observe the following: 1) there must be an outside or a beyond (Gödel); 2) we cannot know it (Maturana and Varela); 3) and yet…. In short, we don’t know ourselves and all we know is ourselves. One way out of this aporia is to say that we cannot know the outside and remain what we are. Enter history: Multiversal Cosmic Knowledge, circa 2017, despite its awesome power, turns out to be pretty local. If we embrace the two admittedly humbling insights regarding epistemic limits—on the one hand, that even at the limits of computationally informed knowledge (our autopoesis) all we can know is ourselves, and, on the other, Gödel’s insight that any “ourselves” whatsoever that is identified with what we can know is systemically excluded from being All—then it is axiomatic that nothing (in all valences of that term) fully escapes computation—for us. Nothing is excluded from what we can know except that which is beyond the horizon of our knowledge, which for us is precisely nothing. This is tantamount to saying that rational epistemology is no longer fully separable from the history of computing—at least for any us who are, willingly or not, participant in contemporary abstraction. I am going to skip a rather lengthy digression about fugitive nothing as precisely that bivalent point of inflection that escapes the computational models of consciousness and the cosmos, and just offer its conclusion as the next step in my discussion: We may think we think—algorithmically, computationally, autonomously, or howsoever—but the historically materialized digital infrastructure of the socius thinks in and through us as well. Or, as Marx put it, “The real subject remains outside the mind and independent of it—that is to say, so long as the mind adopts a purely speculative, purely theoretical attitude. Hence the subject, society, must always be envisaged as the premises of conception even when the theoretical method is employed” (Marx: vol. 28, 38-39).[3]

    This “subject, society,” in Marx’s terms, is present even in its purported absence—it is inextricable from and indeed overdetermines theory and, thus, thought: in other words, language, narrative, textuality, ideology, digitality, cosmic consciousness. This absent structure informs Althusser’s Lacanian-Marxist analysis of Ideology (and of “the ideology of no ideology,” 1977) as the ideological moment par excellence (an analog way of saying “reality” is simulation), as well as his beguiling (because at once necessary and self-negating) possibility of a subjectless scientific discourse. This non-narrative, unsymbolizable absent structure akin to the Lacanian “Real” also informs Jameson’s concept of the political unconscious as the black-boxed formal processor of said absent structure, indicated in his work by the term “History” with a capital “H” (1981). We will take up Althusser and Jameson in due time (but not in this paper). For now, however, for the purposes of our mediological investigation, it is important to pursue the thought that precisely this functional overdetermination, which already informed Marx’s analysis of the historicity of the senses in the 1844 manuscripts, extends into the development of the senses and the psyche. As Jameson put it in The Political Unconscious thirty-five years ago: “That the structure of the psyche is historical and has a history, is… as difficult for us to grasp as that the senses are not themselves natural organs but rather the result of a long process of differentiation even within human history” (1981, 62).

    The evidence for the accuracy of this claim, built from Marx’s notion that “the forming of the five senses requires the history of the world down to the present,” has been increasing. There is a host of work on the inseparability of technics and the so-called human (from Mauss to Simondon, Deleuze and Guattari, and Bernard Stiegler) that increasingly makes it possible to understand and even believe that the human, along with consciousness, the psyche, the senses and, consequently, the unconscious are historical formations. My own essay “The Unconscious of the Unconscious” from The Cinematic Mode of Production traces Lacan’s use of “montage,” “the cut,” the gap, objet a, photography and other optical tropes and argues (a bit too insistently perhaps) that the unconscious of the unconscious is cinema, and that a scrambling of linguistic functions by the intensifying instrumental circulation of ambient images (images that I now understand as derivatives of a larger calculus) instantiates the presumably organic but actually equally technical cinematic black box known as the unconscious.[4] Psychoanalysis is the institutionalization of a managerial technique for emergent linguistic dysfunction (think literary modernism) precipitated by the onslaught of the visible.

    More recently, and in a way that suggests that the computational aspects of historical materialist critique are not as distant from the Lacanian Real as one might think, Lydia Liu’s The Freudian Robot (2010) shows convincingly that Lacan modeled the theory of the unconscious on information theory and cybernetics. Liu understands that Lacan’s emphasis on the importance of structure and the compulsion to repeat is explicitly addressed to “the exigencies of chance, randomness, and stochastic processes in general” (2010, 176). She combs Lacan’s writings for evidence that they are informed by information theory and provides us with some smoking guns, including the following:

    By itself, the play of the symbol represents and organizes, independently of the peculiarities of its human support, this something which is called the subject. The human subject doesn’t foment this game, he takes his place in it, and plays the role of the little pluses and minuses in it. He himself is an element in the chain which, as soon as it is unwound, organizes itself in accordance with laws. Hence the subject is always on several levels, caught up in the crisscrossing of networks. (quoted in Liu 2010, 176)

    Liu argues that “the crisscrossing of networks” alludes not so much to linguistic networks but to communication networks, and precisely references the information theory that Lacan read, particularly that of Georges Guilbaud, the author of What is Cybernetics?. She writes that “For Lacan, ‘the primordial couple of plus and minus’ or the game of even and odd should precede linguistic considerations and is what enables the symbolic order.”

    “You can play heads or tails by yourself,” says Lacan, “but from the point of view of speech, you aren’t playing by yourself – there is already the articulation of three signs comprising a win or a loss and this articulation prefigures the very meaning of the result. In other words, if there is no question, there is no game, if there is no structure, there is no question. The question is constituted, organized by the structure” (quoted in Liu 2010, 179). Liu comments that “[t]his notion of symbolic structure, consistent with game theory, [has] important bearings on Lacan’s paradoxically non-linguistic view of language and the symbolic order.”
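    To make the play of “the little pluses and minuses” concrete, here is a minimal sketch in Python of the triad coding Lacan outlines in the appendix to the seminar on “The Purloined Letter” (the 1/2/3 labels and the classify helper are illustrative conveniences, not Lacan’s or Liu’s notation):

        import random

        def classify(triad):
            """Label a window of three signs: 1 for constancy (+++/---),
            3 for alternation (+-+/-+-), 2 for the dissymmetric mixtures."""
            if triad in ("+++", "---"):
                return 1
            if triad in ("+-+", "-+-"):
                return 3
            return 2

        # A run of coin tosses, written as the "primordial couple" of plus and minus.
        tosses = "".join(random.choice("+-") for _ in range(12))

        # The "articulation of three signs": overlapping windows of three tosses.
        triads = [tosses[i:i+3] for i in range(len(tosses) - 2)]
        symbols = [classify(t) for t in triads]

        print(tosses)   # e.g. +-++--+-++-+
        print(symbols)  # the symbolic chain; once coded, not every succession
                        # of 1/2/3 is possible, so structure precedes any result

    The point of the toy is only that the coding, not the toss, is where a “win or a loss” acquires meaning: the same random series, once articulated into triads, already obeys laws of succession.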

    Let us not distract ourselves here with the question of whether or not game theory and statistical analysis represent discovery or invention. Heisenberg, Schrödinger, and information theory formalized the statistical basis that one way or another became a global (if not also multiversal) episteme. Norbert Wiener, another father, this time of cybernetics, defined statistics as “the science of distribution” (Wiener 1989, 8). We should pause here to reflect that, given that cybernetic research in the West was driven by military and, later, industrial applications, that is, applications deemed essential for the development of capitalism and the capitalist way of life, such a statement calls for a properly dialectical analysis. Distribution is inseparable from production under capitalism, and statistics is the science of this distribution. Indeed, we would want to make such a thesis resonate with the analysis of logistics recently undertaken by Moten and Harney and, following them, link the analysis of instrumental distribution to the Middle Passage, as the signal early modern consequence of the convergence of rationalization and containerization—precisely the “science” of distribution worked out in the French slave ship Adelaide or the British ship Brookes. For the moment, we underscore the historicity of the “science of distribution” and thus its historical emergence as a socio-symbolic system of organization and control. Keeping this emergence clearly in mind helps us to understand that mathematical models quite literally inform the articulation of History and the unconscious—not only homologously as paradigms in intellectual history, but materially, as ways of organizing social production in all domains. Whether logistical, optical or informatic, the technics of mathematical concepts, which is to say programs, orchestrate meaning and constitute the unconscious.

    Perhaps more elusive even than this historicity of the unconscious, grasped in terms of a digitally encoded matrix of materiality and epistemology that constitutes the unthought of subjective emergence, may be the notion that the “subject, society” extends into our machines. Vilém Flusser, in Towards a Philosophy of Photography, tells us,

    Apparatuses were invented to simulate specific thought processes. Only now (following the invention of the computer), and as it were in hindsight, it is becoming clear what kind of thought processes we are dealing with in the case of all apparatuses. That is: thinking expressed in numbers. All apparatuses (not just computers) are calculating machines and in this sense “artificial intelligences,” the camera included, even if their inventors were not able to account for this. In all apparatuses (including the camera) thinking in numbers overrides linear, historical thinking. (Flusser 2000, 31)

    This process of thinking in numbers, and indeed the generalized conversion of multiple forms of thought and practice to an increasingly unified systems language of numeric processing, by capital markets, by apparatuses, by digital computers requires further investigation. And now that the edifice of computation—the fixed capital dedicated to computation that either recognizes itself as such or may be recognized as such—has achieved a consolidated sedimentation of human labor at least equivalent to that required to build a large nation (a superpower) from the ground up, we are in a position to ask in what way has capital-logic and the logic of private property, which as Marx points out is not the cause but the effect of alienated wage- (and thus quantified) labor, structured computational paradigms? In what way has that “subject, society” unconsciously structured not just thought, but machine-thought? Thinking, expressed in numbers, materialized first by means of commodities and then in apparatuses capable of automating this thought. Is computation what we’ve been up to all along without knowing it? Flusser suggests as much through his notion that 1) the camera is a black box that is a programme, and, 2) that the photograph or technical image produces a “magical” relation to the world in as much as people understand the photograph as a window rather than as information organized by concepts. This amounts to the technical image as itself a program for the bios and suggests that the world has long been unconsciously organized by computation vis-à-vis the camera. As Flusser has it, cameras have organized society in a feedback loop that works towards the perfection of cameras. If the computational processes inherent in photography are themselves an extension of capital logic’s universal digitization (an argument I made in The Cinematic Mode of Production and extended in The Message is Murder), then that calculus has been doing its work in the visual reorganization of everyday life for almost two centuries.

    Put another way, thinking expressed in numbers (the principles of optics and chemistry) materialized in machines automates thought (thinking expressed in numbers) as program. The program of, say, the camera, functions as a historically produced version of what Katherine Hayles has recently called “nonconscious cognition” (Hayles 2016). Though locally perhaps no more self-aware than the sediment sorting process of a riverbed (another of Hayles’s computational examples), the camera nonetheless affects purportedly conscious beings from the domain known as the unconscious, as, to give but one shining example, feminist film theory clearly shows: The function of the camera’s program organizes the psycho-dynamics of the spectator in a way that at once structures film form through market feedback, gratifies the (white-identified) male ego and normalizes the violence of heteropatriarchy, and does so at a profit. Now that so much human time has gone into developing cameras, computer hardware and programming, such that hardware and programming are inextricable from the day to day and indeed nano-second to nano-second organization of life on planet earth (and not only in the form of cameras), we can ask, very pointedly, which aspects of computer function, from any to all, can be said to be conditioned not only by sexual difference but more generally still, by structural inequality and the logistics of racialization? Which computational functions perpetuate and enforce these historically worked up, highly ramified social differences? Structural and now infra-structural inequalities include social injustices—what could be thought of as and in a certain sense are algorithmic racism, sexism and homophobia, and also programmatically unequal access to the many things that sustain life, and legitimize murder (both long and short forms, executed by, for example, carceral societies, settler colonialism, police brutality and drone strikes), and catastrophes both unnatural (toxic mine-tailings, coltan wars) and purportedly natural (hurricanes, droughts, famines, ambient environmental toxicity). The urgency of such questions resulting from the near automation of geo-political emergence along with a vast conscription of agents is only exacerbated as we recognize that we are obliged to rent or otherwise pay tribute (in the form of attention, subscription, student debt) to the rentier capitalists of the infrastructure of the algorithm in order to access portions of the general intellect from its proprietors whenever we want to participate in thinking.

    For it must never be assumed that technology (even the abstract machine) is value-neutral, that it merely exists in some disinterested ideal place and is then utilized either for good or for ill by free men (it would be “men” in such a discourse). Rather, the machine, like Ariella Azoulay’s understanding of photography, has a political ontology—it is a social relation, and an ongoing one whose meaning is, as Azoulay says of the photograph, never at an end (2012, 25). Now that representation has been subsumed by machines, has become machinic (overcoded, as Deleuze and Guattari would say), everything that appears, appears in and through the machine, as a machine. For the present (and as Plato already recognized by putting it at the center of the Republic), even the Sun is political. Going back to my opening, the cosmos is merely a collection of billions of suns—an infinite politics.

    But really, this political ontology of knowledge, machines, consciousness, praxis should be obvious. How could technology, which of course includes the technologies of knowledge, be anything other than social and historical, the product of social relations? How could these be other than the accumulation, objectification and sedimentation of subjectivities that are themselves an historical product? The historicity of knowledge and perception seems inescapable, if not fully intelligible, particularly now, when it is increasingly clear that it is the programmatic automation of thought itself that has been embedded in our apparatuses. The programming and overdetermination of “choice,” of options, by a rationality that was itself embedded in the interested circumstances of life and continuously “learns” vis-à-vis the feedback life provides has become ubiquitous and indeed inexorable (I dismiss “Object Oriented Ontology” and its desperate effort to erase white-boy subjectivity thusly: there are no ontological objects, only instrumental epistemic horizons). To universalize contemporary subjectivity by erasing its conditions of possibility is to naturalize history; it is therefore to depoliticize it and therefore to recapitulate its violence in the present.

    The short answer then regarding digital universality is that technology (and thus perception, thought and knowledge) can only be separated from the social and historical—that is, from racial capitalism—by eliminating both the social and historical (society and history) through its own operations. While computers, if taken as a separate constituency along with a few of their biotic avatars, and then pressed for an answer, might once have agreed with Margaret Thatcher’s view that “there is no such thing as society,” one would be hard-pressed to claim that this post-sociological (and post-Birmingham) “discovery” is a neutral result. Thatcher’s observation, that “the problem with socialism is that you eventually run out of other people’s money,” while admittedly pithy, if condescending, classist and deadly, subordinates social needs to existing property-relations and their financial calculus at the ontological level. She smugly valorizes the status quo by positing capitalism as an untranscendable horizon since the social product is by definition always already “other people’s money.” But neoliberalism has required some revisioning of late (which is a polite way of saying that fascism has needed some updating): the newish but by now firmly established term “social media” tells us something more about the parasitic relation that the cold calculus of this mathematical universe of numbers has to the bios. To preserve global digital apartheid requires social media, the process(ing) of society itself cybernetically-interfaced with the logistics of racial-capitalist computation. This relation, a means of digital expropriation aimed to profitably exploit an equally significant global aspiration towards planetary communicativity and democratization, has become the preeminent engine of capitalist growth. Society, at first seemingly negated by computation and capitalism, is now directly posited as a source of wealth, for what is now explicitly computational capital and actually computational racial capital. The attention economy, immaterial labor, neuropower, semio-capitalism: all of these terms, despite their differences, mean in effect that society, as a deterritorialized factory, is no longer disappeared as an economic object; it disappears only as a full beneficiary of the dominant economy which is now parasitical on its metabolism. The social revolution in planetary communicativity is being farmed and harvested by computational capitalism.

    Dialectics of the Human-Machine

    For biologists it has become au courant when speaking of humans to speak also of the second genome—one must consider not just the 23 pairs of chromosomes of the human genome that replicate what was thought of as the human being as an autonomous life-form, but the genetic information and epigenetic functionality of all the symbiotic bacteria and other organisms without which there are no humans. Pursuant to this thought, we might ascribe ourselves a third genome: information. No good scientist today believes that human beings are free-standing forms, even if most (or really almost all) do not make the critique of humanity or even individuality through a framework that understands these categories as historically emergent interfaces of capitalist exchange. However, to avoid naturalizing the laws of capitalism as simply an expression of the higher (Hegelian) laws of energetics and informatics (in which, for example, ATP can be thought to function as “capital”), this sense of “our” embeddedness in the ecosystem of the bios must be extended to that of the materiality of our historical societies, and particularly to their systems of mediation and representational practices of knowledge formation—including the operations of textuality, visuality, data visualization and money—which, with convergence today, means precisely, computation.

    If we want to understand the emergence of computation (and of the anthropocene), we must attend to the transformations and disappearances of life forms—of forms of life in the largest sense. And we must do so in spite of the fact that the sedimentation of the history of computation would neutralize certain aspects of human aspiration and of humanity—including, ultimately, even the referent of that latter sign—by means of law, culture, walls, drones, derivatives, what have you. The biosynthetic process of computation and human being gives rise to post-humanism only to reveal that there were never any humans here in the first place: We have never been human—we know this now. “Humanity,” as a protracted example of méconnaissance—as a problem of what could be called the humanizing-machine or, better perhaps, the human-machine, is on the wane.

    Naming the human-machine, is of course a way of talking about the conquest, about colonialism, slavery, imperialism, and the racializing, sex-gender norm-enforcing regimes of the last 500 years of capitalism that created the ideological legitimation of its unprecedented violence in the so-called humanistic values it spat out. Aimé Césaire said it very clearly when he posed the scathing question in Discourse on Colonialism: “Civilization and Colonization?” (1972). “The human-machine” names precisely the mechanics of a humanism that at once resulted from and were deployed to do the work of humanizing planet Earth for the quantitative accountings of capital while at the same time divesting a large part of the planetary population of any claims to the human. Following David Golumbia, in The Cultural Logic of Computation (2009), we might look to Hobbes, automata and the component parts of the Leviathan for “human” emergence as a formation of capital. For so many, humanism was in effect more than just another name for violence, oppression, rape, enslavement and genocide—it was precisely a means to violence. “Humanity” as symptom of The Invisible Hand, AI’s avatar. Thus it is possible to see the end of humanism as a result of decolonization struggles, a kind of triumph. The colonized have outlasted the humans. But so have the capitalists.

    This is another place where recalling the dialectic is particularly useful. Enlightenment Humanism was a platform for the linear time of industrialization and the French revolution with “the human” as an operating system, a meta-ISA emerging in historical movement, one that developed a set of ontological claims which functioned in accord with the early period of capitalist digitality. The period was characterized by the institutionalization of relative equality (Cedric Robinson does not hesitate to point out that the precondition of the French Revolution was colonial slavery), privacy, property. Not only were its achievements and horrors inseparable from the imposition of logics of numerical equivalence, they were powered by the labor of the peoples of Earth, by the labor-power of disparate peoples, imported as sugar and spices, stolen as slaves, music and art, owned as objective wealth in the form of lands, armies, edifices and capital, and owned again as subjective wealth in the form of cultural refinement, aesthetic sensibility, bourgeois interiority—in short, colonial labor, enclosed by accountants and the whip, was expatriated as profit, while industrial labor, also expropriated, was itself sustained by these endeavors. The accumulation of the wealth of the world and of self-possession for some was organized and legitimated by humanism, even as those worlded by the growth of this wealth struggled passionately, desultorily, existentially, partially and at times absolutely against its oppressive powers of objectification and quantification. Humanism was colonial software, and the colonized were the outsourced content providers—the first content providers—recruited to support the platform of so-called universal man. This platform humanism is not so much a metaphor; rather it is the tendency that is unveiled by the present platform post-humanism of computational racial capital. The anatomy of man is the key to the anatomy of the ape, as Marx so eloquently put the telos of man. Is the anatomy of computation the key to the anatomy of “man”?

    So the end of humanism, which in a narrow (white, Euro-American, technocratic) view seems to arrive as a result of the rise of cyber-technologies, must also be seen as having been long willed and indeed brought about by the decolonizing struggles against humanism’s self-contradictory and, from the point of view of its own self-proclaimed values, specious organization. Making this claim is consistent with Césaire’s insight that people of the third world built the European metropoles. Today’s disappearance of the human might mean for the colonizers who invested so heavily in their humanisms, that Dr. Moreau’s vivisectioned cyber-chickens are coming home to roost. Fatally, it seems, since Global North immigration policy, internment centers, border walls, police forces give the lie to any pretense of humanism. It might be gleaned that the revolution against the humans has also been impacted by our machines. However, the POTUSian defeat of the so-called humans is double-edged to say the least. The dialectic of posthuman abundance on the one hand and the posthuman abundance of dispossession on the other has no truck with humanity. Today’s mainstream futurologists mostly see “the singularity” and apocalypse. Critics of the posthuman with commitments to anti-racist world-making have clearly understood the dominant discourse on the posthuman not as the end of the white liberal human subject but precisely, when in the hands of those not committed to an anti-racist and decolonial project, as a means for its perpetuation—a way of extending the unmarked, transcendental, sovereign subject (of Hobbes, Descartes, C.B. Macpherson)—effectively the white male sovereign who was in possession of a body rather than forced to be a body. Sovereignty itself must change (in order, as Giuseppe di Lampedusa taught us, to remain the same), for if one sees production and innovation on the side of labor, then capital’s need to contain labor’s increasing self-organization has driven it into a position where the human has become an impediment to its continued expansion. Human rights, though at times also a means to further expropriation, are today in the way.

    Let’s say that it is global labor that is shaking off the yoke of the human from without, as much as it is the digital machines that are devouring it from within. The dialectic of computational racial capital devours the human as a way of revolutionizing the productive forces. Weapon-makers, states, and banks, along with Hollywood and student debt, invoke the human only as a skeuomorph—an allusion to an old technology that helps facilitate adoption of the new. Put another way, the human has become a barrier to production; it is no longer a sustainable form. The human, and those (human and otherwise) falling under the paradigm’s dominion, must be stripped, cut, bundled, reconfigured in derivative forms. All hail the dividual. Again, female and racialized bodies and subjects have long endured this now universal fragmentation and forced recomposition, and very likely dividuality may also describe a precapitalist, pre-colonial interface with the social. However, we are obliged to point out that this, the current dissolution of the human into the infrastructure of the world-system, is double-edged, neither fully positive, nor fully negative—the result of the dialectics of struggles for liberation distributed around the planet. As a sign of the times, posthumanism may be, as has been remarked about capitalism itself, among those simultaneously best and worst things to ever happen in history. On the one hand, the disappearance of presumably ontological protections and legitimating status for some (including the promise of rights never granted to most), on the other, the disappearance of a modality of dehumanization and exclusion that legitimated and normalized white supremacist patriarchy by allowing its values to masquerade as universals. However, it is difficult to maintain optimism of the will when we see that that which is coming, that which is already upon us, may also be as bad or worse, and in absolute numbers is already worse, for unprecedented billions of concrete individuals. Frankly, in a world where the cognitive-linguistic functions of the species have themselves been captured by the ambient capitalist computation of social media and indeed of capitalized computational social relations, of what use is a theory of dispossession to the dispossessed?

    For those of us who may consider ourselves thinkers, it is our burden—in a real sense, our debt, living and ancestral—to make theory relevant to those who haunt it. Anything less is betrayal. The emergence of the universal value form (as money, the general form of wealth) with its human face (as white-maleness, the general form of humanity) clearly inveighs against the possibility of extrinsic valuation since the very notion of universal valuation is posited from within this economy. What Cedric Robinson shows in his extraordinary Black Marxism (1983) is that capitalism itself is a white mythology. The histories of racialization and capitalization are inseparable, and the treatment of capital as a pure abstraction deracinates its origins and functions—both its conditions of possibility as well as its operations—including those of the internal critique of capitalism that has been the basis of much of the Marxist tradition. Both capitalism and its negation as Marxism have proceeded through a disavowal of racialization. Quantitative exchanges of equivalents, circulating as exchange values without qualities, are the real abstractions that give rise to philosophy, science, and white liberal humanism wedded to the notion of the objective. Therefore, when it comes to values, there is no degree zero, only perhaps nodal points of bounded equilibrium. To claim neutrality for an early digital machine, say, money, that is, to argue that money as a medium is value-neutral because it embodies what has (in many respects correctly, but in a qualified way) been termed “the universal value form,” would be to miss the entire system of leveraged exploitation that sustains the money-system. In an isolated instance, money as the product of capital might be used for good (building shelters for the homeless) or for ill (purchasing Caterpillar bulldozers) or both (building shelters using Caterpillar machines), but not to see that the capitalist-system sustains itself through militarized and policed expropriation and large-scale, long-term universal degradation is to engage in mere delusional utopianism and self-interested (might one even say psychotic?) naysaying.

    Will the apologists calmly bear witness to the sacrifice of billions of human beings so that the invisible hand may placidly unfurl its/their abstractions in Kubrickian sublimity? 2001’s (Kubrick 1968) cold long shot of the species lifespan as an instance of a cosmic program is not so distant from the endemic violence of postmodern—and, indeed, post-human—fascism he depicted in A Clockwork Orange (Kubrick 1971). Arguably, 2001 rendered the cosmology of early Posthuman Fascism while A Clockwork Orange portrayed its psychology. Both films explored the aesthetics of programming. For the individual and for the species, what we beheld in these two films was the annihilation of our agency (at the level of the individual and of the species)—and it was eerily seductive, Benjamin’s self-destruction as an aesthetic pleasure of the highest order taken to cosmic proportions and raised to the level of Art (1969).

    So what of the remainders of those who may remain? Here, in the face of the annihilation of remaindered life (to borrow a powerfully dialectical term from Neferti Tadiar, 2016) by various iterations of techné, we are posing the following question: how are computers and digital computing, as universals, themselves an iteration of long-standing historical inequality, violence, and murder, and what are the entry points for an understanding of computation-society in which our currently pre-historic (in Marx’s sense of the term) conditions of computation might be assessed and overcome? This question of technical overdetermination is not a matter of a Kittlerian-style anti-humanism in which “media determine our situation,” nor is it a matter of the post-Kittlerian, seemingly user-friendly repurposing of dialectical materialism which in the beer-drinking tradition of “good-German” idealism, offers us the poorly historicized, neo-liberal idea of “cultural techniques” courtesy of Cornelia Vismann and Bernhard Siegert (Vismann 2013, 83-93; Siegert 2013, 48-65). This latter is a conveniently deracinated way of conceptualizing the distributed agency of everything techno-human without having to register the abiding fundamental antagonisms, the life and death struggle, in anything. Rather, the question I want to pose about computing is one capable of both foregrounding and interrogating violence, assigning responsibility, making changes, and demanding reparations. The challenge upon us is to decolonize computing. Has the waning not just of affect (of a certain type) but of history itself brought us into a supposedly post-historical space? Can we see that what we once called history, and is now no longer, really has been pre-history, stages of pre-history? What would it mean to say in earnest “What’s past is prologue?”[6] If the human has never been and should never be, if there has been this accumulation of negative entropy first via linear time and then via its disruption, then what? Postmodernism, posthumanism, Flusser’s post-historical, and Berardi’s After the Future notwithstanding, can we take the measure of history?

    Figure 1. Hollerith punch card (image source: Library of Congress, http://memory.loc.gov/mss/mcc/023/0008.jpg)

    Techno-Humanist Dehumanization

    I would like to conclude this essay with a few examples of techno-humanist dehumanization. In 1889, Herman Hollerith patented the punchcard system and mechanical tabulator that was used in the 1890 U.S. census and subsequently in censuses in Germany, England, Italy, Russia, Austria, Canada, France, Norway, Puerto Rico, Cuba, and the Philippines. A national census, which normally took eight to ten years to tabulate, now took a single year. The subsequent invention of the plugboard control panel in 1906 allowed tabulators to perform multiple sorts in whatever sequence was selected without having to rebuild the tabulators—an early form of programming. Hollerith’s Tabulating Machine Company merged with three other companies in 1911 to become the Computing Tabulating Recording Company, which renamed itself IBM in 1924.
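    As a rough modern gloss on what “multiple sorts in whatever sequence was selected” amounts to, here is a toy sketch in Python; the records and field names are invented for illustration and are not drawn from any actual census:

        from collections import Counter

        # Toy records standing in for punched cards; the fields are hypothetical.
        cards = [
            {"district": "A", "sex": "F", "occupation": "farmer"},
            {"district": "A", "sex": "M", "occupation": "clerk"},
            {"district": "B", "sex": "F", "occupation": "farmer"},
            {"district": "B", "sex": "F", "occupation": "clerk"},
        ]

        def tabulate(cards, *fields):
            """Count cards by any chosen sequence of fields. The 'plugboard' is
            just the choice and order of fields, changed without rebuilding
            any machinery."""
            return Counter(tuple(card[f] for f in fields) for card in cards)

        # Re-'wire' the same deck for different sorts:
        print(tabulate(cards, "district"))            # counts by district
        print(tabulate(cards, "sex", "occupation"))   # counts by sex, then occupation

    The point is only that the same stack of cards supports arbitrarily re-sequenced tabulations, which is what made the plugboard an early form of programming.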

While the census opens a rich field of inquiry that includes questions of statistics, computing, and state power that are increasingly relevant today (particularly taking into account the ever-presence of the NSA), for now I only want to extract two points: 1) humans became the fodder for statistical machines, and 2) as Vicente Rafael has shown regarding the Philippine census and as Edwin Black has shown with respect to the Holocaust, the development of this technology was inseparable from racialization and genocide (Rafael 2000; Black 2001).

Rafael shows that, coupled to photographic techniques, the census at once “discerned” and imposed a racializing schema that welded historical “progress” to ever-whiter waves of colonization, from Malay migration to Spanish Colonialism to U.S. Imperialism (2000). Racial fantasy meets white mythology meets World Spirit. For his part, Edwin Black (2001) writes:

    Only after Jews were identified—a massive and complex task that Hitler wanted done immediately—could they be targeted for efficient asset confiscation, ghettoization, deportation, enslaved labor, and, ultimately, annihilation. It was a cross-tabulation and organizational challenge so monumental, it called for a computer. Of course, in the 1930s no computer existed.

    But IBM’s Hollerith punch card technology did exist. Aided by the company’s custom-designed and constantly updated Hollerith systems, Hitler was able to automate his persecution of the Jews. Historians have always been amazed at the speed and accuracy with which the Nazis were able to identify and locate European Jewry. Until now, the pieces of this puzzle have never been fully assembled. The fact is, IBM technology was used to organize nearly everything in Germany and then Nazi Europe, from the identification of the Jews in censuses, registrations, and ancestral tracing programs to the running of railroads and organizing of concentration camp slave labor.

    IBM and its German subsidiary custom-designed complex solutions, one by one, anticipating the Reich’s needs. They did not merely sell the machines and walk away. Instead, IBM leased these machines for high fees and became the sole source of the billions of punch cards Hitler needed (Black 2001).

The sorting of populations and individuals by forms of social difference including “race,” ability, and sexual preference (Jews, Roma, homosexuals, people deemed mentally or physically handicapped), for the purpose of sending people who failed to meet Nazi eugenic criteria off to concentration camps to be dispossessed, humiliated, tortured, and killed, means that some aspects of computer technology—here, the Search Engine—emerged from this particular social necessity sometimes called Nazism (Black 2001). The Philippine-American War, in which Americans killed between one-tenth and one-sixth of the population of the Philippines, and the Nazi-administered Holocaust are but two world-historical events that are part of the meaning of early computational automation. We might say that computers bear the legacy of imperialism and fascism—it is inscribed in their operating systems.

The mechanisms, as well as the social meaning of computation, were refined in its concrete applications. The process of abstraction hid the violence of abstraction, even as it integrated the result with economic and political protocols and directly effected certain behaviors. It is a well-known fact that Claude Shannon’s landmark paper, “A Mathematical Theory of Communication,” proposed a general theory of communication that was content-indifferent (1948, 379-423). This seminal work created a statistical, mathematical model of communication while simultaneously consigning any and all specific content to irrelevance as regards the transmission method itself. Like use-value under the management of the commodity form, the message became only a supplement to the exchange value of the code. Elsewhere I have more to say about the fact that some of the statistical information Shannon derived about letter frequency in English used as its ur-text Jefferson the Virginian (Malone 1948), the first volume of Dumas Malone’s monumental six-volume study of Jefferson, a biography famously interrogated by Annette Gordon-Reed in her Thomas Jefferson and Sally Hemings: An American Controversy for its suppression of information regarding Jefferson’s relation to slavery (1997).[7] My point here is that the rules for content indifference were themselves derived from a particular content and that the language used as a standard referent was a specific deployment of language. The representative linguistic sample did not represent the whole of language, but language that belongs to a particular mode of sociality and racialized enfranchisement. Shannon’s deprivileging of the referent of the logos as referent, and his attention only to the signifiers, was an intensification of the slippage of signifier from signified (“We, the people…”) already noted in linguistics and functionally operative in the elision of slavery in Jefferson’s biography, to say nothing of the same text’s elision of slave-narrative and African-American speech. Shannon brilliantly and successfully developed a re-conceptualization of language as code (sign system) and now as mathematical code (numerical system) that no doubt found another of its logical (and material) conclusions (at least with respect to metaphysics) in post-structuralist theory and deconstruction, with the placing of the referent under erasure. This recession of the real (of being, the subject, and experience—in short, the signified) from codification allowed Shannon’s mathematical abstraction of rules for the transmission of any message whatsoever to become the industry standard even as it also meant, quite literally, the dehumanization of communication—its severance from a people’s history.
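As a rough, hedged illustration of the kind of content-indifferent letter statistics at issue, the sketch below computes a first-order letter-frequency entropy from an arbitrary English text file; the file path is hypothetical, and Shannon's own analysis also drew on higher-order (digram and trigram) statistics.

```python
# Sketch: first-order letter-frequency entropy of an English text.
# The input file is hypothetical; any plain-text English source would do.
import math
from collections import Counter

text = open("sample_english_text.txt", encoding="utf-8").read().lower()
letters = [c for c in text if c.isalpha()]

counts = Counter(letters)
total = sum(counts.values())

# First-order entropy in bits per letter: H = -sum(p_i * log2(p_i)).
entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
print(f"{entropy:.2f} bits per letter")
```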

In a 1987 interview, Shannon was quoted as saying, “I can visualize a time in the future when we will be to robots as dogs are to humans…. I’m rooting for the machines!” If humans are the robot’s companion species, they (or is it we?) need a manifesto. The difficulty is that the labor of our “being,” such as it is/was, is encrypted in their function. And “we” have never been “one.”

Tara McPherson has brilliantly argued that the modularity achieved in the development of UNIX has its analogue in racial segregation. Modularity and encapsulation, necessary to the writing of the UNIX code that still underpins contemporary operating systems, were emergent general socio-technical forms, what we might call technologies, abstract machines, or real abstractions. “I am not arguing that programmers creating UNIX at Bell Labs and at Berkeley were consciously encoding new modes of racism and racial understanding into digital systems,” McPherson argues. “The emergence of covert racism and its rhetoric of colorblindness are not so much intentional as systemic. Computation is a primary delivery method of these new systems and it seems at best naïve to imagine that cultural and computational operating systems don’t mutually infect one another.” (McPherson 2012, 30-31; italics in original)

This is the computational unconscious at work—the dialectical inscription and re-inscription of sociality and machine architecture that then becomes the substrate for the next generation of consciousness, ad infinitum. In a recent unpublished paper entitled “The Lorem Ipsum Project,” Alana Ramjit (2014) examines the industry reference standards behind the now-digital rendering of speech and graphic images. These include Kodak’s “Shirley cards” for standard skin tone (white), the Harvard Sentences for standard audio (white), the “Indian Head Test Pattern” for standard broadcast image (white fetishism), and “Lenna,” an image of Lena Soderberg taken from Playboy magazine (white patriarchal unconscious) that has become the reference standard image for the development of graphics processing. Each of these examples testifies to an absorption of the socio-historical at every step of mediological and computational refinement.

More recently, as Chris Vitale brought out in a powerful presentation on machine learning and neural networks given at Pratt Institute in 2016, Facebook’s machine has produced “Deep Face,” an image of the minimally recognizable human face. However, this ur-human face, purported to be the minimally recognizable form of the human face, turns out to be a white guy. This is a case in point of the extension of colonial relations into machine function. Given the racialization of poverty in the system of global apartheid (Federici 2012), we have on our hands (or, rather, in our machines) a new modality of automated genocide. Fascism and genocide have new mediations and may not just have adapted to new media but may have merged. Of course, the terms and names of genocidal regimes change, but the consequences persist. Just yesterday it was called neo-liberal democracy. Today it’s called the end of neo-liberalism. The current world-wide crisis in migration is one of the symptoms of the genocidal tendencies of the most recent coalescence of the “practically” automated logistics of race, nation and class. Today racism is at once a symptom of the computational unconscious, an operation of non-conscious cognition, and still just the garden variety self-serving murderous stupidity that is the legacy of slavery, settler colonialism and colonialism.

Thus we may observe that the statistical methods utilized by IBM to find Jews in the shtetl are operative in Wiener’s anti-aircraft cybernetics as well as in Israel’s Iron Dome missile defense system. But the prevailing view—even if it is not one of pure mathematical abstraction, in which computational process has its essence without reference to any concrete whatever—can be found in what follows. As a Bloomberg News article entitled “Traces of Israel’s Iron Dome Can Be Found in Tech Startups” almost giddily reports:

    The Israeli-engineered Iron Dome is a complex tapestry of machinery, software and computer algorithms capable of intercepting and destroying rockets midair. An offshoot of the missile-defense technology can also be used to sell you furniture. (Coppola 2014)[8]

Not only is war good computer business, it’s good for computerized business. It is ironic that the technology is likened to a tapestry and is now used to sell textiles – almost as if it were haunted by Lisa Nakamura’s recent findings regarding the (forgotten) role of Navajo women weavers in the making of early transistors for Fairchild, the Silicon Valley company founded by engineers who left the lab of founding father and infamous eugenicist William Shockley.[9] The article goes on to confess that the latest consumer spin-offs, which facilitate the real-time imaging of couches in your living room and are capable of driving sales on the domestic front, exist thanks to U.S. financial support for Zionism and its militarized settler colonialism in Palestine. “We have American-backed apartheid and genocide to thank for being able to visualize a green moderne couch in our very own living room before we click ‘Buy now.’” (Okay, this is not really a quotation, but it could have been.)

Census, statistics, informatics, cryptography, war machines, industry standards, markets—all management techniques for the organization of otherwise unruly humans, sub-humans, posthumans and nonhumans by capitalist society. The ethos of content indifference, along with the encryption of social difference as both mode and means of systemic functionality, is sustainable only so long as derivative human beings are themselves rendered as content providers, body and soul. But it is not only tech spinoffs from the racist war dividends we should be tracking. Wendy Chun (2004, 26-51) has shown in utterly convincing ways that the gendered history of the development of computer programming on the ENIAC, in which male mathematicians instructed female programmers to physically make the electronic connections (and remove any bugs), echoes into the present experiences of sovereignty enjoyed by users who have, in many respects, become programmers (even if most of us have little or no idea how programming works, or even that we are programming).

Chun notes that “during World War II almost all computers were young women with some background in mathematics. Not only were women available for work then, they were also considered to be better, more conscientious computers, presumably because they were better at repetitious, clerical tasks” (Chun 2004, 33). One could say that programming became programming and software became software when commands shifted from commanding a “girl” to commanding a machine. Clearly this puts the gender of the commander in question.

Chun suggests that the augmentation of our power through the command-control functions of computation is a result of what she calls the “Yes sir” of the feminized operator—that is, of servile labor (2004). Indeed, in the ENIAC and other early machines the execution of the operator’s order was to be carried out by the “wren” or the “slave.” For the desensitized, this information may seem incidental, a mere development or advance beyond the instrumentum vocale (the “speaking tool,” i.e., a Roman term for “slave”) in which even the communicative capacities of the slave are totally subordinated to the master. Here we must struggle to pose the larger question: what are the implications of this gendered and racialized form of power exercised in the interface? What is its relation to gender oppression, to slavery? Is this mode of command-control over bodies and extended to the machine a universal form of empowerment, one to which all (posthuman) bodies might aspire, or is it a mode of subjectification built in the footprint of domination in such a way that it replicates the beliefs, practices and consequences of “prior” orders of whiteness and masculinity in unconscious but nonetheless murderous ways?[10] Is the computer the realization of the power of a transcendental subject, or of the subject whose transcendence was built upon a historically developed version of racial masculinity based upon slavery and gender violence?

Andrew Norman Wilson’s scandalizing film Workers Leaving the Googleplex (2011), the making of which got him fired from Google, depicts the lower-class, mostly of-color workers leaving the Google Mountain View campus during off hours. These workers, the book scanners, shared neither the spaces nor the perks of Google’s white-collar workers; they had different parking lots and entrances and drove a different class of vehicles. Wilson has also curated and developed a set of images that show the condom-clad fingers (black, brown, female) of workers next to partially scanned book pages. He considers these mis-scans new forms of documentary evidence. While digitization and computation may seem to have transcended certain humanistic questions, it is imperative that we understand that their posthumanism is also radically untranscendent, grounded as it is on the living legacies of oppression, and, in the last instance, on the radical dispossession of billions. These billions are disappeared, literally utilized as a surface of inscription for everyday transmissions. The dispossessed are the substrate of the codification process by the sovereign operators commanding their screens. The digitized, rewritable screen pixels are just the visible top-side (virtualized surface) of bodies dispossessed by capital’s digital algorithms on the bottom-side where, arguably, other metaphysics still pertain. Not Hegel’s world spirit—whether in the form of Kurzweil’s singularity or Tegmark’s computronium—but rather Marx’s imperative towards a ruthless critique of everything existing can begin to explain how and why the current computational eco-system is co-functional with the unprecedented dispossession wrought by racial computational capitalism and its system of global apartheid. Racial capitalism’s programs continue to function on the backs of those consigned to servitude. Data-visualization, whether in the form of selfie, global map, digitized classic or downloadable sound of the Big Bang, is powered by this elision. It is, shall we say, inescapably local to planet earth, fundamentally historical in relation to species emergence, inexorably complicit with the deferral of justice.

The Global South, with its now world-wide distribution, is endemic to the geopolitics of computational racial capital—it is one of its extraordinary products. The computronics that organize the flow of capital through its materials and signs also organize the consciousness of capital and with it the cosmological erasure of the Global South. Thus the computational unconscious names a vast aspect of global function that still requires analysis. And thus we sneak up on the two principal meanings of the concept of the computational unconscious. On the one hand, we have the problematic residue of amortized consciousness (and the praxis thereof) that has gone into the making of contemporary infrastructure—meaning to say, the structural repression and forgetting that is endemic to the very essence of our technological buildout. On the other hand, we have the organization of everyday life taking place on the basis of this amortization, that is, on the basis of a dehistoricized, deracinated relation to both concrete and abstract machines that function by virtue of the fact that intelligible history has been shorn off of them and its legibility purged from their operating systems. Put simply, we have forgetting, the radical disappearance and expunging from memory, of the historical conditions of possibility of what is. As a consequence, we have the organization of social practice and futurity (or lack thereof) on the basis of this encoded absence. The capture of the general intellect means also the management of the general antagonism. Never has it been truer that memory requires forgetting – the exponential growth in memory storage means also an exponential growth in systematic forgetting – the withering away of the analogue. As a thought experiment, one might imagine a vast and empty vestibule, a James Ingo Freed global holocaust memorial of unprecedented scale, containing all the oceans and lands real and virtual, and dedicated to all the forgotten names of the colonized, the enslaved, the encamped, the statisticized, the read, written and rendered, in the history of computational calculus—of computer memory. These too, and the anthropocene itself, are the sedimented traces that remain among the constituents of the computational unconscious.

    _____

Jonathan Beller is Professor of Humanities and Media Studies and Director of the Graduate Program in Media Studies at Pratt Institute. His books include The Cinematic Mode of Production: Attention Economy and the Society of the Spectacle (2006); Acquiring Eyes: Philippine Visuality, Nationalist Struggle, and the World-Media System (2006); and The Message Is Murder: Substrates of Computational Capital (2017). He is a member of the Social Text editorial collective.

    Back to the essay

    _____

    Notes

    [1] A reviewer of this essay for b2o: An Online Journal notes, “the phrase ‘digital computer’ suggests something like the Turing machine, part of which is characterized by a second-order process of symbolization—the marks on Turing’s tape can stand for anything, & the machine processing the tape does not ‘know’ what the marks ‘mean.’” It is precisely such content indifferent processing that the term “exchange value,” severed as it is of all qualities, indicates.

    [2] It should be noted that the reverse is also true: that race and gender can be considered and/as technologies. See Chun (2012), de Lauretis (1987).

    [3] To insist on first causes or a priori consciousness in the form of God or Truth or Reality is to confront Marx’s earlier acerbic statement against a form of abstraction that eliminates the moment of knowing from the known in The Economic and Philosophic Manuscripts of 1844,

Who begot the first man and nature as a whole? I can only answer you: Your question is itself a product of abstraction. Ask yourself how you arrived at that question. Ask yourself if that question is not posed from a standpoint to which I cannot reply, because it is a perverse one. Ask yourself whether such a progression exists for a rational mind. When you ask about the creation of nature and man you are abstracting in so doing from man and nature. You postulate them as non-existent and yet you want me to prove them to you as existing. Now I say give up your abstraction and you will give up your question. Or, if you want to hold onto your abstraction, then be consistent, and if you think of man and nature as non-existent, then think of yourself as non-existent, for you too are surely man and nature. Don’t think, don’t ask me, for as soon as you think and ask, your abstraction from the existence of nature and man has no meaning. Or are you such an egoist that you postulate everything as nothing and yet want yourself to be? (Tucker 1978, 92)

    [4] If one takes the derivative of computational process at a particular point in space-time one gets an image. If one integrates the images over the variables of space and time, one gets a calculated exploit, a pathway for value-extraction. The image is a moment in this process, the summation of images is the movement of the process.

    [5] See Harney and Moten (2013). See also Browne (2015), especially 43-50.

[6] In practical terms, the Alternative Informatics Association, in the announcement for their Internet Ungovernance Forum, puts things as follows:

We think that Internet’s problems do not originate from technology alone, that none of these problems are independent of the political, social and economic contexts within which Internet and other digital infrastructures are integrated. We want to re-structure Internet as the basic infrastructure of our society, cities, education, healthcare, business, media, communication, culture and daily activities. This is the purpose for which we organize this forum.

    The significance of creating solidarity networks for a free and equal Internet has also emerged in the process of the event’s organization. Pioneered by Alternative Informatics Association, the event has gained support from many prestigious organizations worldwide in the field. In this two-day event, fundamental topics are decided to be ‘Surveillance, Censorship and Freedom of Expression, Alternative Media, Net Neutrality, Digital Divide, governance and technical solutions’. Draft of the event’s schedule can be reached at https://iuf.alternatifbilisim.org/index-tr.html#program (Fidaner, 2014).

    [7] See Beller (2016, 2017).

[8] Coppola writes that “Israel owes much of its technological prowess to the country’s near-constant state of war. The nation spent $15.2 billion, or roughly 6 percent of gross domestic product, on defense last year, according to data from the International Institute of Strategic Studies, a U.K. think-tank. That’s double the proportion of defense spending to GDP for the U.S., a longtime Israeli ally. If there’s one thing the U.S. Congress can agree on these days, it’s continued support for Israel’s defense technology. Legislators approved $225 million in emergency spending for Iron Dome on Aug. 1, and President Barack Obama signed it into law three days later.”

    [9] Nakamura (2014).

    [10] For more on this, see Eglash (2007).

    _____

    Works Cited

    • Althusser, Louis. 1977. Lenin and Philosophy. London: NLB.
    • Azoulay, Ariella. 2012. Civil Imagination: A Political Ontology of Photography. London: Verso.
• Beller, Jonathan. 2006. The Cinematic Mode of Production: Attention Economy and the Society of the Spectacle. Hanover, NH: Dartmouth College Press.
    • Beller, Jonathan. 2016. “Fragment from The Message is Murder.” Social Text 34:3. 137-152.
    • Beller, Jonathan. 2017. The Message is Murder: Substrates of Computational Capital. London: Pluto Press.
• Benjamin, Walter. 1969. Illuminations. New York: Schocken Books.
    • Black, Edwin. 2001. IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America’s Most Powerful Corporation. New York: Crown Publishers.
    • Browne, Simone. 2015. Dark Matters: On the Surveillance of Blackness. Durham: Duke University Press.
• Césaire, Aimé. 1972. Discourse on Colonialism. New York: Monthly Review Press.
    • Coppola, Gabrielle. 2014. “Traces of Israel’s Iron Dome Can Be Found in Tech Startups.” Bloomberg News (Aug 11).
    • Chun, Wendy Hui Kyong. 2004. “On Software, or the Persistence of Visual Knowledge,” Grey Room 18, Winter: 26-51.
    • Chun, Wendy Hui Kyong. 2012. In Nakamura and Chow-White (2012). 38-69.
• De Lauretis, Teresa. 1987. Technologies of Gender: Essays on Theory, Film, and Fiction. Bloomington, IN: Indiana University Press.
    • Dyer-Witheford, Nick. 2012. “Red Plenty Platforms.” Culture Machine 14.
    • Eglash, Ron. 2007. “Broken Metaphor: The Master-Slave Analogy in Technical Literature.” Technology and Culture 48:3. 1-9.
    • Federici, Sylvia. 2012. Revolution at Point Zero. PM Press.
    • Fidaner, Işık Barış. 2014. Email broadcast on ICTs and Society listserv (Aug 29).
    • Flusser, Vilém. 2000. Towards a Philosophy of Photography. London: Reaktion Books.
    • Golumbia, David. 2009. The Cultural Logic of Computation. Cambridge: Harvard University Press.
• Gordon-Reed, Annette. 1997. Thomas Jefferson and Sally Hemings: An American Controversy. Charlottesville: University of Virginia Press.
    • Harney, Stefano and Fred Moten. 2013. The Undercommons: Fugitive Planning and Black Study. Brooklyn: Autonomedia.
    • Hayles, Katherine N. 2016. “The Cognitive NonConscious.” Critical Inquiry 42:4. 783-808.
    • Jameson, Fredric. 1981. The Political Unconscious: Narrative as a Socially Symbolic Act. Ithaca: Cornell University Press.
• Hofstadter, Douglas. 1979. Gödel, Escher, Bach: An Eternal Golden Braid. New York: Penguin Books.
    • Kubrick, Stanley, dir. 1968. 2001: A Space Odyssey. Film.
    • Kubrick, Stanley, dir. 1971. A Clockwork Orange. Film.
    • Liu, Lydia He. 2010. The Freudian Robot: Digital Media and the Future of the Unconscious. Chicago: University of Chicago Press.
    • Luhmann, Niklas. 1989. Ecological Communication. Chicago: University of Chicago Press.
    • McPherson, Tara. 2012. “U.S. Operating Systems at Mid-Century: The Intertwining of Race and UNIX.” In Nakamura and Chow-White (2012). 21-37.
• Maturana, Humberto and Francisco Varela. 1992. The Tree of Knowledge: The Biological Roots of Human Understanding. Boston: Shambhala.
    • Malone, Dumas. 1948. Jefferson The Virginian. Boston: Little, Brown and Company.
• Marx, Karl and Frederick Engels. 1986. Collected Works, Vol. 28. New York: International Publishers.
• Marx, Karl and Frederick Engels. 1978. The German Ideology. In The Marx-Engels Reader, 2nd edition. Edited by Robert C. Tucker. New York: Norton.
    • Mulvey, Laura. 1975. “Visual Pleasure and Narrative Cinema.” Screen 16:3. 6-18.
    • Nakamura, Lisa. 2014. “Indigenous Circuits.” Computer History Museum (Jan 2).
    • Nakamura, Lisa and Peter A. Chow-White, eds. 2012. Race After the Internet. New York: Routledge.
• Siegert, Bernhard. 2013. “Cultural Techniques: Or the End of the Intellectual Postwar Era in German Media Theory.” Theory, Culture and Society 30. 48-65.
• Shannon, Claude. 1948. “A Mathematical Theory of Communication.” The Bell System Technical Journal. July: 379-423; October: 623-656.
• Shannon, Claude and Warren Weaver. 1971. The Mathematical Theory of Communication. Urbana: University of Illinois Press.
    • Rafael, Vicente. 2000. White Love: And Other Events in Filipino History. Durham: Duke University Press.
    • Ramjit, Alana. 2014. “The Lorem Ipsum Project.” Unpublished Manuscript.
• “Rebooting the Cosmos: Is the Universe the Ultimate Computer? [Replay].” 2011. In-Depth Report: The World Science Festival 2011: Encore Presentations and More. Scientific American.
    • Robinson, Cedric. 1983. Black Marxism: The Making of the Black Radical Tradition. Chapel Hill: The University of North Carolina Press.
    • Tadiar, Neferti. 2016. “City Everywhere.” Theory, Culture and Society 33:7-8. 57-83.
    • Virno, Paolo. 2004. A Grammar of the Multitude. New York: Semiotext(e).
    • Vismann, Cornelia. 2013. “Cultural Techniques and Sovereignty.” Theory, Culture and Society 30. 83-93.
• Wiener, Norbert. 1989. The Human Use of Human Beings: Cybernetics and Society. London: Free Association Books.
    • Wilson, Andrew Norman, dir. 2011. “Workers Leaving the Googleplex.” Video.

     

  • Joseph Erb, Joanna Hearne, and Mark Palmer with Durbin Feeling — Origin Stories in the Genealogy of Cherokee Language Technology

    Joseph Erb, Joanna Hearne, and Mark Palmer with Durbin Feeling — Origin Stories in the Genealogy of Cherokee Language Technology

    Joseph Erb, Joanna Hearne, and Mark Palmer with Durbin Feeling [*]

    The intersection of digital studies and Indigenous studies encompasses both the history of Indigenous representation on various screens, and the broader rhetorics of Indigeneity, Indigenous practices, and Indigenous activism in relation to digital technologies in general. Yet the surge of critical work in digital technology and new media studies has rarely acknowledged the centrality of Indigeneity to our understanding of systems such as mobile technologies, major programs such as Geographic Information Systems (GIS), digital aesthetic forms such as animation, or structural and infrastructural elements of hardware, circuitry, and code. This essay on digital Indigenous studies reflects on the social, historical, and cultural mediations involved in Indigenous production and uses of digital media by exploring moments in the integration of the Cherokee syllabary onto digital platforms. We focus on negotiations between the Cherokee Nation’s goal to extend their language and writing system, on the one hand, and the systems of standardization upon which digital technologies depend, such as Unicode, on the other.  The Cherokee syllabary is currently one of the most widely available North American Indigenous language writing systems on digital devices. As the language has become increasingly endangered, the Cherokee Nation’s revitalization efforts have expanded to include the embedding of the Cherokee syllabary in the Windows Operating System, Google search engine, Gmail, Wikipedia, Android, iPhone and Facebook.

Figure 1. Wikipedia in Cherokee

    With the successful integration of the syllabary onto multiple platforms, the digital practices of Cherokees suggest the advantages and limitations of digital technology for Indigenous cultural and political survivance (Vizenor 2000).

    Our collaboration has resulted in a multi-voiced analysis across several essay sections. Hearne describes the ways that engaging with specific problems and solutions around “glitches” at the intersection of Indigenous and technological protocols opens up issues in the larger digital turn in Indigenous studies. Joseph Erb (Cherokee) narrates critical moments in the adoption of the Cherokee syllabary onto digital devices, drawn from his experience leading this effort at the Cherokee Nation language technology department. Connecting our conceptual work with community history, we include excerpts from an interview with Cherokee linguist Durbin Feeling—author of the Cherokee-English Dictionary and Erb’s close collaborator—about the history, challenges, and possibilities of Cherokee language technology use and experience. In the final section, Mark Palmer (Kiowa) presents an “indigital” framework to describe a range of possibilities in the amalgamations of Indigenous and technological knowledge systems (2009, 2012). Fragmentary, contradictory, and full of uncertainties, indigital constructs are hybrid and fundamentally reciprocal in orientation, both ubiquitous and at the same time very distant from the reality of Indigenous groups encountering the digital divide.

    Native to the Device

    Indigenous people have always been engaged with technological change. Indigenous metaphors for digital and networked space—such as the web, the rhizome, and the river—describe longstanding practices of mnemonic retrieval and communicative innovation using sign systems and nonlinear design (Hearne 2017). Jason Lewis describes the “networked territory” and “shared space” of digital media as something that has “always existed for Aboriginal people as the repository of our collected and shared memory. That hardware technology has made it accessible through a tactile regime in no way diminishes its power as a spiritual, cosmological, and mythical ‘realm’” (175). Cherokee scholar (and former programmer) Brian Hudson includes Sequoyah in a genealogy of Indigenous futurism as a representative of “Cherokee cyberpunk.” While retaining these scholars’ understanding of the technological sophistication and adaptability of Indigenous peoples historically and in the present, taking up a heuristic that recognizes the problems and disjunction between Indigenous knowledge and digital development also enables us to understand the challenges faced by communities encountering unequal access to computational infrastructures such as broadband, hardware, and software design. Tracing encounters between the medium specificity of digital devices and the specificity of Indigenous epistemologies returns us to the incommensurate purposes of the digital as both a tool for Indigenous revitalization and as a sociopolitical framework that makes users do things according to a generic pattern.

    The case of the localization of Cherokee on digital devices offers insights into the paradox around the idea of the “digital turn” explored in this b2o: An Online Journal special issue—that on the one hand, the digital turn “suggests that the objects of our world are becoming better versions of themselves. On the other hand, it suggests that these objects are being transformed so completely that they are no longer the things they were to begin with.” While the former assertion is reflected in the techno-positive orientation of much news coverage of the Cherokee adoption on the iPhone (Evans 2011) as well as other Indigenous initiatives such as video game production (Lewis 2014), the latter description of transformation beyond recognizable identity resembles the goals of various historical programs of assimilation, one of the primary “logics of elimination” that Patrick Wolfe identifies in his seminal essay on settler colonialism.

    The material, representational, and participatory elements of digital studies have particular resonance in Indigenous studies around issues of land, language, political sovereignty, and cultural practice. In some cases the digital realm hosts or amplifies the imperial imaginaries pre-existing in the mediascape, as Jodi Byrd demonstrates in her analyses of colonial narratives—narratives of frontier violence in particular—normalized and embedded in the forms and forums of video games (2015). Indigeneity is also central to the materialities of global digitality in the production and dispensation of the machines themselves. Internationally, Indigenous lands are mined for minerals to make hardware and targeted as sites for dumping used electronics. Domestically in the United States, Indigenous communities have provided the labor to produce delicate circuitry (Nakamura 2014), even as rural, remote Indigenous communities and reservations have been sites of scarcity for digital infrastructure access (Ginsburg 2008). Indigenous communities such as those in the Cherokee Nation are rightly on guard against further colonial incursions, including those that come with digital environments. Communities have concerns about language localization projects: how are we going to use this for our own benefit? If it’s not for our benefit, then why not compute in the colonial language? Are they going to steal our medicine? Is this a further erosion of what we have left?

    Lisa Nakamura (2013) has taken up the concept of the glitch as a way of understanding online racism, first as it is understood by some critics as a form of communicative failure or “glitch racism,” and second as the opposite, “not as a glitch but as part of the signal,” an “effect of internet on a technical level” that comprises “a discursive act in itself, not an obstruction to that act.”  In this article we offer another way of understanding the glitch as a window onto the obstacles, refusals, and accommodations that take place at an infrastructural level in Indigenous negotiations of the digital. Olga Goriunova and Alexei Shulgin define “glitch” as “an unpredictable change in the system’s behavior, when something obviously goes wrong” (2008, 110).

    A glitch is a singular dysfunctional event that allows insight beyond the customary, omnipresent, and alien computer aesthetics. A glitch is a mess that is a moment, a possibility to glance at software’s inner structure, whether it is a mechanism of data compression or HTML code. Although a glitch does not reveal the true functionality of the computer, it shows the ghostly conventionality of the forms by which digital spaces are organized. (114)

    Attending to the challenges that arise in Indigenous-settler negotiations of structural obstacles—the work-arounds, problem-solving, false starts, failures of adoption—reveals both the adaptations summoned forth by the standardization built into digital platforms and the ways that Indigenous digital activists have intervened in digital homogeneity. By making visible the glitches—ruptures and mediations of rupture—in the granular work of localizing Cherokee, we arrive again and again at the cultural and political crossroads where Indigenous boundaries become visible within infrastructures of settler protocol (Ginsburg 1991). What has to be done, what has to be addressed, before Cherokee speakers can use digital devices in their own language and their own writing system, and what do those obstacles reveal about the larger orientation of digital environments? In particular, new digital platforms channel adaptations towards the bureaucratization of language, dictating the direction of language change through conventions like abbreviations, sorting requirements, parental controls and autocorrect features.

Within the framework of computational standardization, Indigenous distinctiveness—Indigenous sovereignty itself—becomes a glitch. We can see instantiations of such glitches arising from moments of politicized refusal, as captured in Mohawk scholar Audra Simpson’s insight that “a good is not a good for everyone” (1). Yet we can also see moments when Indigenous refusals “to stop being themselves” (2) lead to strategies of negotiation and adoption, and even, paradoxically, to a politics of accommodation (itself a form of agency) in the uptake of digital technologies. Michelle Raheja takes up the intellectual and aesthetic iterations of sovereignty to theorize Indigenous media production in terms of “visual sovereignty,” which she defines as “the space between resistance and compliance” within which Indigenous media-makers “revisit, contribute to, borrow from, critique, and reconfigure” film conventions, while still “operating within and stretching the boundaries of those same conventions” (1161). We suggest that like Indigenous self-representation on screen, Indigenous computational production occupies a “space between resistance and compliance,” a space which is both sovereigntist and, in its lived reality at the intersection of software standardization and Indigenous language precarity, glitchy.

    Our methodology, in the case study of Cherokee language technology development that follows, might be called “glitch retrieval.”  We focus on pulse points, moments, stories and small landmarks of adaptation, accommodation, and refusal in the adoption of Sequoyah’s Cherokee syllabary to mobile digital devices. In the face of the wave of publicity around digital apps (“there’s an app for that!”), the story of the Cherokee adoption is not one of appendage in the form of downloadable apps but rather the localization of the language as “native to the device.” Far from being a straightforward development, the process moved in fits and starts, beset with setbacks and surprises, delineating unique minority and endangered Indigenous language practices within majoritarian protocols. To return to Goriunova and Shulgin’s definition, we explore each glitch as an instance of “a mess” that is also “a moment, a possibility,” one that “allows insight” (2008). Each of the brief moments narrated below retrieves an intersection of problem and solution that reveals Indigenous presence as well as “the ghostly conventionality of the forms by which digital spaces are organized” (114). Retrieving the origin stories of Cherokee language technology—the stories of the glitches—gives us new ways to see both the limits of digital technology as it has been imagined and built within structures of settler colonialism, and the action and shape of Indigenous persistence through digital practices.

    Cherokee Language Technology and Mobile Devices

Each generation is crucial to the survival of Indigenous languages. Adaptation, and especially adaptation to new technologies, is an important factor in Indigenous language persistence (Hermes et al 2016). The Cherokee, one of the largest of the Southeast tribes, were early adopters of language technologies, beginning with the syllabary writing system developed by Sequoyah between 1809 and 1820 and presented to the Cherokee Council in 1821. The circumstances of the development of the Cherokee syllabary are nearly unique in that 1) the writing system originated from the work of one man, in the space of a single decade; and 2) it was initiated and ultimately widely adopted from within the Indigenous community itself rather than being developed and introduced by non-Native missionaries, linguists or other outsiders.

    Unlike alphabetic writing based on individual phonemes, a syllabary consists of written symbols indicating whole syllables, which can be more easily developed and learned than alphabetic systems due to the stability of each syllable sound. The Cherokee Syllabary system uses written characters that represent consonant and vowel sounds, such as “Ꮉ”, which is the sound of “ma,” and Ꮀ, for the sound “ho.” The original writing of Sequoyah was done with a quill and pen, an inking process that involved cursive characters, but this handwritten orthography gave way to a block print character set for the Cherokee printing press (Cushman 2011). The Cherokee Phoenix was the first Native American newspaper in the Americas, published in Cherokee and English beginning in 1828. Since then, Cherokee people have adapted their language and writing system early and often to new technologies, from typewriters to dot matrix printers. This historical adaptation includes a millennial transformation from technologies that required training to access machines like specially-designed typewriters with Cherokee characters, to the embedding of the syllabary as a standard feature on all platforms for commercially available computers and mobile devices. Very few Indigenous languages have this level of computational integration—in part because very few Indigenous languages have their own writing systems—and the historical moments we present here in the technologization of the Cherokee language illustrate both problems and possibilities of language diversity in standardization-dependent platforms. In the following section, we offer a community-based history of Cherokee language technology in stories of the transmission of knowledge between two generations—Cherokee linguist Durbin Feeling, who began teaching and adapting the language in the 1960s, and Joseph Erb, who worked on digital language projects starting in the early 2000s—focusing on shifts in the uptake of language technology.

In the early and mid-twentieth century, churches in the Cherokee Nation were among the sites for teaching and learning Cherokee literacy. Durbin Feeling grew up speaking Cherokee at home, and learned to read the language as a boy by following along as his father read from the Cherokee New Testament. He became fluent in writing the language while serving in the US military in Vietnam, when he would read the Book of Psalms in Cherokee. His curiosity about the language grew as he continued to notice the differences between the written Cherokee usage of the 1800s—codified in texts like the New Testament—and the Cherokee spoken by his community in the 1960s. Beginning with the bilingual program at Northeastern State University (translating syllabic writing into phonetic writing), Feeling worked on Cherokee language lessons and a Cherokee dictionary, for which he translated words from a Webster’s dictionary, recording them on tape and writing them out on index cards by hand. Feeling recalls that in the early 1970s,

    Back then they had reel to reel recorders and so I asked for one of those and talked to anybody and everybody and mixed groups, men and women, men with men, women with women. Wherever there were Cherokees, I would just walk up and say do you mind if I just kind of record while you were talking, and they didn’t have a problem with that. I filled up those reel to reel tapes, five of them….I would run it back and forth every word, and run it forward and back again as many times as I had to, and then I would hand write it on a bigger card.

    So I filled, I think, maybe about five of those in a shoe box and so all I did was take the word, recorded it, take the next word, recorded it, and then through the whole thing…

    There was times the churches used to gather and cook some hog meat, you know. It would attract the people and they would just stand around and joke and talk Cherokee. Women would meet and sew quilts and they’d have some conversations going, some real funny ones. Just like that, you know? Whoever I could talk with. So when I got done with that I went back through and noticed the different kinds of sounds…the sing song kind of words we had when we pronounced something (Erb and Feeling 2016).

    The project began with handwriting in syllabary, but the dictionary used phonetics with tonal markers, so Feeling went through each of five boxes of index cards again, labeling them with numbers to indicate the height of sounds and pitches.

Feeling and his team experimented with various machines, including manual typewriters with syllabary keys (manufactured by the well-known Hermes typewriter company), new fonts using a dot matrix printer, and electric typewriters with the Cherokee syllabary on the type ball—the typist had to memorize the location of all 85 keys. Early attempts to build computer programs allowing users to type in Cherokee resulted in documents that were confined to one computer and could not easily be shared except by printing them.

Figure 2. Typewriter keyboard in Cherokee (image source: authors)

    Beginning around 1990, a number of linguists and programmers with interests in Indigenous languages began working with the Cherokee, including Al Webster, who used Mac computers to create a program that, as Feeling described it, “introduced what you could do with fonts with a fontographer—he’s the one who made those fonts that were just like the old print, you know way back in the eighteen hundreds.” Then in the mid-1990s Michael Everson began working with Feeling and others to integrate Cherokee glyphs into Unicode, the primary system for software internationalization. Arising from discussions between engineers at Apple and Xerox, Unicode began in late 1987 as a project to standardize languages for computation. Although the original goal of Unicode was to encode all world writing systems, major languages came first. Michael Everson’s company Evertype has been critical to broader language inclusion, encoding minority and Indigenous languages such as Cherokee, which was added to the Unicode Standard in 1999 with the release of version 3.0.
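To make concrete what inclusion in the Unicode Standard involves, here is a minimal Python sketch (standard library only) that inspects the two syllabary characters cited earlier; the Cherokee block introduced in version 3.0 begins at code point U+13A0.

```python
import unicodedata

# The two syllabary characters discussed above: "ma" and "ho".
for ch in ["Ꮉ", "Ꮀ"]:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

# Any code point in the Cherokee block can be detected without reference
# to fonts or platforms -- the point of a shared encoding standard.
def is_cherokee(ch: str) -> bool:
    return 0x13A0 <= ord(ch) <= 0x13FF

print(is_cherokee("Ꮉ"), is_cherokee("m"))
```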

Having begun language work with handwritten index cards in the 1960s, and later with typewriters available to only one or two people with specialized skills, Feeling saw Cherokee adopted into Unicode in 1999 and integrated into Apple computer operating systems in 2003. When Apple and the Cherokee Nation publicized the new localization of Cherokee on the iPhone (iOS 4.1) in December 2010, the story was picked up internationally, as well as locally among Cherokee communities. By 2013, users could text, email, and search Google in the syllabary on smartphones and laptops, devices that came with the language already embedded as a standardized feature and that were available at chain stores like Walmart. This development involved different efforts at multiple locations, sometimes simultaneously, and over time. While Apple added Unicode-compliant Cherokee glyphs to the Macintosh in 2003, the Cherokee Nation, as a government entity, used PC computers rather than Macs. PCs had yet to implement Unicode-compliant Cherokee fonts, so there was little access to the writing system on the Nation’s computers and no known community adoption. At the time, the Cherokee Nation was already using an adapted English font that displayed Cherokee characters but was not Unicode compliant.

One of the first attempts to introduce a Unicode-compliant Cherokee font and keyboard came with the Indigenous Language Institute conference at Northeastern State University in Oklahoma in 2006, where the Institute made the font available on flash drives and provided training to language technologists at the Cherokee Nation. However, the program was not widely adopted due to anticipated wait times in getting the software installed on Cherokee Nation computers. Further, the majority of users did not understand the difference between the new Unicode-compliant fonts and the non-Unicode fonts they were already using. The non-Unicode Cherokee font and keyboard adapted the same keystrokes, and looked the same on screen as the Unicode-compliant system, but certain keys (especially those for punctuation) produced glyphs that would not transfer between computers, so files could not be sent and re-opened on another computer without requiring extensive corrections. The value of Unicode compliance lies in the interoperability to move between systems, the crucial first step towards integration with mobile devices, which are more useful in remote communities than desktop computers. Addition to Unicode is the first of five steps—followed by development of CLDR data, an open source font, a keyboard layout design, and a word frequency list—before companies can encode a new language into their platforms for computer operating systems. These five steps act as a space of exchange between Indigenous writing systems and digital platforms, within which differences are negotiated.
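The difference between the legacy "adapted English font" and Unicode compliance can be sketched in a few lines. In the legacy approach a document stores ordinary Latin code points that merely look like syllabary when one custom font is installed; a Unicode-compliant document stores the Cherokee code points themselves, so the text survives transfer between machines regardless of which fonts they carry. The legacy string below is a hypothetical stand-in, not the actual pre-Unicode Cherokee Nation font mapping.

```python
# Illustrative sketch only: the legacy mapping shown is hypothetical.
legacy_text = "Wi"        # Latin code points that a custom font drew as syllabary
unicode_text = "ᏌᏊ"       # "sa-quu" ("one"), stored as real Cherokee code points

def codepoints(s: str) -> str:
    return " ".join(f"U+{ord(c):04X}" for c in s)

print("legacy :", codepoints(legacy_text))   # meaning is lost without the one font
print("unicode:", codepoints(unicode_text))  # meaning travels with the bytes

# Unicode text round-trips through any UTF-8 channel unchanged:
assert unicode_text.encode("utf-8").decode("utf-8") == unicode_text
```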

    CLDR

The Common Locale Data Repository (CLDR) is a set of key terms for localization, including months, days, years, countries, and currencies, as well as their abbreviations. This core information is localized on the iPhone and becomes the base from which calendars and other native and external apps draw on the device. Many Indigenous languages, including Cherokee, don’t have bureaucratic language, such as abbreviations for days of the week, and need to create it—the Cherokee Nation’s Translation Department and Language Technology Department worked together to create new Cherokee abbreviations for calendrical terms.

Figure 3. Weather in Cherokee (image source: authors)
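As a hedged illustration of where this CLDR data ends up, the Python babel library (which bundles CLDR data) can be queried for the Cherokee ("chr") locale; the sketch assumes the installed CLDR snapshot includes the Cherokee data contributed through the process described above.

```python
# Sketch: reading Cherokee calendrical terms out of CLDR via babel.
# Assumes `pip install babel` and bundled CLDR data for the "chr" locale.
from babel import Locale
from babel.dates import get_day_names, get_month_names

chr_locale = Locale.parse("chr")

# Abbreviated day-of-week names: the bureaucratic short forms the Translation
# and Language Technology departments had to invent.
print(get_day_names(width="abbreviated", locale=chr_locale))
print(get_month_names(width="wide", locale=chr_locale))
```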

    Open Source Font

Small communities don’t have budgets to purchase fonts for their languages, and such fonts also aren’t financially viable for commercial companies to develop, so the challenge for minority language activists is to find sponsorship for the creation of an open source font that will work across systems, available for anyone to adopt into any computer or device. Working with Feeling, Michael Everson developed the open source font for Cherokee. Plantagenet Cherokee (designed by Ross Mills) was the first font to bring Cherokee into Windows (Vista) and Mac OS X (Panther). If there is no font on a Unicode-compliant device—that is, the device does not have the language glyphs embedded—then users will see a string of boxes, the default filler for Unicode points that are not supported in the system.
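One way to check whether a given font will display syllabary or the "string of boxes" fallback is to inspect its character map. The sketch below uses the fontTools library; the font filename is hypothetical.

```python
# Sketch: does this font contain the Cherokee glyphs, or will users see boxes?
# Requires `pip install fonttools`; the font path is hypothetical.
from fontTools.ttLib import TTFont

font = TTFont("PlantagenetCherokee.ttf")   # hypothetical path to a font file
cmap = font.getBestCmap()                  # code point -> glyph name mapping

cherokee = range(0x13A0, 0x13F5)           # the 85 characters encoded in Unicode 3.0
missing = [cp for cp in cherokee if cp not in cmap]

print(f"{len(missing)} of {len(cherokee)} Cherokee code points missing from this font")
```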

    Keyboard Layout

    New languages need an input method, and companies generally want the most widely used versions made available in open source. Cherokee has both a QWERTY keyboard, which is a phonetically-based Cherokee language keyboard, and a “Cherokee Nation” layout using the syllabary. Digital keyboards for mobile technologies are more complicated to create than physical keyboards and involve intricate collaboration between language specialists and developers. When developing the Cherokee digital keyboard for the iPhone, Apple worked in conjunction with the Translation Department and Language Technology Department at the Cherokee Nation, experimenting with several versions to accommodate the 85 Cherokee characters in the syllabary without creating too many alternate keyboards (the Cherokee Nation’s original involved 13 keyboards, whereas English has 3). Apple ultimately adapted a keyboard that involved two different ways of typing on the same keyboard, combining pop-up keys and an autocomplete system.

Figure 4. Mobile device keyboard in Cherokee (image source: authors)

    Word Frequency List

The word frequency list is a standard requirement for most operating systems to support autocorrect spelling and other tasks on digital devices. Programmers need a word database, in Unicode, large enough to adequately source programs such as autocomplete. In order to generate the many thousands of words needed to seed the database, the Cherokee Nation had to provide Cherokee documents typed in the Unicode version of the language. But as with other languages, there were many older attempts to embed Cherokee in typewriters and computers that predate Unicode, leading to a kind of catch-22: the Cherokee Nation needed to already have documents produced in Unicode in order to get the language into computer operating systems and adopted for mobile technologies, but it didn’t have many documents in Unicode because the language hadn’t yet been integrated into those Unicode-compliant systems. In the end the CN employed Cherokee speakers to create new documents in Unicode—re-typing the Cherokee Bible and other documents—to create enough words for a database. Their efforts were complicated by the existence of multiple versions of the language and spelling, and by previous iterations of language technology and infrastructure.
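A minimal sketch of the kind of frequency list the vendors required: it assumes a hypothetical directory of UTF-8 text files already re-typed in the Unicode syllabary.

```python
# Sketch: building a word-frequency list from Unicode Cherokee documents.
# The corpus directory is hypothetical.
import re
from collections import Counter
from pathlib import Path

def cherokee_words(text: str):
    # Runs of characters from the Cherokee block (U+13A0-U+13FF) count as words.
    return re.findall(r"[\u13A0-\u13FF]+", text)

counts = Counter()
for path in Path("cherokee_corpus").glob("*.txt"):
    counts.update(cherokee_words(path.read_text(encoding="utf-8")))

# The most frequent words seed autocomplete and spell-checking on the device.
for word, n in counts.most_common(20):
    print(n, word)
```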

    Translation

    Many of the English language words and phrases that are important to computational concepts, such as “security,” don’t have obvious equivalents in Cherokee (or as Feeling said, “we don’t have that”). How does one say “error message” in Cherokee? The CN Translation Department invented words—striving for both clarity and agreement—in order to address coding concepts for operating systems, error messages, and other phrases (which are often confusing even in English) as well as more general language such as the abbreviations discussed above. Feeling and Erb worked together with elders, CN staff, and professional Cherokee translators to invent descriptive Cherokee words for new concepts and technologies, such as ᎤᎦᏎᏍᏗ (u-ga-ha-s-di) or “to watch over something” for security; ᎦᎵᏓᏍᏔᏅ ᏓᎦᏃᏣᎳᎬᎯ (ga-li-da-s-ta-nv da-ga-no-tsa-la-gv-hi) or “something is wrong” for error message; ᎠᎾᎦᎵᏍᎩ ᎪᏪᎵ (a-na-ga-li-s-gi go-we-li) or “lightning paper” for email; and ᎠᎦᏙᎥᎯᏍᏗ ᎠᏍᏆᏂᎪᏗᏍᎩ (a-ga-no-v-hi-s-di a-s-qua-ni-go-di-s-gi) or “knowledge keeper” for computers. For English words like “luck” (as in “I’m feeling lucky,” a concept which doesn’t exist in Cherokee), they created new idioms, such as “ᎡᎵᏊ ᎢᎬᏱᏊ ᎠᏆᏁᎵᏔᏅ ᏯᏂᎦᏛᎦ” (e-li-quu i-gv-yi-quu a-qua-ne-li-ta-na ya-ni-ga-dv-ga) or “I think I’ll find it on the first try.”

    Sorting

    When the Unicode-compliant Plantagenet Cherokee font was first introduced in Microsoft Windows OS in Vista (2006), the company didn’t add Cherokee to the sorting function (the ability to sort files by numeric or alphabetic order) in its system. When Cherokee speakers named files in the language, they arrived at the limits of the language technology. These limits determine parameters in a user’s personal computing, the point at which naming files in Cherokee or keeping a computer calendar in Cherokee become forms of language activism that reveal the underlying dominance of English in the deeper infrastructure of computational systems. When a user sent a file with Cherokee characters, such as “ᏌᏊ” (sa-quu, or “one”) and “ᏔᎵ” (ta-li or “two”), receiving computers could not put the file into one place or another because the core operating system had no sorting order for the Unicode points of Cherokee, and the computer would crash. Sorting orders in Cherokee were not added to Microsoft until Windows 8.
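Locale-aware collation is exactly what was missing before Windows 8. As a sketch, the PyICU binding to ICU (whose collation data also derives from CLDR) can order syllabary strings, assuming the installed ICU data covers the Cherokee ("chr") locale.

```python
# Sketch: sorting Cherokee filenames with a locale-aware collator.
# Requires `pip install PyICU` and ICU data covering Cherokee ("chr").
import icu

collator = icu.Collator.createInstance(icu.Locale("chr"))

filenames = ["ᏔᎵ", "ᏌᏊ"]   # "ta-li" (two) and "sa-quu" (one), from the essay
print(sorted(filenames, key=collator.getSortKey))

# Without any collation order for these code points, early systems had nowhere
# to place such names at all -- the failure described above.
```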

    Parental Controls

    Part of the protocol for operating systems involves standard protections like parental controls, the ability to automatically censor inappropriate language. In order to integrate Cherokee into an OS, companies needed lists of offensive language or “curse words” that could be flagged in the parental restrictions settings of their operating systems. Meeting the needs of these protocols was difficult linguistically and culturally, because Cherokee does not have the same cultural taboos as English around words for sexual acts or genitals; most Cherokee words are “clean words,” with offensive speech communicated through context rather than the words themselves. Also, because the Cherokee language involves tones, inappropriate meanings can arise from alternate tonal emphases (and tone is not reflected in the syllabary). Elder Cherokee speakers found it culturally difficult to speak aloud those elements of Cherokee speech that are offensive, while non-Cherokee-speaking computer company employees who had worked with other Indigenous languages did not always understand that not all Indigenous languages are alike: “curse words” in one language are not inappropriate in others. Finally, almost all of the potentially offensive Cherokee words that technology companies sought not only failed to carry the offensive connotations of their English translations but also carried dual or multiple meanings, so that blocking one would also block a common word with no inappropriate meaning.
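
    A simple filter makes the overblocking problem concrete: an exact-match blocklist cannot see context or tone, so flagging a word that carries multiple meanings suppresses every use of it. The sketch below uses arbitrary placeholder strings, not an actual Cherokee word list.

```python
# Placeholder blocklist: one token standing in for a word that carries both an
# offensive and an entirely ordinary meaning (not real Cherokee data). Because
# the syllabary does not mark tone, the filter cannot tell the senses apart.
BLOCKLIST = {"ᎣᎦᎵ"}

def filter_message(text):
    """Mask any blocklisted token, regardless of the sense the speaker intended."""
    return " ".join("***" if token in BLOCKLIST else token
                    for token in text.split())

# Both the offensive sense and the harmless sense of the token are lost.
print(filter_message("ᎣᎦᎵ ᎪᏪᎵ"))
```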

    Mapping and Place Names

    One of the difficulties for Cherokees working to create Cherokee-language names for countries and territories was the Cherokee Nation’s own exclusion from those lists. Speakers translated the names of even tiny nations into Cherokee for lists and maps in which the Cherokee Nation itself did not appear. Discussions of terminology for countries and territories were frustrating because the Cherokee themselves were not included, making the colonial erasure of Indigenous nationhood and territories visible to Cherokee speakers as they did the translations. Erb is currently working with Google Maps to revise their digital maps to show federally recognized tribal nations’ territories.

    Passwords and Security

    One of the first attempts to introduce Unicode-compliant Cherokee on computers for the Immersion School, ᏣᎳᎩ ᏧᎾᏕᎶᏆᏍᏗ (tsa-la-gi tsu-na-de-lo-qua-s-di), involved problems and glitches that temporarily set back the adoption of Unicode systems. The CN Language Technology Department added the Unicode-compliant font and keyboards to an Immersion School curriculum developer’s computer. At the time, however, computers could only accept English passwords. After the curriculum developer had been typing in Cherokee and left their desk, the computer automatically logged off (auto-logoff is standard security for government computers). Temporarily locked out, they could not switch the keyboard back to English to type the English password. Other teachers and translators heard about this “lockout,” and most decided against installing the new Unicode-compliant fonts on their computers. Glitches like these slowed the rollout of Unicode-compliant fonts in the short term.

    Community Adoption

    When computers began to enter Cherokee communities, Feeling recalls his own hesitation about social media sites like Facebook: “I was afraid to use that.” But when the 2011 election for Chief of the Nation was contested and social media provided faster updates than traditional media, many community members signed up for Facebook accounts so they could keep abreast of the latest news about the election.

    Figure 5. Facebook in Cherokee (image source: authors)

    Similarly, when Cherokee first became available on the iPhone with iOS 4.1, many Cherokee people were reluctant to use it. Feeling says he was “scared that it wouldn’t work, like people would get mad or something.” But older speakers wanted to communicate with family members in Cherokee, and they provided the pressure for others to begin using mobile devices in the language. Feeling’s older brother, also a fluent speaker, bought an iPhone just to text with his brother in Cherokee, because his Android phone wouldn’t properly display the language.

    In 2009, the Cherokee Nation introduced Macintosh computers at a 1:1 computer-to-student ratio for the second and third grades of the Cherokee Immersion School, and gave students cellular air cards for wireless internet service at home through cell towers (because internet access was unavailable in many rural Cherokee homes). Up to this point the students spoke Cherokee at school but rarely used the language outside of school or spoke it at home. With these tools, students could, and did, get on FaceTime and iChat from home and in other settings to talk with classmates in Cherokee. For some parents, it was the first time they had heard their children speaking Cherokee at home. This success convinced many in the community of the worth of Cherokee language technologies for digital devices.

    The ultimate community adoption of Cherokee in digital forms (computers, mobile devices, search engines, and social media) came when the technologies were most applicable to community needs. What worked was not clunky modems for desktops but iPhones that could function in communities without internet infrastructure. The story of Cherokee adoption into digital devices illustrates the pull towards English-language structures of standardization for Indigenous and minority language speakers, who are faced with challenges of skill acquisition and adaptation; language development histories that involve successive versions of orthographies, spellings, neologisms, and technologies; and problems of abstraction from community context that accompany codifying practices. Facing the precarity of an eroding language base and the limitations and possibilities of digital devices, the Cherokee and other Indigenous communities have strategically adapted hardware and software for cultural and political survivance. Durbin Feeling describes this adaptation as a Cherokee trait: “It’s the type of people that are curious or are willing to learn. Like we were in the old times, you know? I’m talking about way back, how the Cherokees adapted to the English way…. I think it’s those kind of people that have continued in a good way to use and adapt to whatever comes along, be it the printing press, typewriters, computers, things like that. … Nobody can take your language away. You can give it away, yeah, or you can let it die, but nobody can take it away.”

    Indigital Frameworks

    Our case study reveals important processes in the integration of Cherokee knowledge systems with the information and communication technologies that have transformed notions of culture, society and space (Brey 2003). This kind of creative fusion is nothing new—Indigenous peoples have been encountering and exchanging with other peoples from around the world and adopting new materials, technologies, ideas, standards, and languages to meet their own everyday needs for millennia. The emerging concept indigital describes such encounters and collisions between the digital world and Indigenous knowledge systems, as highlighted in The Digital Arts and Humanities (Travis and von Lünen 2016). Indigital describes the hybrid blending or amalgamation of Indigenous knowledge systems including language, storytelling, calendar making, and song and dance, with technologies such as computers, Internet interfaces, video, maps, and GIS (Palmer 2009, 2012, 2013, 2016). Indigital constructs are forms of what Bruno Latour calls technoscience (1987), the merging of science, technology, and society—but while Indigenous peoples are often left out of global conversations regarding technoscience, the indigital framework attempts to bridge such conversations.

    Indigital constructs exist because knowledge systems like language are open, dynamic, and ever-changing; are hybrid as two or more systems mix, producing a third; require the sharing of power and space, which can lead to reciprocity; and are simultaneously everywhere and nowhere (Palmer 2012). Palmer associates indigital frameworks with Indigenous North Americans and the mapping of Indigenous lands by or for Indigenous peoples using maps and GIS (2009; 2012; 2016). GIS is digital mapping and database software used for collecting, manipulating, analyzing, and mapping spatial phenomena. Indigenous languages, place-names, and sacred sites often converge with GIS, resulting in indigital geographic information networks. The indigital framework, however, can be applied to any encounter and exchange involving Indigenous peoples, technologies, and cultures.

    First, indigital constructs emerge locally, often when individuals or groups of individuals adopt and experiment with culture and technology within spaces of exchange, as happens in the moments of challenge and success in the integration of Cherokee writing systems to digital devices outlined in this essay. Within spaces of exchange, cultural systems like language and technology do not stand alone as dichotomous entities. Rather, they merge together creating multiplicity, uncertainty, and hybridization. Skilled humans, typewriters, index cards, file cabinets, language orthographies, Christian Bibles, printers, funding sources, transnational corporations, flash drives, computers, and cell-phones all work to stabilize and mobilize the digitization of the Cherokee language. Second, indigital constructs have the potential to flow globally; Indigenous groups and communities tap into power networks constructed by global transnational corporations, like Apple, Google, or IBM. Apple and Google are experts at creating standardized computer designs while connecting with a multitude of users. During negotiations with Indigenous communities, digital technologies are transformative and can be transformed. Finally, indigital constructs introduce different ways that languages can be represented, understood, and used. Differences associated with indigital constructs include variations in language translations, multiple meanings of offensive language, and contested place-names. Members of Indigenous communities have different experiences and reasons for adopting or rejecting the use of indigital constructs in the form of select digital devices like personal computers and cell-phones.

    One hopeful aspect of this process is the fact that Indigenous knowledge systems and digital technologies are combinable. The idea of combinability is based on the convergent nature of digital technologies and the creative intention of the artist-scientist. Electronic technologies enable new forms from such combinations, like Cherokee language keyboards, Kiowa story maps and GIS, or Māori language dictionaries. Digital recordings of community members or elders telling important stories that hold lessons for future generations are becoming more widely available, made with audio or visual devices or a combination of both formats. Digital prints of maps can be easily carried to roundtables for discussion about the environment (Palmer 2016), with audiovisual images edited on digital devices, uploaded or downloaded to other digital devices, and eventually connected to websites. The mapping of place-names, the creation of Indigenous language keyboards, and the integration of stories into GIS require standardization, yet those standards are often defined by technocrats far removed from Indigenous communities, with little input from community members and elders. Whatever the intention of the elders telling the story or the digital artist creating the construction, this is an opportunity for the knowledge system and its accompanying information to be shared.

    Ultimately, how do local negotiations on technological projects influence final designs and representations? Indigital constructions (and spaces) are hybrid and require mixing at least two things to create a new third construct or third space (Bhabha 2006). The creation of a new Cherokee bureaucratic language to meet the iPhone CLDR requirements for representing calendar elements, through negotiations between Cherokee language specialists and computer language specialists, resulted in hybrid space-times: a hybrid calendar shared as a form of Cherokee-constructed technoscience. The same process applied to the development of specialized and now standardized Cherokee fonts and keyboards for the iPhone. A question for future research might be how much Unicode standardization transforms the Cherokee language in terms of meaning and understanding. What elements of Cherokee are altered, and how are the new constructs interpreted by community members? How might Cherokee fonts and keyboards contribute to the sustainability of Indigenous culture and put language into practice?

    Survival of indigital constructs requires reciprocity between systems. Indigital constructions are not set up as one-way flows of knowledge and information. Rather, indigital constructions are spaces for negotiation, featuring the ideas and thoughts of the participants. Reciprocity in this sense means cross-cultural exchange on equal footing, because any party holding too much power will consume a rights-based approach to building bridges among all participants. One-way flows of knowledge are revealed when Cherokee or other Indigenous informants providing place-names to Apple, Microsoft, or Google realize that their own geographies are not represented. They are erased from the maps. Indigenous geographies are often trivialized as local, vernacular, and particular to a culture, which goes against the grain of technoscientific standardization and universalization. The trick of indigital reciprocity is shared power, networking (Latour 2005), assemblages (Deleuze and Guattari 1988), decentralization, trust, and collective responsibility. If all these relations are in place, rights-based approaches to community problems have a chance of success.

    Indigital constructions are everywhere—Cherokee iPhone language applications or Kiowa stories in GIS are just a few examples, and many more occur in film, video, and other digital media types not discussed in this article. Yet, ironically, indigital constructions are also very distant from the reality of many Indigenous people on a global scale. Indigital constructions are primarily composed in the developed world, especially what is referred to as the global north. There is still a deep digital divide among Indigenous peoples and many Indigenous communities do not have access to digital technologies. How culturally appropriate are digital technologies like video, audio recordings, or digital maps? The indigital is distant in terms of addressing social problems within Indigenous communities. Oftentimes, there is a fear of the unknown in communities like the one described by Durbin Feeling in reference to adoption of social media applications like Facebook. Some Indigenous communities consider carefully the implications of adopting social media or language applications created for community interactions. Adoption may be slow, or not meet the expectations of software developers. Many questions arise in this process. Do creativity and social application go hand in hand? Sometimes we struggle to understand how our work can be applied to everyday problems. What is the potential of indigital constructions being used for rights-based initiatives?

    Conclusion

    English speakers don’t often pause to consider how their language comes to be typed, displayed, and shared on digital devices. For Indigenous communities, the dominance of majoritarian languages on digital devices has contributed to the erosion of their languages. While the isolation of many Indigenous communities in the past helped to protect their languages, that same isolation has required incredible efforts by minority language speakers to assert their presence in the infrastructures of technological systems. The excitement over the turn to digital media in Indian country is an easy story to tell to a techno-positive public, but in fact this turn involves a series of paradoxes: we take materials out of Indigenous lands to make our devices, and then we use those devices to talk about it; we assert sovereignty within the codification of standardized practices; we engage new technologies to sustain Indigenous cultural practices even as technological systems demand cultural transformation. Such paradoxes get to the heart of deeper questions about culturally embedded technologies as the modes and means of our communication shift to the screen. To what extent do digital media re-make the Indigenous world, or can they function simply as tools? Digital media are functionally inescapable and have come to constitute elements of our self-understanding; how might such media change the way Indigenous participants understand the world, even as they note their own absences from the screen? The insights from the technologization of Cherokee writing engage us with these questions, along with closer insights into multiple forms of Indigenous information and communications technology and the emergence of indigital creations, inventing the next generation of language technology.

    _____

    Joseph Lewis Erb is a computer animator, film producer, educator, language technologist, and artist enrolled in the Cherokee Nation. He earned his MFA from the University of Pennsylvania, where he created the first Cherokee animation in the Cherokee language, “The Beginning They Told.” He has used his artistic skills to teach Muscogee Creek and Cherokee students how to animate traditional stories. Most of this work is created in the Cherokee language, and he has spent many years working on projects that expand the use of the Cherokee language in technology and the arts. Erb is an assistant professor at the University of Missouri, teaching digital storytelling and animation.

    Joanna Hearne is associate professor in the English Department at the University of Missouri, where she teaches film studies and digital storytelling. She has published a number of articles on Indigenous film and digital media, animation, early cinema, westerns, and documentary, and she edited the 2017 special issue of Studies in American Indian Literatures on “Digital Indigenous Studies: Gender, Genre and New Media.” Her two books are Native Recognition: Indigenous Cinema and the Western (SUNY Press, 2012) and Smoke Signals: Native Cinema Rising (University of Nebraska Press, 2012).

    Mark H. Palmer is associate professor in the Department of Geography at the University of Missouri and has published research on institutional GIS and the mapping of Indigenous territories. Palmer is a member of the Kiowa Tribe of Oklahoma.


    _____

    Acknowledgements

    [*] The authors would like to thank Durbin Feeling for sharing his expertise and insights with us, and the University of Missouri Peace Studies Program for funding interviews and transcriptions as part of the “Digital Indigenous Studies” project.

    _____

    Works Cited

    • Bhabha, Homi K. and J. Rutherford. 2006. “Third Space.” Multitudes 3. 95-107.
    • Brey, P. 2003. “Theorizing Modernity and Technology.” In Modernity and Technology, edited by T.J. Misa, P. Brey, and A. Feenberg, 33-71. Cambridge: MIT Press.
    • Byrd, Jodi A. 2015. “’Do They Not Have Rational Souls?’: Consolidation and Sovereignty in Digital New Worlds.” Settler Colonial Studies: 1-15.
    • Cushman, Ellen. 2011. The Cherokee Syllabary: Writing the People’s Perseverance. Norman: University of Oklahoma Press.
    • Deleuze, Gilles, and Félix Guattari. 1988. A Thousand Plateaus: Capitalism and Schizophrenia. New York: Bloomsbury Publishing.
    • Evans, Murray. 2011. “Apple Teams Up to Use iPhone to Save Cherokee Language.” Huffington Post (May 25).
    • Feeling, Durbin. 1975. Cherokee-English Dictionary. Tahlequah: Cherokee Nation of Oklahoma.
    • Feeling, Durbin and Joseph Erb. 2016. Interview with Durbin Feeling, Tahlequah, Oklahoma. 30 July.
    • Ginsburg, Faye. 1991. “Indigenous Media: Faustian Contract or Global Village?” Cultural Anthropology 6:1. 92-112.
    • Ginsburg, Faye. 2008. “Rethinking the Digital Age.” In Global Indigenous Media: Culture, Poetics, and Politics, edited by Pamela Wilson and Michelle Stewart. Durham: Duke University Press. 287-306.
    • Goriunova, Olga and Alexei Shulgin. 2008. “Glitch.” In Software Studies: A Lexicon, edited by Matthew Fuller. Cambridge, MA: MIT Press. 110-18.
    • Hearne, Joanna. 2017. “Native to the Device: Thoughts on Digital Indigenous Studies.” Studies in American Indian Literatures 29:1. 3-26.
    • Hermes, Mary, et al. 2016. “New Domains for Indigenous Language Acquisition and Use in the USA and Canada.” In Indigenous Language Revitalization in the Americas, edited by Teresa L. McCarty and Serafin M. Coronel-Molina. London: Routledge. 269-291.
    • Hudson, Brian. 2016. “If Sequoyah Was a Cyberpunk.” 2nd Annual Symposium on the Future Imaginary, August 5th, University of British Columbia-Okanagan, Kelowna, B.C.
    • Latour, Bruno. 1987. Science in Action: How to Follow Scientists and Engineers through Society. Cambridge, MA: Harvard University Press.
    • Latour, Bruno. 2005. Reassembling the Social: An Introduction to Actor-Network Theory. Oxford: Oxford University Press.
    • Lewis, Jason. 2014. “A Better Dance and Better Prayers: Systems, Structures, and the Future Imaginary in Aboriginal New Media.” In Coded Territories: Tracing Indigenous Pathways in New Media Art, edited by Steven Loft and Kerry Swanson. Calgary: University of Calgary Press. 49-78.
    • Manovich, Lev. 2002. The Language of New Media. Cambridge, MA: MIT Press.
    • Nakamura, Lisa. 2013. “Glitch Racism: Networks as Actors within Vernacular Internet Theory.” Culture Digitally.
    • Nakamura, Lisa. 2014. “Indigenous Circuits: Navajo Women and the Racialization of Early Electronic Manufacture.” American Quarterly 66:4. 919-941.
    • Palmer, Mark. 2016. “Kiowa Storytelling around a Map.” In Travis and von Lünen (2016). 63-73.
    • Palmer, Mark. 2013. “(In)digitizing Cáuigú Historical Geographies: Technoscience as a Postcolonial Discourse.” In History and GIS: Epistemologies, Considerations and Reflections, edited by A. von Lünen and C. Travis. Dordrecht: Springer. 39-58.
    • Palmer, Mark. 2012. “Theorizing Indigital Geographic Information Networks.” Cartographica: The International Journal for Geographic Information and Geovisualization 47:2. 80-91.
    • Palmer, Mark. 2009. “Engaging with Indigital Geographic Information Networks.” Futures: The Journal of Policy, Planning and Futures Studies 41. 33-40.
    • Palmer, Mark and Robert Rundstrom. 2013. “GIS, Internal Colonialism, and the U.S. Bureau of Indian Affairs.” Annals of the Association of American Geographers 103:5. 1142-1159.
    • Raheja, Michelle. 2011. Reservation Reelism: Redfacing, Visual Sovereignty, and Representations of Native Americans in Film. Lincoln: University of Nebraska Press.
    • Simpson, Audra. 2014. Mohawk Interruptus: Political Life Across the Borders of Settler States. Durham: Duke University Press.
    • Travis, C. and A. von Lünen. 2016. The Digital Arts and Humanities. Basel, Switzerland: Springer.
    • Vizenor, Gerald. 2000. Fugitive Poses: Native American Indian Scenes of Absence and Presence. Lincoln: University of Nebraska Press.
    • Wolfe, Patrick. 2006. “Settler Colonialism and the Elimination of the Native.” Journal of Genocide Research 8:4. 387-409.