boundary 2

Tag: Wendy Hui Kyun Chun

  • Alexander R. Galloway — Big Bro (Review of Wendy Hui Kyun Chun, Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition)

    a review of Wendy Hui Kyun Chun, Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition (MIT Press, 2021)

    by Alexander R. Galloway

    I remember snickering when Chris Anderson announced “The End of Theory” in 2008. Writing in Wired magazine, Anderson claimed that the structure of knowledge had inverted. It wasn’t that models and principles revealed the facts of the world, but the reverse, that the data of the world spoke their truth unassisted. Given that data were already correlated, Anderson argued, what mattered was to extract existing structures of meaning, not to pursue some deeper cause. Anderson’s simple conclusion was that “correlation supersedes causation…correlation is enough.”

    This hypothesis — that correlation is enough — is the thorny little nexus at the heart of Wendy Chun’s new book, Discriminating Data. Chun’s topic is data analytics, a hard target that she tackles with technical sophistication and rhetorical flair. Focusing on data-driven tech like social media, search, consumer tracking, AI, and many other things, her task is to exhume the prehistory of correlation, and to show that the new epistemology of correlation is not liberating at all, but instead a kind of curse recalling the worst ghosts of the modern age. As Chun concludes, even amid the precarious fluidity of hyper-capitalism, power operates through likeness, similarity, and correlated identity.

    While interleaved with a number of divergent polemics throughout, the book focuses on four main themes: correlation, discrimination, authentication, and recognition. Chun deals with these four as general problems in society and culture, but also, interestingly, as specific scientific techniques. For instance, correlation has a particular mathematical meaning as well as a philosophical one. Discrimination is a social pathology, but it’s also integral to discrete rationality. I appreciated Chun’s attention to details large and small; she’s writing about big ideas — essence, identity, love and hate, what does it mean to live together? — but she’s also engaging directly with statistics, probability, clustering algorithms, and all the minutiae of data science.

    In crude terms, Chun rejects the — how best to call it — the “anarcho-materialist” turn in theory, typified by someone like Gilles Deleuze, where disciplinary power gave way to distributed rhizomes, schizophrenic subjects, and irrepressible lines of flight. Chun’s theory of power isn’t so much about tessellated tapestries of desiring machines as it is the more strictly structuralist concerns of norm and discipline, sovereign and subject, dominant and subdominant. Big tech is the mechanism through which power operates today, Chun argues. And today’s power is racist, misogynist, repressive, and exclusionary. Power doesn’t incite desire so much as stifle and discipline it. In other words George Orwell’s old grey-state villain, Big Brother, never vanished. He just migrated into a new villain, Big Bro, embodied by tech billionaires like Mark Zuckerberg or Larry Page.

    But what are the origins of this new kind of data-driven power? The reader learns that correlation and homophily, or “the notion that birds of a feather naturally flock together” (23), not only subtend contemporary social media platforms like Facebook, but were in fact originally developed by eugenicists like Francis Galton and Karl Pearson. “British eugenicists developed correlation and linear regression” (59), Chun notes dryly, before reminding us that these two techniques are at the core of today’s data science. “When correlation works, it does so by making the present and future coincide with a highly curated past” (52). Or as she puts it insightfully elsewhere, data science doesn’t so much anticipate the future as predict the past.

    If correlation (pairing two or more pieces of data) is the first step of this new epistemological regime, it is quickly followed by some additional steps. After correlation comes discrimination, where correlated data are separated from other data (and indeed internally separated from themselves). This entails the introduction of a norm. Discriminated data are not simply data that have been paired, but measurements plotted along an axis of comparison. One data point may fall within a normal distribution, while another strays outside the norm within a zone of anomaly. Here Chun focuses on “homophily” (love of the same), writing that homophily “introduces normativity within a supposedly nonnormative system” (96).

    The third and fourth moments in Chun’s structural condition, tagged as “authenticity” and “recognition,” complete the narrative. Once groups are defined via discrimination, they are authenticated as a positive group identity, then ultimately recognized, or we could say self-recognized, by reversing the outward-facing discriminatory force into an inward-facing act of identification. It’s a complex libidinal economy that Chun patiently elaborates over four long chapters, linking these structural moments to specific technologies and techniques such as Bayes’ theorem, clustering algorithms, and facial recognition technology.

    A number of potential paths emerge in the wake of Chun’s work on correlation, which we will briefly mention in passing. One path would be toward Shane Denson’s recent volume, Discorrelated Images, on the loss of correlated experience in media aesthetics. Another would be to collide Chun’s critique of correlation in data science with Quentin Meillassoux’s critique of correlation in philosophy, notwithstanding the significant differences between their two projects.

    Correlation, discrimination, authentication, and recognition are the manifest contents of the book as it unfolds page by page. At the same time Chun puts forward a few meta arguments that span the text as a whole. The first is about difference and the second is about history. In both, Chun reveals herself as a metaphysician and moralist of the highest order.

    First Chun picks up a refrain familiar to feminism and anti-racist theory, that of erasure, forgetting, and ignorance. Marginalized people are erased from the archive; women are silenced; a subject’s embodiment is ignored. Chun offers an appealing catchphrase for this operation, “hopeful ignorance.” Many people in power hope that by ignoring difference they can overcome it. Or as Chun puts it, they “assume that the best way to fight abuse and oppression is by ignoring difference and discrimination” (2). Indeed this posture has been central to political liberalism for a long time, in, for instance, John Rawls’ derivation of justice via a “veil of ignorance.” For Chun the attempt to find an unmarked category of subjectivity — through that frequently contested pronoun “we” — will perforce erase and exclude those structurally denied access to the universal. “[John Perry] Barlow’s ‘we’ erased so many people,” Chun notes in dismay. “McLuhan’s ‘we’ excludes most of humanity” (9, 15). This is the primary crime for Chun, forgetting or ignoring the racialized and gendered body. (In her last book, Updating to Remain the Same, Chun reprinted a parody of a well-known New Yorker cartoon bearing the caption “On the Internet, nobody knows you’re a dog.” The posture of ignorance, of “nobody knowing,” was thoroughly critiqued by Chun in that book, even as it continues to be defended by liberals.)

    Yet if the first crime against difference is to forget the mark, the second crime is to enforce it, to mince and chop people into segregated groups. After all, data is designed to discriminate, as Chun takes the better part of her book to elaborate. Data technologies are engines of difference, and it’s no coincidence that Charles Babbage called his early calculating machine a “Difference Engine.” Data is designed to segregate, to cluster, to group, to split and mark people into micro identities. We might label this “bad” difference. Bad difference is when the naturally occurring multiplicity of the world is canalized into clans and cliques, leveraged for the machinations of power rather than the real experience of people.

    To complete the triad, Chun has proposed a kind of “good” difference. For Chun authentic life is rooted in difference, often found through marginalized experience. Her muse is “a world that resonates with and in difference” (3). She writes about “the needs and concerns of black women” (49). She attends to “those whom the archive seeks to forget” (237). Good difference is intersectional. Good difference attends to identity politics and the complexities of collective experience.

    Bad, bad, good — this is a triad, but not a dialectical one. Begin with 1) the bad tech posture of ignoring difference; followed by 2) the worse tech posture of specifying difference in granular detail; contrasted with 3) a good life that “resonates with and in difference.” I say “not dialectical” because the triad documents difference changing position rather than the position of difference changing (to paraphrase Catherine Malabou from her book on Changing Difference). Is bad difference resolved by good difference? How to tell the difference? For this reason I suggest we consider Discriminating Data as a moral tale — although I suspect Chun would balk at that adjective — because everything hinges on a difference between the good and the bad.

    Chun’s argument about good and bad difference is related to an argument about history, revealed through what she terms the “Transgressive Hypothesis.” I was captivated by this section of the book. It connects to a number of debates happening today in both theory and culture at large. Her argument about history has two distinct waves, and, following the contradictory convolutions of history, the second wave reverses and inverts the first.

    Loosely inspired by Michel Foucault’s Repressive Hypothesis, Chun’s Transgressive Hypothesis initially describes a shift in society and culture roughly coinciding with the Baby Boom generation in the late Twentieth Century. Let’s call it the 1968 mindset. Reacting to the oppressions of patriarchy, the grey-state threats of centralized bureaucracy, and the totalitarian menace of “Nazi eugenics and Stalinism,” liberation was found through “‘authentic transgression’” via “individualism and rebellion” (76). This was the time of the alternative, of the outsider, of the nonconformist, of the anti-authoritarian, the time of “thinking different.” Here being “alt” meant being left, albeit a new kind of left.

    Chun summons a familiar reference to make her point: the Apple Macintosh advertisement from 1984 directed by Ridley Scott, in which a scary Big Brother is dethroned by a colorful lady jogger brandishing a sledgehammer. “Resist, resist, resist,” is how Chun puts the mantra. “To transgress…was to be free” (76). Join the resistance, unplug, blow your mind on red pills. Indeed the existential choice from The Matrix — blue pill for a life of slavery mollified by ignorance, red pill for enlightenment and militancy tempered by mortal danger — acts as a refrain throughout Chun’s book. In sum the Transgressive Hypothesis “equated democracy with nonnormative structures and behaviors” (76). To live a good life was to transgress.

    But this all changed in 1984, or thereabouts. Chun describes a “reverse hegemony” — a lovely phrase that she uses only twice — where “complaints against the ‘mainstream’ have become ‘mainstreamed’” (242). Power operates through reverse hegemony, she claims, “The point is never to be a ‘normie’ even as you form a norm” (34). These are the consequences of the rise of neoliberalism, fake corporate multiculturalism, Ronald Reagan and Margaret Thatcher but even more so Bill Clinton and Tony Blair. Think postfordism and postmodernism. Think long tails and the multiplicity of the digital economy. Think woke-washing at the CIA and Spike Lee shilling cryptocurrency. Think Hypernormalization, New Spirit of Capitalism, Theory of the Young Girl, To Live and Think Like Pigs. Complaints against the mainstream have become mainstreamed. And if power today has shifted “left,” then — Reverse Hegemony Brain go brrr — resistance to power shifts “right.” A generation ago the Q Shaman would have been a left-wing nut nattering about the Kennedy assassination. But today he’s a right-wing nut (alas still nattering about the Kennedy assassination).

    “Red pill toxicity” (29) is how Chun characterizes the responses to this new topsy-turvy world of reverse hegemony. (To be sure, she’s only the latest critic weighing in on the history of the present; other well-known accounts include Angela Nagle’s 2017 book Kill All Normies, and Mark Fisher’s notorious 2013 essay “Exiting the Vampire Castle.”) And if libs, hippies, and anarchists had become the new dominant, the election of Donald Trump showed that “populism, paranoia, polarization” (77) could also reemerge as a kind of throwback to the worst political ideologies of the Twentieth Century. With Trump the revolutions of history — ironically, unstoppably — return to where they began, in “the totalitarian world view” (77).

    In other words these self-styled rebels never actually disrupted anything, according to Chun. At best they used disruption as a kind of ideological distraction for the same kinds of disciplinary management structures that have existed since time immemorial. And if Foucault showed that nineteenth-century repression also entailed an incitement to discourse, Chun describes how twentieth-century transgression also entailed a novel form of management. Before it was “you thought you were repressed but in fact you’re endlessly sublating and expressing.” Now it’s “you thought you were a rebel but disruption is a standard tactic of the Professional Managerial Class.” Or as Jacques Lacan said in response to some young agitators in his seminar, vous voulez un maître, vous l’aurez. Slavoj Žižek’s rendering, slightly embellished, best captures the gist: “As hysterics, you demand a new master. You will get it!”

    I doubt Chun would embrace the word “hysteric,” a term indelibly marked by misogyny, but I wish she would, since hysteria is crucial to her Transgressive Hypothesis. In psychoanalysis, the hysteric is the one who refuses authority, endlessly and irrationally. And bless them for that; we need more hysterics in these dark times. Yet the lesson from Lacan and Žižek is not so much that the hysteric will conjure up a new master out of thin air. In a certain sense, the lesson is the reverse, that the Big Other doesn’t exist, that Big Brother himself is a kind of hysteric, that power is the very power that refuses power.

    This position makes sense, but not completely. As a recovering Deleuzian, I am indelibly marked by a kind of antinomian political theory that defines power as already heterogeneous, unlawful, multiple, anarchic, and material. However I am also persuaded by Chun’s more classical posture, where power is a question of sovereign fiat, homogeneity, the central and the singular, the violence of the arche, which works through enclosure, normalization, and discipline. Faced with this type of power, Chun’s conclusion is, if I can compress a hefty book into a single writ, that difference will save us from normalization. In other words, while Chun is critical of the Transgressive Hypothesis, she ends up favoring the Big-Brother theory of power, where authentic alternatives escape repressive norms.

    I’ll admit it’s a seductive story. Who doesn’t want to believe in outsiders and heroes winning against oppressive villains? And the story is especially appropriate for the themes of Discriminating Data: data science of course entails norms and deviations; but also, in a less obvious way, data science inherits the old anxieties of skeptical empiricism, where the desire to make a general claim is always undercut by an inability to ground generality.

    Yet I suspect her political posture relies a bit too heavily on the first half of the Transgressive Hypothesis, the 1984 narrative of difference contra norm, even as she acknowledges the second half of the narrative where difference became a revanchist weapon for big tech (to say nothing of difference as a bona fide management style). This leads to some interesting inconsistencies. For instance Chun notes that Apple’s 1984 hammer thrower is a white woman disrupting an audience of white men. But she doesn’t say much else about her being a woman, or about the rainbow flag that ends the commercial. The Transgressive Hypothesis might be the quintessential tech bro narrative but it’s also the narrative of feminism, queerness, and the new left more generally. Chun avoids claiming that feminism failed; but she’s also savvy enough to avoid saying that it succeeded. And if Sadie Plant once wrote that “cybernetics is feminization,” for Chun it’s not so clear. According to Chun the cybernetic age of computers, data, and ubiquitous networks still orients around structures of normalization: masculine, white, straight, affluent and able-bodied. Resistant to such regimes of normativity, Chun must nevertheless invent a way to resist those who were resisting normativity.

    Regardless, for Chun the conclusion is clear: these hysterics got their new master. If not immediately they got it eventually, via the advent of Web 2.0 and the new kind of data-centric capitalism invented in the early 2000s. Correlation isn’t enough — and that’s the reason why. Correlation means the forming of a general relation, if only the most minimal generality of two paired data points. And, worse, correlation’s generality will always derive from past power and organization rather than from a reimagining of the present. Hence correlation for Chun is a type of structural pessimism, in that it will necessarily erase and exclude those denied access to the general relation.

    Writing with narrative poignancy and an attention to the ideological conditions of everyday life, Chun highlights alternative relations that could hopefully replace the pessimism of correlation. Such alternatives might take the form of a “potential history” or a “critical fabulation,” phrases borrowed from Ariella Azoulay and Saidiya Hartman, respectively. For Azoulay potential history means to “‘give an account of diverse worlds that persist’”; for Hartman, critical fabulation means “to see beyond numbers and sources” (79). Though a slim offering covering only a few pages, these references to Azoulay and Hartman indicate an appealing alternative for Chun, and she ends her book where it began, with an eloquent call to acknowledge “a world that resonates with and in difference.”

    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT Press, 2004), Gaming: Essays on Algorithmic Culture (University of Minnesota Press, 2006), The Interface Effect (Polity, 2012), Laruelle: Against the Digital (University of Minnesota Press, 2014), and most recently, Uncomputable: Play and Politics in the Long Digital Age (Verso, 2021).


  • Michael Miller — Seeing Ourselves, Loving Our Captors: Mark Jarzombek’s Digital Stockholm Syndrome in the Post-Ontological Age

    a review of Mark Jarzombek, Digital Stockholm Syndrome in the Post-Ontological Age (University of Minnesota Press Forerunners Series, 2016)

    by Michael Miller

    ~

    All existence is Beta, basically. A ceaseless codependent improvement unto death, but then death is not even the end. Nothing will be finalized. There is no end, no closure. The search will outlive us forever

    — Joshua Cohen, Book of Numbers

    Being a (in)human is to be a beta tester

    — Mark Jarzombek, Digital Stockholm Syndrome in the Post-Ontological Age

    Too many people have access to your state of mind

    — Renata Adler, Speedboat

    Whenever I read through Vilém Flusser’s vast body of work and encounter, in print no less, one of the core concepts of his thought—which is that “human communication is unnatural” (2002, 5)—I find it nearly impossible to shake the feeling that the late Czech-Brazilian thinker must have derived some kind of preternatural pleasure from insisting on the ironic gesture’s repetition. Flusser’s rather grim view that “there is no possible form of communication that can communicate concrete experience to others” (2016, 23) leads him to declare that the intersubjective dimension of communication inevitably implies the existence of a society which is, in his eyes, itself an unnatural institution. Traces of Flusser’s lifelong attempt to think through the full philosophical implications of European nihilism can be found all over his work, and evidence of this intellectual engagement appears readily in his theories of communication.

    One of Flusser’s key ideas that draws me in is his notion that human communication affords us the ability to “forget the meaningless context in which we are completely alone and incommunicado, that is, the world in which we are condemned to solitary confinement and death: the world of ‘nature’” (2002, 4). In order to help stave off the inexorable tide of nature’s muted nothingness, Flusser suggests that humans communicate by storing memories, externalized thoughts whose eventual transmission binds two or more people into a system of meaning. Only when an intersubjective system of communication like writing or speech is established between people does the purpose of our enduring commitment to communication become clear: we communicate in order “to become immortal within others” (2016, 31). Flusser’s playful positing of the ironic paradox inherent in the improbability of communication—that communication is unnatural to the human but it is also “so incredibly rich despite its limitations” (26)—enacts its own impossibility. In a representatively ironic sense, Flusser’s point is that all we are able to fully understand is our inability to understand fully.

    As Flusser’s theory of communication can be viewed as his response to the twentieth-century’s shifting technical-medial milieu, his ideas about communication and technics eventually led him to conclude that “the original intention of producing the apparatus, namely, to serve the interests of freedom, has turned on itself…In a way, the terms human and apparatus are reversed, and human beings operate as a function of the apparatus. A man gives an apparatus instructions that the apparatus has instructed him to give” (2011, 73).[1] Flusser’s skeptical perspective toward the alleged affordances of human mastery over technology is most assuredly not the view that Apple or Google would prefer you harbor (not-so-secretly). Any cursory glance at Wired or the technology blog at Inside Higher Ed, to pick two low-hanging examples, would yield a radically different perspective than the one Flusser puts forth in his work. In fact, Flusser writes, “objects meant to be media may obstruct communication” (2016, 45). If media objects like the technical apparatuses of today actually obstruct communication, then why are we so often led to believe that they facilitate it? And to shift registers just slightly, if everything is said to be an object of some kind—even technical apparatuses—then cannot one be permitted to claim daily communion with all kinds of objects? What happens when an object—and an object as obsolete as a book, no less—speaks to us? Will we still heed its call?

    ***

    Speaking in its expanded capacity as neither narrator nor focalized character, the book as literary object addresses us in a direct and antagonistic fashion in the opening line to Joshua Cohen’s 2015 novel Book of Numbers. “If you’re reading this on a screen, fuck off. I’ll only talk if I’m gripped with both hands” (5), the book-object warns. As Cohen’s narrative tells the story of a struggling writer named Joshua Cohen (whose backstory corresponds mostly to the historical-biographical author Joshua Cohen) who is contracted to ghostwrite the memoir of another Joshua Cohen (who is the CEO of a massive Google-type company named Tetration), the novel’s middle section provides an “unedited” transcript of the conversation between the two Cohens in which the CEO recounts his upbringing and tremendous business success in and around the Bay Area from the late 1970s up to 2013, the narrative’s present. The novel’s Silicon Valley setting, nominal and characterological doubling, and structural narrative coupling of the two Cohens’ lives make it all but impossible to distinguish the personal histories of Cohen-the-CEO and Cohen-the-narrator from the cultural history of the development of personal computing and networked information technologies. The history of one Joshua Cohen—or all Joshua Cohens—is indistinguishable from the history of intrusive computational/digital media. “I had access to stuff I shouldn’t have had access to, but then Principal shouldn’t have had such access to me—cameras, mics,” Cohen-the-narrator laments. In other words, as Cohen-the-narrator ghostwrites another Cohen’s memoir within the context of the broad history of personal computing and the emergence of algorithmic governance and surveillance, the novel invites us to consider how the history of an individual—or every individual, it does not really matter—is also nothing more or anything less than the surveilled history of its data usage, which is always written by someone or something else, the ever-present Not-Me (who just might have the same name as me). The Self is nothing but a networked repository of information to be mined in the future.

    While the novel’s opening line addresses its hypothetical reader directly, its relatively benign warning fixes reader and text in a relation of rancor. The object speaks![2] And yet tech-savvy twenty-first century readers are not the only ones who seem to be fed up with books; books too are fed up with us, and perhaps rightly so. In an age when objects are said to speak vibrantly and withdraw infinitely; processes like human cognition are considered to be operative in complex technical-computational systems; and when the only excuse to preserve the category of “subjective experience” we are able to muster is that it affords us the ability “to grasp how networks technically distribute and disperse agency,” it would seem at first glance that the second-person addressee of the novel’s opening line would intuitively have to be a reading, thinking subject.[3] Yet this is the very same reading subject who has been urged by Cohen’s novel to politely “fuck off” if he or she has chosen to read the text on a screen. And though the text does not completely dismiss its readers who still prefer “paper of pulp, covers of board and cloth” (5), a slight change of preposition in its title points exactly to what the book fears most of all: Book as Numbers. The book-object speaks, but only to offer an ominous admonition: neither the book nor its readers ought to be reducible to computable numbers.

    The transduction of literary language into digital bits eliminates the need for a phenomenological, reading subject, and it suggests too that literature—or even just language in a general sense—and humans in particular are ontologically reducible to data objects that can be “read” and subsequently “interpreted” by computational algorithms. As Cohen’s novel collapses the distinction between author, narrator, character, and medium, its narrator observes that “the only record of my one life would be this record of another’s” (9). But in this instance, the record of one’s (or another’s) life is merely the history of how personal computational technologies have effaced the phenomenological subject. How have we arrived at the theoretically permissible premise that “People matter, but they don’t occupy a privileged subject position distinct from everything else in the world” (Huehls 20)? How might the “turn toward ontology” in theory/philosophy be viewed as contributing to our present condition?

    ***

    Mark Jarzombek’s Digital Stockholm Syndrome in the Post-Ontological Age (2016) provides a brief yet stylistically ironic and incisive interrogation of how recent iterations of post- or inhumanist theory have found a strange bedfellow in the rhetorical boosterism that accompanies the alleged affordances of digital technologies and big data. Despite the differences between these two seemingly unrelated discourses, they both share a particularly critical or diminished conception of the anthro- in “anthropocentrism” that borrows liberally from the postulates of the “ontological turn” in theory/philosophy (Rosenberg n.p.). While the parallels between these discourses are not made explicit in Jarzombek’s book, Digital Stockholm Syndrome asks us to consider how a shared commitment to an ontologically diminished view of “the human” that galvanizes both technological determinism’s anti-humanism and post- or inhumanist theory has found its common expression in recent philosophies of ontology. In other words, the problem Digital Stockholm Syndrome takes up is this: what kind of theory of ontology, Being, and to a lesser extent, subjectivity, appeals equally to contemporary philosophers and Silicon Valley tech-gurus? Jarzombek gestures toward such an inquiry early on: “What is this new ontology?” he asks, and “What were the historical situations that produced it? And how do we adjust to the realities of the new Self?” (x).

    A curious set of related philosophical commitments united by their efforts to “de-center” and occasionally even eject “anthropocentrism” from the critical conversation constitute some of the realities swirling around Jarzombek’s “new Self.”[4] Digital Stockholm Syndrome provocatively locates the conceptual legibility of these philosophical realities squarely within an explicitly algorithmic-computational historical milieu. By inviting such a comparison, Jarzombek’s book encourages us to contemplate how contemporary ontological thought might mediate our understanding of the historical and philosophical parallels that bind the tradition of inhumanist philosophical thinking and the rhetoric of twenty-first century digital media.[5]

    In much the same way that Alexander Galloway has argued for a conceptual confluence that exceeds the contingencies of coincidence between “the structure of ontological systems and the structure of the most highly evolved technologies of post-Fordist capitalism” (347), Digital Stockholm Syndrome argues similarly that today’s world is “designed from the micro/molecular level to fuse the algorithmic with the ontological” (italics in original, x).[6] We now understand Being as the informatic/algorithmic byproduct of what ubiquitous computational technologies have gathered and subsequently fed back to us. Our personal histories—or simply the records of our data use (and its subsequent use of us)—comprise what Jarzombek calls our “ontic exhaust…or what data experts call our data exhaust…[which] is meticulously scrutinized, packaged, formatted, processed, sold, and resold to come back to us in the form of entertainment, social media, apps, health insurance, clickbait, data contracts, and the like” (x).

    The empty second-person pronoun is placed on equal ontological footing with, and perhaps even defined by, its credit score, medical records, 4G data usage, Facebook likes, and the threads of its Tweets. “The purpose of these ‘devices,’” Jarzombek writes, “is to produce, magnify, and expose our ontic exhaust” (25). We give our ontic exhaust away for free every time we log into Facebook because it, in return, feeds back to us the only sense of “self” we are able to identify as “me.”[7] If “who we are cannot be traced from the human side of the equation, much less than the analytic side. ‘I’ am untraceable” (31), then why do techno-determinists and contemporary oracles of ontology operate otherwise? What accounts for their shared commitment to formalizing ontology? Why must the Self be tracked and accounted for like a map or a ledger?

    As this “new Self,” which Jarzombek calls the “Being-Global” (2), travels around the world and checks its bank statement in Paris or tags a photo of a Facebook friend in Berlin while sitting in a cafe in Amsterdam, it leaks ontic exhaust everywhere it goes. While the hoovering up of ontic exhaust by GPS and commercial satellites “make[s] us global,” it also inadvertently redefines Being as a question of “positioning/depositioning” (1). For Jarzombek, the question of today’s ontology is not so much a matter of asking “what exists?” but of asking “where is it and how can it be found?” Instead of the human who attempts to locate and understand Being, now Being finds us, but only as long as we allow ourselves to be located.

    Today’s ontological thinking, Jarzombek points out, is not really interested in asking questions about Being––it is too “anthropocentric.”[8] Ontology in the twenty-first century attempts to locate Being by gathering data, keeping track, tracking changes, taking inventory, making lists, listing litanies, crunching the numbers, and searching the database. “Can I search for it on Google?” is now the most important question for ontological thought in the twenty-first century.

    Ontological thinking––which today means ontological accounting, or finding ways to account for the ontologically actuarial––is today’s philosophical equivalent to best practices for data management, except there is no difference between one’s data and one’s Self. Accordingly, any ontological difference that might have once stubbornly separated you from data about you no longer applies. Digital Stockholm Syndrome identifies this shift with the formulation: “From ontology to trackology” (71).[9] The philosophical shift that has allowed data about the Self to become the ontological equivalent to the Self emerges out of what Jarzombek calls an “animated ontology.”

    In this “animated ontology,” “subject position and object position are indistinguishable…The entire system of humanity is microprocessed through the grid of sequestered empiricism” (31, 29). Jarzombek is careful to distinguish his “animated ontology” from the recently rebooted romanticisms which merely turn their objects into vibrant subjects. He notes that “the irony is that whereas the subject (the ‘I’) remains relatively stable in its ability to self-affirm (the lingering by-product of the psychologizing of the modern Self), objectivity (as in the social sciences) collapses into the illusions produced by the global cyclone of the informatic industry” (28).[10] By devising tricky new ways to flatten ontology (all of which are made via po-faced linguistic fiat), “the human and its (dis/re-)embodied computational signifiers are on equal footing” (32). I do not define my data, but my data define me.

    ***

    Digital Stockholm Syndrome asserts that what exists in today’s ontological systems––systems both philosophical and computational––is what can be tracked and stored as data. Jarzombek sums up our situation with another pithy formulation: “algorithmic modeling + global positioning + human scaling + computational speed = data geopolitics” (12). While the universalization of tracking technologies defines the “global” in Jarzombek’s Being-Global, it also provides us with another way to understand the humanities’ enthusiasm for GIS and other digital mapping platforms as institutional-disciplinary expressions of a “bio-chemo-techno-spiritual-corporate environment that feeds the Human its sense-of-Self” (5).

    Mark Jarzombek, Digital Stockholm Syndrome in the Post-Ontological Age

    One wonders if the incessant cultural and political reminders regarding the humanities’ waning relevance have moved humanists to reconsider the very basic intellectual terms of their broad disciplinary pursuits. Why is it humanities scholars who are in some cases most visibly leading the charge to overturn many decades of humanist thought? Has the internalization of this depleted conception of the human reshaped the basic premises of humanities scholarship, Digital Stockholm Syndrome wonders? What would it even mean to pursue a “humanities” purged of “the human?” And is it fair to wonder if this impoverished image of humanity has trickled down into the formation of new (sub)disciplines?[11]

    In a late chapter titled “Onto-Paranoia,” Jarzombek finally arrives at a working definition of Digital Stockholm Syndrome: data visualization. For Jarzombek, data-visualization “has been devised by the architects of the digital world” to ease the existential torture—or “onto-torture”—that is produced by Security Threats (59). Security threats are threatening because they remind us that “security is there to obscure the fact that [its] whole purpose is to produce insecurity” (59). When a system fails, or when a problem occurs, we need to be conscious of the fact that the system has not really failed; “it means that the system is working” (61).[12] The Social, the Other, the Not-Me—these are all variations of the same security threat, which is just another way of defining “indeterminacy” (66). So if everything is working the way it should, we rarely consider the full implications of indeterminacy—both technical and philosophical—because to do so might make us paranoid, or worse: we would have to recognize ourselves as (in)human subjects.

    Data-visualizations, however, provide a soothing salve which we can (self-)apply in order to ease the pain of our “onto-torture.” Visualizing data and creating maps of our data use provide us with a useful and also pleasurable tool with which we locate ourselves in the era of “post-ontology.”[13] “We experiment with and develop data visualization and collection tools that allow us to highlight urban phenomena. Our methods borrow from the traditions of science and design by using spatial analytics to expose patterns and communicating those results, through design, to new audiences,” we are told by one data-visualization project (http://civicdatadesignlab.org/). As we affirm our existence every time we travel around the globe and self-map our location, we silently make our geo-data available for those who care to sift through it and turn it into art or profit.

    “It is a paradox that our self-aestheticizing performance as subjects…feeds into our ever more precise (self-)identification as knowable and predictable (in)human-digital objects,” Jarzombek writes. Yet we ought not to spend too much time contemplating the historical and philosophical complexities that have helped create this paradoxical situation. Perhaps it is best we do not reach the conclusion that mapping the Self as an object on digital platforms increases the creeping unease that arises from the realization that we are mappable, hackable, predictable, digital objects––that our data are us. We could, instead, celebrate how our data (which we are and which is us) is helping to change the world. “’Big data’ will not change the world unless it is collected and synthesized into tools that have a public benefit,” the same data visualization project announces on its website’s homepage.

    While it is true that I may be a little paranoid, I have finally rested easy after having read Digital Stockholm Syndrome because I now know that my data/I are being put to good use.[14] Like me, maybe you find comfort in knowing that your existence is nothing more than a few pixels in someone else’s data visualization.

    _____

    Michael Miller is a doctoral candidate in the Department of English at Rice University. His work has appeared or is forthcoming in symplokē and the Journal of Film and Video.


    _____

    Notes

    [1] I am reminded of a similar argument advanced by Tung-Hui Hu in his A Prehistory of the Cloud (2016). Encapsulating Flusser’s spirit of healthy skepticism toward technical apparatuses, the situation that both Flusser and Hu fear is one in which “the technology has produced the means of its own interpretation” (xixx).

    [2] It is not my aim to wade explicitly into discussions regarding “object-oriented ontology” or other related philosophical developments. For the purposes of this essay, however, Andrew Cole’s critique of OOO as a “new occasionalism” will be useful. “’New occasionalism,’” Cole writes, “is the idea that when we speak of things, we put them into contact with one another and ourselves” (112). In other words, the speaking of objects makes them objectively real, though this is only possible when everything is considered to be an object. The question, though, is not about what is or is not an object, but rather what it means to be. For related arguments regarding the relation between OOO/speculative realism/new materialism and mysticism, see Sheldon (2016), Altieri (2016), Wolfendale (2014), O’Gorman (2013), and to a lesser extent Colebrook (2013).

    [3] For the full set of references here, see Bennett (2010), Hayles (2014 and 2016), and Hansen (2015).

    [4] While I concede that no thinker of “post-humanism” worth her philosophical salt would admit the possibility or even desirability of purging the sins of “correlationism” from critical thought altogether, I cannot help but view such occasional posturing with a skeptical eye. For example, I find convincing Barbara Herrnstein-Smith’s recent essay “Scientizing the Humanities: Shifts, Collisions, Negotiations,” in which she compares the drive in contemporary critical theory to displace “the human” from humanistic inquiry to the impossible and equally incomprehensible task of overcoming the “‘astro’-centrism of astronomy or the biocentrism of biology” (359).

    [5] In “A Modest Proposal for the Inhuman,” Julian Murphet identifies four interrelated strands of post- or inhumanist thought that combine a kind of metaphysical speculation with a full-blown demolition of traditional ontology’s conceptual foundations. They are: “(1) cosmic nihilism, (2) molecular bio-plasticity, (3) technical accelerationism, and (4) animality. These sometimes overlapping trends are severally engaged in the mortification of humankind’s stubborn pretensions to mastery over the domain of the intelligible and the knowable in an era of sentient machines, routine genetic modification, looming ecological disaster, and irrefutable evidence that we share 99 percent of our biological information with chimpanzees” (653).

    [6] The full quotation from Galloway’s essay reads: “Why, within the current renaissance of research in continental philosophy, is there a coincidence between the structure of ontological systems and the structure of the most highly evolved technologies of post-Fordist capitalism? [….] Why, in short, is there a coincidence between today’s ontologies and the software of big business?” (347). Digital Stockholm Syndrome begins by accepting Galloway’s provocations as descriptive instead of speculative. We do not necessarily wonder in 2017 if “there is a coincidence between today’s ontologies and the software of big business”; we now wonder instead how such a confluence came to be.

    [7] Wendy Hui Kyun Chun makes a similar point in her 2016 monograph Updating to Remain the Same: Habitual New Media. She writes, “If users now ‘curate’ their lives, it is because their bodies have become archives” (x-xi). While there is not ample space here to discuss the full theoretical implications of her book, Chun’s discussion of the inherently gendered dimension to confession, self-curation as self-exposition, and online privacy as something that only the unexposed deserve (hence the need for preemptive confession and self-exposition on the internet) in digital/social media networks is tremendously relevant to Jarzombek’s Digital Stockholm Syndrome, as both texts consider the Self as a set of mutable and “marketable/governable/hackable categories” (Jarzombek 26) that are collected without our knowledge and subsequently fed back to the data/media user in the form of its own packaged and unique identity. For recent similar variations of this argument, see Simanowski (2017) and McNeill (2012).

    I also think Chun’s book offers a helpful tool for thinking through recent confessional memoirs or instances of “auto-theory” (fictionalized or not) like Maggie Nelson’s The Argonauts (2015), Sheila Heti’s How Should a Person Be (2010), Marie Calloway’s what purpose did i serve in your life (2013), and perhaps to a lesser degree Tao Lin’s Richard Yates (2010), Taipei (2013), Natasha Stagg’s Surveys, and Ben Lerner’s Leaving the Atocha Station (2011) and 10:04 (2014). The extent to which these texts’ varied formal-aesthetic techniques can be said to be motivated by political aims is very much up for debate, but nonetheless, I think it is fair to say that many of them revel in the reveal. That is to say, via confession or self-exposition, many of these novels enact the allegedly performative subversion of political power by documenting their protagonists’ and/or narrators’ certain social/political acts of transgression. Chun notes, however, that this strategy of self-revealing performs “resistance as a form of showing off and scandalizing, which thrives off moral outrage. This resistance also mimics power by out-spying, monitoring, watching, and bringing to light, that is, doxing” (151). The term “autotheory,” which has been applied to Nelson’s The Argonauts in particular, takes on a very different meaning in this context. “Autotheory” can be considered as a theory of the self, or a self-theorization, or perhaps even the idea that personal experience is itself a kind of theory might apply here, too.
I wonder, though, how its meaning would change if the prefix “auto” was understood within a media-theoretical framework not as “self” but as “automation.” “Autotheory” becomes, then, an automatization of theory or theoretical thinking, but also a theoretical automatization; or more to the point: what if “autotheory” describes instead a theorization of the Self or experience wherein “the self” is only legible as the product of automated computational-algorithmic processes?

    [8] Echoing the critiques of “correlationism” or “anthropocentrism” or what have you, Jarzombek declares that “The age of anthrocentrism is over” (32).

    [9] Whatever notion of (self)identity the Self might find to be most palatable today, Jarzombek argues, is inevitably mediated via global satellites. “The intermediaries are the satellites hovering above the planet. They are what make us global–what make me global” (1), and as such, they represent the “civilianization” of military technologies (4). What I am trying to suggest is that the concepts and categories of self-identity we work with today are derived from the informatic feedback we receive from long-standing military technologies.

    [10] Here Jarzombek seems to be suggesting that the “object” in the “objectivity” of “the social sciences” has been carelessly conflated with the “object” in “object-oriented” philosophy. The prioritization of all things “objective” in both philosophy and science has inadvertently produced this semantic and conceptual slippage. Data objects about the Self exist, and thus by existing, they determine what is objective about the Self. In this new formulation, what is objective about the Self or subject, in other words, is what can be verified as information about the self. In Indexing It All: The Subject in the Age of Documentation, Information, and Data (2014), Ronald Day argues that these global tracking technologies supplant traditional ontology’s “ideas or concepts of our human manner of being” and have in the process “subsume[d] and subvert[ed] the former roles of personal judgment and critique in personal and social beings and politics” (1). While such technologies might be said to obliterate “traditional” notions of subjectivity, judgment, and critique, Day demonstrates how this simultaneous feeding-forward and feeding back of data-about-the-Self represents the return of autoaffection, though in his formulation self-presence is defined as information or data-about-the-self whose authenticity is produced when it is fact-checked against a biographical database (3)—self-presence is a presencing of data-about-the-Self. This is all to say that the Self’s informational “aboutness”–its representation in and as data–comes to stand in for the Self’s identity, which can only be comprehended as “authentic” in its limited metaphysical capacity as a general informatic or documented “aboutness.”

    [11] Flusser is again instructive on this point, albeit in his own idiosyncratic way­­. Drawing attention to the strange unnatural plurality in the term “humanities,” he writes, “The American term humanities appropriately describes the essence of these disciplines. It underscores that the human being is an unnatural animal” (2002, 3). The plurality of “humanities,” as opposed to the singular “humanity,” constitutes for Flusser a disciplinary admission that not only the category of “the human” is unnatural, but that the study of such an unnatural thing is itself unnatural, as well. I think it is also worth pointing out that in the context of Flusser’s observation, we might begin to situate the rise of “the supplemental humanities” as an attempt to redefine the value of a humanities education. The spatial humanities, the energy humanities, medical humanities, the digital humanities, etc.—it is not difficult to see how these disciplinary off-shoots consider themselves as supplements to whatever it is they think “the humanities” are up to; regardless, their institutional injection into traditional humanistic discourse will undoubtedly improve both (sub)disciplines, with the tacit acknowledgment being that the latter has just a little more to gain from the former in terms of skills, technical know-how, and data management. Many thanks to Aaron Jaffe for bringing this point to my attention.

    [12] In his essay “Algorithmic Catastrophe—The Revenge of Contingency,” Yuk Hui notes that “the anticipation of catastrophe becomes a design principle” (125). Drawing from the work of Bernard Stiegler, Hui shows how the pharmacological dimension of “technics, which aims to overcome contingency, also generates accidents” (127). And so “as the anticipation of catastrophe becomes a design principle…it no longer plays the role it did with the laws of nature” (132). Simply put, by placing algorithmic catastrophe on par with a failure of reason qua the operations of mathematics, Hui demonstrates how “algorithms are open to contingency” only insofar as “contingency is equivalent to a causality, which can be logically and technically deduced” (136). To take Jarzombek’s example of the failing computer or what have you, while the blue screen of death might be understood to represent the faithful execution of its programmed commands, we should also keep in mind that the obverse of Jarzombek’s scenario would force us to come to grips with how the philosophical implications of the “shit happens” logic that underpins contingency-as-(absent) causality “accompanies and normalizes speculative aesthetics” (139).

    [13] I am reminded here of one of the six theses from the manifesto “What would a floating sheep map?,” jointly written by the Floating Sheep Collective, which is a cohort of geography professors. The fifth thesis reads: “Map or be mapped. But not everything can (or should) be mapped.” The Floating Sheep Collective raises in this section crucially important questions regarding ownership of data with regard to marginalized communities. Because it is not always clear when to map and when not to map, they decide that “with mapping squarely at the center of power struggles, perhaps it’s better that not everything be mapped.” If mapping technologies operate as ontological radars–the Self’s data points help point the Self towards its own ontological location in and as data—then it is fair to say that such operations are only philosophically coherent when they are understood to be framed within the parameters outlined by recent iterations of ontological thinking and its concomitant theoretical deflation of the rich conceptual make-up that constitutes “the human.” You can map the human’s data points, but only insofar as you buy into the idea that points of data map the human. See http://manifesto.floatingsheep.org/.

    [14] “Mind/paranoia: they are the same word!” (Jarzombek 71).

    _____

    Works Cited

    • Adler, Renata. Speedboat. New York Review of Books Press, 1976.
    • Altieri, Charles. “Are We Being Materialist Yet?” symplokē 24.1-2 (2016): 241-57.
    • Calloway, Marie. what purpose did i serve in your life. Tyrant Books, 2013.
    • Chun, Wendy Hui Kyun. Updating to Remain the Same: Habitual New Media. The MIT Press, 2016.
    • Cohen, Joshua. Book of Numbers. Random House, 2015.
    • Cole, Andrew. “The Call of Things: A Critique of Object-Oriented Ontologies.” minnesota review 80 (2013): 106-118.
    • Colebrook, Claire. “Hypo-Hyper-Hapto-Neuro-Mysticism.” Parrhesia 18 (2013).
    • Day, Ronald. Indexing It All: The Subject in the Age of Documentation, Information, and Data. The MIT Press, 2014.
    • Floating Sheep Collective. “What would a floating sheep map?” http://manifesto.floatingsheep.org/.
    • Flusser, Vilém. Into the Universe of Technical Images. Translated by Nancy Ann Roth. University of Minnesota Press, 2011.
    • –––. The Surprising Phenomenon of Human Communication. 1975. Metaflux, 2016.
    • –––. Writings, edited by Andreas Ströhl. Translated by Erik Eisel. University of Minnesota Press, 2002.
    • Galloway, Alexander R. “The Poverty of Philosophy: Realism and Post-Fordism.” Critical Inquiry 39.2 (2013): 347-366.
    • Hansen, Mark B.N. Feed Forward: On the Future of Twenty-First Century Media. Duke University Press, 2015.
    • Hayles, N. Katherine. “Cognition Everywhere: The Rise of the Cognitive Nonconscious and the Costs of Consciousness.” New Literary History 45.2 (2014): 199-220.
    • –––. “The Cognitive Nonconscious: Enlarging the Mind of the Humanities.” Critical Inquiry 42 (Summer 2016): 783-808.
    • Herrnstein-Smith, Barbara. “Scientizing the Humanities: Shifts, Collisions, Negotiations.” Common Knowledge 22.3 (2016): 353-72.
    • Heti, Sheila. How Should a Person Be? Picador, 2010.
    • Hu, Tung-Hui. A Prehistory of the Cloud. The MIT Press, 2016.
    • Huehls, Mitchum. After Critique: Twenty-First Century Fiction in a Neoliberal Age. Oxford University Press, 2016.
    • Hui, Yuk. “Algorithmic Catastrophe—The Revenge of Contingency.” Parrhesia 23 (2015): 122-43.
    • Jarzombek, Mark. Digital Stockholm Syndrome in the Post-Ontological Age. University of Minnesota Press, 2016.
    • Lin, Tao. Richard Yates. Melville House, 2010.
    • –––. Taipei. Vintage, 2013.
    • McNeill, Laurie. “There Is No ‘I’ in Network: Social Networking Sites and Posthuman Auto/Biography.” Biography 35.1 (2012): 65-82.
    • Murphet, Julian. “A Modest Proposal for the Inhuman.” Modernism/Modernity 23.3 (2016): 651-70.
    • Nelson, Maggie. The Argonauts. Graywolf Press, 2015.
    • O’Gorman, Marcel. “Speculative Realism in Chains: A Love Story.” Angelaki: Journal of the Theoretical Humanities 18.1 (2013): 31-43.
    • Rosenberg, Jordana. “The Molecularization of Sexuality: On Some Primitivisms of the Present.” Theory and Event 17.2 (2014):  n.p.
    • Sheldon, Rebekah. “Dark Correlationism: Mysticism, Magic, and the New Realisms.” symplokē 24.1-2 (2016): 137-53.
    • Simanowski, Roberto. “Instant Selves: Algorithmic Autobiographies on Social Network Sites.” New German Critique 44.1 (2017): 205-216.
    • Stagg, Natasha. Surveys. Semiotext(e), 2016.
    • Wolfendale, Peter. Object Oriented Philosophy: The Noumenon’s New Clothes. Urbanomic, 2014.