boundary 2

Tag: big data

  • Zachary Loeb — Is Big Data the Message? (Review of Natasha Lushetich, ed., Big Data—A New Medium?)

    a review of Natasha Lushetich, ed. Big Data—A New Medium? (Routledge, 2021)

    by Zachary Loeb

    When discussing the digital, conversations can quickly shift towards talk of quantity. Just how many images are being uploaded every hour, how many meticulously monitored purchases are being made on a particular e-commerce platform every day, how many vehicles are being booked through a ride-sharing app at 3 p.m. on a Tuesday, how many people are streaming how many shows/movies/albums at any given time? The specific answer to the “how much?” and “how many?” will obviously vary depending upon the rest of the question, yet a fair general response across these questions would likely be some version of “a heck of a lot.” Yet from this flows another, perhaps more complicated and significant question, namely: given the massive amount of information being generated by seemingly every online activity, where does all of that information actually go, and how is it rendered usable and useful? To this the simple answer may be “big data,” but that answer in turn just serves to raise the question of what we mean by “big data.”

    “Big data” denotes the point at which data begins to be talked about in terms of scale, not merely gigabytes but zettabytes. And, to be clear, a zettabyte represents a trillion gigabytes—and big data deals in zettabytes, plural. Beyond the sheer scale of the quantity in question, considering big data “as process and product” involves a consideration of “the seven Vs: volume” (the amount of data previously generated and newly generated), “variety” (the various sorts of data being generated), “velocity” (the highly accelerated rate at which data is being generated), “variability” (the range of types of information that make up big data), “visualization” (how this data can be visually represented to a user), “value” (how much all of that data is worth, especially once it can be processed in a useful way), and “veracity” (3) (the reliability, trustworthiness, and authenticity of the data being generated). In addition to these “seven Vs” there are also the “three Hs: high dimension, high complexity, and high uncertainty” (3). Granted, “many of these terms remain debatable” (3). Big data is both “process and product” (3), and its applications range from undergirding the sorts of real-time analysis that make it possible to detect viral outbreaks as they are happening, to the directions app that suggests an alternative route before you hit traffic, to the recommendation software (be it banal or nefarious) that forecasts future behavior based on past actions.

    To the extent that discussions around the digital generally focus on the end result(s) of big data, the means remain fairly occluded both from public view and from many of the discussants. And while some have largely accepted big data as an essential aspect of our digital lives, for many others it remains highly fraught.

    As Natasha Lushetich notes, “in the arts and (digital) humanities…the use of big data remains a contentious issue not only because data architectures are increasingly determining classificatory systems in the educational, social, and medical realms, but because they reduce political and ethical questions to technical management” (4). And it is this contentiousness that is at the heart of Lushetich’s edited volume Big Data—A New Medium? (Routledge, 2021). Drawing together scholars from a variety of disciplines ranging across “the arts and (digital) humanities,” this book moves beyond an analysis of what big data is to a complex consideration of what big data could be (and may currently be in the process of becoming). In engaging with the perils and potentialities of big data, the book (as its title suggests) wrestles with the question of whether big data can be seen as constituting “a new medium.” Through engaging with big data as a medium, the contributors to the volume grapple not only with how big data “conjugates human existence” but also how it “(re)articulates time, space, the material and immaterial world, the knowable and the unknowable; how it navigates or alters, hierarchies of importance” and how it “enhances, obsolesces, retrieves and pushes to the limits of potentiality” (8). Across four sections, the contributors address big data in terms of knowledge and time, use and extraction, cultural heritage and memory, as well as people.

    “Patterning Knowledge and Time” begins with a chapter by Ingrid M. Hoofd that places big data in the broader trajectory of the university’s attempt to make the whole of the world knowable. Considering how “big data renders its object of analysis simultaneously more unknowable (or superficial) and more knowable (or deep)” (18), Hoofd’s chapter examines how big data replicates and reinforces a circularity in which what becomes legitimated as knowable is precisely that which can be known through the university’s (and big data’s) techniques. Following Hoofd, Franco “Bifo” Berardi provocatively engages with the power embedded in big data, treating it as an attempt to assert computerized control over a chaotic future by forcing it into a predictable model. Here big data is treated as a potential constraint wherein “the future is no longer a possibility, but the implementation of a logical necessity inscribed in the present” (43), as participation in society becomes bound up with making one’s self and one’s actions legible and analyzable to the very systems that enclose one’s future horizons. Shifting towards the visual and the environmental, Abelardo Gil-Fournier and Jussi Parikka consider the interweaving of images and environments, and how data impacts this interweaving. As Gil-Fournier and Parikka explore, as a result of developments in machine learning and computer vision “meteorological changes” are increasingly “not only observable but also predictable as images” (56).

    The second part of the book, “Patterning Use and Existence,” starts with Btihaj Ajana reflecting on the ways in which “surveillance technologies are now embedded in our everyday products and services” (64). By juxtaposing the biometric control of refugees with the quantified-self movement, Ajana explores the datafication of society and the differences (as well as similarities) between willing participation and forced participation in regimes of surveillance of the self. Highlighting a range of well-known gig-economy platforms (such as Uber, Deliveroo, and Amazon Mechanical Turk), Tim Christiaens examines the ways that “the speed of the platform’s algorithms exceeds the capacities of human bodies” (81). While offering a thorough critique of the inhuman speed imposed by gig-economy platforms/algorithms, Christiaens also offers a hopeful argument for the possibility that by making their software open source some of these gig platforms could “become a vehicle for social emancipation instead of machinic subjugation” (90). While aesthetic and artistic considerations appear in earlier chapters, Lonce Wyse’s chapter pushes fully into this area by looking at the ways that deep learning systems create the sorts of works of art “that, when recognized in humans, are thought of as creative” (95). Wyse provides a rich, yet succinct, examination of how these systems function while highlighting the sorts of patterns that emerge (sometimes accidentally) in the process of training them.

    At the outset of the book’s third section, “Patterning Cultural Heritage and Memory,” Craig J. Saper approaches the magazine The Smart Set as an object of analysis and proceeds to zoom in and zoom out to show what is revealed and what is obfuscated at different scales. Highlighting that “one cannot arbitrarily discount or dismiss particular types of data, big or intimate, or approaches to reading, distant or close,” Saper’s chapter demonstrates how “all scales carry intellectual weight” (124). Moving away from the academic and the artist, Nicola Horsley’s chapter reckons with the work of archivists and the ways in which their intellectual labor and the tasks of their profession have been challenged by digital shifts. Archival training teaches archivists that “the historical record, on which collective memory is based, is a process not a product” (140), a lesson archivists seek to convey in their interactions with researchers; Horsley considers the ways in which the shift away from the physical archive and towards the digital archive (wherein a researcher may never directly interact with an archivist or librarian) means this “process” risks going unseen. From the archive to the work of art, Natasha Lushetich and Masaki Fujihata’s chapter explores Fujihata’s project BeHere: The Past in the Present and how augmented reality opens up space for new artistic experience and challenges how individual memory is constructed. Through its engagement with “images obtained through data processing and digital frottage,” the BeHere project reveals “new configurations of machinically (rather than humanly) perceived existents” and thus can “shed light on that which eludes the (naked) human eye” (151).

    The fourth and final section of the volume begins with Dominic Smith’s exploration of the aesthetics of big data. While referring back to the “seven Vs” of big data, Smith argues that to imagine big data as a “new medium” requires considering “how we make sense of data” with regard to both “how we produce it” and “how we perceive it” (164), a matter Smith explores through an analysis of the “surfaces and depths” of oceanic images. Though big data is closely connected with sheer scale (hence the “big”), Mitra Azar observes that “it is never enough as it is always possible to generate new data and make more comprehensive data sets” (180). Tangling with this in a visual register, Azar contrasts the cinematic point of view with that of the big-data-enabled “data double” of the individual (which is meant to stand in for that user). Considering several of his own artistic installations—Babel, Dark Matter, and Heteropticon—Simon Biggs examines the ways in which big data reveals “the everyday and trivial and how it offers insights into the dense ambient noise that is our daily lives” (192). In contrast to treating big data as a revelator of the sublime, Biggs discusses big data’s capacity to show “the infra-ordinary” and the value of seemingly banal daily details. The book concludes with Warren Neidich’s speculative gaze towards what the future of big data might portend, couched in a belief that “we are at the beginning of a transition from knowledge-based economics to a neural or brain-based economy” (207). Surveying current big data technologies and the trajectories they may suggest, Neidich forecasts “a gradual accumulation of telepathic technoceuticals” such that “at some moment a critical point might be reached when telepathy could become a necessary skill for successful adaptation…similar to being able to read in today’s society” (218).

    In the introduction to the book, Natasha Lushetich grounds the discussion in a recognition that “it is also important to ask how big data (re)articulates time, space, the material and immaterial world, the knowable and the unknowable; how it navigates or alters, hierarchies of importance” (8), and over the course of this fascinating and challenging volume, the many contributors do just that.

    ***

    The term big data captures the way in which massive troves of digitally sourced information are made legible and understandable. Yet one of the challenges of discussing big data is trying to figure out a way to make big data itself legible and understandable. In discussions around the digital, big data is often gestured at rather obliquely as the way to explain a lot of mysterious technological activity in the background. We may not find ourselves capable, for a variety of reasons, of prying open the various black boxes of a host of different digital systems, but stamped in large letters on the outside of those boxes are the words “big data.” When shopping online or using a particular app, a user may be aware that the information being gathered from their activities is feeding into big data and that the recommendations being promoted to them come courtesy of the same. Or they may be obliquely aware that there is some sort of connection between the mystery-shrouded algorithms and big data. Or the very evocation of “big,” when twinned with a recognition of surveillance technologies, may serve as a discomforting reminder of “big brother.” Or “big data” might simply sound like a non-existent episode of Star Trek: The Next Generation in which Lieutenant Commander Data is somehow turned into a giant. All of which is to say that though big data is not a new matter, the question of how to think about it (which is not the same as how to use and be used by it) remains a challenging one.

    With Big Data—A New Medium?, Natasha Lushetich has assembled an impressive group of thinkers to engage with big data in a novel way. By raising the question of big data as “a new medium,” the contributors shift the discussion away from considerations focused on surveillance and algorithms to wrestle with the ways that big data might be similar to, and distinct from, other mediums. While this shift does not represent a rejection of, or a move to ignore, important matters like surveillance, the focus on big data as a medium raises a different set of questions. What are the aesthetics of big data? As a medium, what are the affordances of big data? And what does it mean for other mediums that in the digital era so many of them are themselves being subsumed by big data? After all, many of the older mediums that theorists have grown so accustomed to discussing have undergone some not insignificant changes as a result of big data. And yet to engage with big data as a medium also opens up a potential space for engaging with big data that does not treat it as being wholly captured and controlled by large tech firms.

    The contributors to the volume do not seem to be fully in agreement with one another about whether big data represents poison or panacea, but the chapters are clearly speaking to one another instead of shouting over each other. Some contributions to the book, notably Berardi’s, with its evocation of a “new century suspended between two opposite polarities: chaos and automaton” (44), seem a bit more pessimistic. Other contributors, such as Christiaens, engage with the unsavory realities of contemporary data-gathering regimes but envision the ways that these can be repurposed to serve users instead of large companies. And such optimistic and pessimistic assessments come up against multiple contributions that eschew such positive/negative framings in favor of an artistically minded aesthetic engagement with what it means to treat big data as a medium for the creation of works of art. Taken together, the chapters in the book provide a wide-ranging assessment of big data, one which is grounded in larger discussions around matters such as surveillance and algorithmic bias, but which pushes readers to think of big data beyond those established frameworks.

    As an edited volume, one of the major strengths of Big Data—A New Medium? is the way it brings together perspectives from such a variety of fields and specialties. As part of Routledge’s “studies in science, technology, and society” series, the volume demonstrates the sort of interdisciplinary mixing that makes STS such a vital space for discussions of the digital. Granted, this interdisciplinary richness can be as much burden as benefit: some readers will wish there had been slightly more representation of their particular subfield, or that the scholarly techniques of a given discipline had seen greater use. Case in point: Horsley’s contribution will be of great interest to those approaching this book from the world of libraries and archives (and information schools more generally), and some of those same readers will wish that other chapters in the book had been equally attentive to the work done by archive professionals. Similarly, those who approach the book from fields more grounded in historical techniques may wish that more of the authors had spent time engaging with “how we got here” instead of focusing so heavily on the exploration of the present and the possible future. Of course, these are perennial challenges with edited interdisciplinary volumes, and it is a major credit to Lushetich as an editor that this volume provides readers from so many different backgrounds with so much to mull over. Beyond presenting numerous perspectives on the titular question, the book is also an invitation to artists and academics to join in discussion about that question.

    Those who are broadly interested in discussions around big data will find much of significance in this volume, and will likely find their own thinking pushed in novel directions. That being said, this book will likely be most productively read by those who are already somewhat conversant in debates around big data, the digital humanities, the arts, and STS more generally. While the contributors are consistently careful to clearly define their terms and reference the theorists from whom they are drawing, from Benjamin to Foucault to Baudrillard to Marx to Deleuze and Guattari (to name but a few), they couch much of their commentary in theory, and a reader of this volume will be best able to engage with these chapters if they have at least some passing familiarity with those theorists. Many of the contributors are also clearly engaging with arguments made by Shoshana Zuboff in The Age of Surveillance Capitalism, and this book can be very productively read as critique of, and complement to, Zuboff’s tome. Academics in and around STS, and artists who incorporate the digital into their practice, will find that this book makes a worthwhile intervention into current discourse around big data. And though the book seems to assume a fairly academically engaged readership, it will certainly work well in graduate seminars (or advanced undergraduate classrooms)—many of the chapters stand quite well on their own, though much of the book’s strength is in the way the chapters work in tandem.

    One of the claims that is frequently made about big data is that—for better or worse—it will allow us to see the world from a fresh perspective. And what Big Data—A New Medium? does is allow us to see big data itself from a fresh perspective.

    _____

    Zachary Loeb earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently a PhD candidate in the History and Sociology of Science department at the University of Pennsylvania. Loeb works at the intersection of the history of technology and disaster studies, and his research focuses on the ways that complex technological systems amplify risk, as well as the history of technological doom-saying. He is working on a dissertation on Y2K. Loeb writes at the blog Librarianshipwreck, and is a frequent contributor to The b2o Review Digital Studies section.

  • Alexander R. Galloway — Big Bro (Review of Wendy Hui Kyong Chun, Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition)

    a review of Wendy Hui Kyong Chun, Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition (MIT Press, 2021)

    by Alexander R. Galloway

    I remember snickering when Chris Anderson announced “The End of Theory” in 2008. Writing in Wired magazine, Anderson claimed that the structure of knowledge had inverted. It wasn’t that models and principles revealed the facts of the world, but the reverse, that the data of the world spoke their truth unassisted. Given that data were already correlated, Anderson argued, what mattered was to extract existing structures of meaning, not to pursue some deeper cause. Anderson’s simple conclusion was that “correlation supersedes causation…correlation is enough.”

    This hypothesis — that correlation is enough — is the thorny little nexus at the heart of Wendy Chun’s new book, Discriminating Data. Chun’s topic is data analytics, a hard target that she tackles with technical sophistication and rhetorical flair. Focusing on data-driven tech like social media, search, consumer tracking, AI, and many other things, her task is to exhume the prehistory of correlation, and to show that the new epistemology of correlation is not liberating at all, but instead a kind of curse recalling the worst ghosts of the modern age. As Chun concludes, even amid the precarious fluidity of hyper-capitalism, power operates through likeness, similarity, and correlated identity.

    While interleaved with a number of divergent polemics throughout, the book focuses on four main themes: correlation, discrimination, authentication, and recognition. Chun deals with these four as general problems in society and culture, but also, interestingly, as specific scientific techniques. For instance, correlation has a particular mathematical meaning as well as a philosophical one. Discrimination is a social pathology, but it’s also integral to discrete rationality. I appreciated Chun’s attention to details large and small; she’s writing about big ideas — essence, identity, love and hate, what does it mean to live together? — but she’s also engaging directly with statistics, probability, clustering algorithms, and all the minutiae of data science.

    In crude terms, Chun rejects the — how best to call it? — “anarcho-materialist” turn in theory, typified by someone like Gilles Deleuze, where disciplinary power gave way to distributed rhizomes, schizophrenic subjects, and irrepressible lines of flight. Chun’s theory of power isn’t so much about tessellated tapestries of desiring machines as it is about the more strictly structuralist concerns of norm and discipline, sovereign and subject, dominant and subdominant. Big tech is the mechanism through which power operates today, Chun argues. And today’s power is racist, misogynist, repressive, and exclusionary. Power doesn’t incite desire so much as stifle and discipline it. In other words, George Orwell’s old grey-state villain, Big Brother, never vanished. He just migrated into a new villain, Big Bro, embodied by tech billionaires like Mark Zuckerberg or Larry Page.

    But what are the origins of this new kind of data-driven power? The reader learns that correlation and homophily, or “the notion that birds of a feather naturally flock together” (23), not only subtend contemporary social media platforms like Facebook, but were in fact originally developed by eugenicists like Francis Galton and Karl Pearson. “British eugenicists developed correlation and linear regression” (59), Chun notes dryly, before reminding us that these two techniques are at the core of today’s data science. “When correlation works, it does so by making the present and future coincide with a highly curated past” (52). Or as she puts it insightfully elsewhere, data science doesn’t so much anticipate the future as predict the past.

    If correlation (pairing two or more pieces of data) is the first step of this new epistemological regime, it is quickly followed by some additional steps. After correlation comes discrimination, where correlated data are separated from other data (and indeed internally separated from themselves). This entails the introduction of a norm. Discriminated data are not simply data that have been paired, but measurements plotted along an axis of comparison. One data point may fall within a normal distribution, while another strays outside the norm within a zone of anomaly. Here Chun focuses on “homophily” (love of the same), writing that homophily “introduces normativity within a supposedly nonnormative system” (96).

    The third and fourth moments in Chun’s structural condition, tagged as “authenticity” and “recognition,” complete the narrative. Once groups are defined via discrimination, they are authenticated as a positive group identity, then ultimately recognized, or we could say self-recognized, by reversing the outward-facing discriminatory force into an inward-facing act of identification. It’s a complex libidinal economy that Chun patiently elaborates over four long chapters, linking these structural moments to specific technologies and techniques such as Bayes’ theorem, clustering algorithms, and facial recognition technology.

    A number of potential paths emerge in the wake of Chun’s work on correlation, which we will briefly mention in passing. One path would be toward Shane Denson’s recent volume, Discorrelated Images, on the loss of correlated experience in media aesthetics. Another would be to collide Chun’s critique of correlation in data science with Quentin Meillassoux’s critique of correlation in philosophy, notwithstanding the significant differences between their two projects.

    Correlation, discrimination, authentication, and recognition are the manifest contents of the book as it unfolds page by page. At the same time Chun puts forward a few meta arguments that span the text as a whole. The first is about difference and the second is about history. In both, Chun reveals herself as a metaphysician and moralist of the highest order.

    First Chun picks up a refrain familiar to feminism and anti-racist theory, that of erasure, forgetting, and ignorance. Marginalized people are erased from the archive; women are silenced; a subject’s embodiment is ignored. Chun offers an appealing catch phrase for this operation, “hopeful ignorance.” Many people in power hope that by ignoring difference they can overcome it. Or as Chun puts it, they “assume that the best way to fight abuse and oppression is by ignoring difference and discrimination” (2). Indeed this posture has long been central to political liberalism, for instance in John Rawls’s derivation of justice via a “veil of ignorance.” For Chun the attempt to find an unmarked category of subjectivity — through that frequently contested pronoun “we” — will perforce erase and exclude those structurally denied access to the universal. “[John Perry] Barlow’s ‘we’ erased so many people,” Chun notes in dismay. “McLuhan’s ‘we’ excludes most of humanity” (9, 15). This is the primary crime for Chun, forgetting or ignoring the racialized and gendered body. (In her last book, Updating to Remain the Same, Chun reprinted a parody of a well-known New Yorker cartoon bearing the caption “On the Internet, nobody knows you’re a dog.” The posture of ignorance, of “nobody knowing,” was thoroughly critiqued by Chun in that book, even as it continues to be defended by liberals.)

    Yet if the first crime against difference is to forget the mark, the second crime is to enforce it, to mince and chop people into segregated groups. After all, data is designed to discriminate, as Chun takes the better part of her book to elaborate. These are engines of difference and it’s no coincidence that Charles Babbage called his early calculating machine a “Difference Engine.” Data is designed to segregate, to cluster, to group, to split and mark people into micro identities. We might label this “bad” difference. Bad difference is when the naturally occurring multiplicity of the world is canalized into clans and cliques, leveraged for the machinations of power rather than the real experience of people.

    To complete the triad, Chun has proposed a kind of “good” difference. For Chun authentic life is rooted in difference, often found through marginalized experience. Her muse is “a world that resonates with and in difference” (3). She writes about “the needs and concerns of black women” (49). She attends to “those whom the archive seeks to forget” (237). Good difference is intersectional. Good difference attends to identity politics and the complexities of collective experience.

    Bad, bad, good — this is a triad, but not a dialectical one. Begin with 1) the bad tech posture of ignoring difference; followed by 2) the worse tech posture of specifying difference in granular detail; contrasted with 3) a good life that “resonates with and in difference.” I say “not dialectical” because the triad documents difference changing position rather than the position of difference changing (to paraphrase Catherine Malabou from her book on Changing Difference). Is bad difference resolved by good difference? How to tell the difference? For this reason I suggest we consider Discriminating Data as a moral tale — although I suspect Chun would balk at that adjective — because everything hinges on a difference between the good and the bad.

    Chun’s argument about good and bad difference is related to an argument about history, revealed through what she terms the “Transgressive Hypothesis.” I was captivated by this section of the book. It connects to a number of debates happening today in both theory and culture at large. Her argument about history has two distinct waves, and, following the contradictory convolutions of history, the second wave reverses and inverts the first.

    Loosely inspired by Michel Foucault’s Repressive Hypothesis, Chun’s Transgressive Hypothesis initially describes a shift in society and culture roughly coinciding with the Baby Boom generation in the late Twentieth Century. Let’s call it the 1968 mindset. Reacting to the oppressions of patriarchy, the grey-state threats of centralized bureaucracy, and the totalitarian menace of “Nazi eugenics and Stalinism,” liberation was found through “‘authentic transgression’” via “individualism and rebellion” (76). This was the time of the alternative, of the outsider, of the nonconformist, of the anti-authoritarian, the time of “thinking different.” Here being “alt” meant being left, albeit a new kind of left.

    Chun summons a familiar reference to make her point: the Apple Macintosh advertisement from 1984 directed by Ridley Scott, in which a scary Big Brother is dethroned by a colorful lady jogger brandishing a sledgehammer. “Resist, resist, resist,” is how Chun puts the mantra. “To transgress…was to be free” (76). Join the resistance, unplug, blow your mind on red pills. Indeed the existential choice from The Matrix — blue pill for a life of slavery mollified by ignorance, red pill for enlightenment and militancy tempered by mortal danger — acts as a refrain throughout Chun’s book. In sum, the Transgressive Hypothesis “equated democracy with nonnormative structures and behaviors” (76). To live a good life was to transgress.

    But this all changed in 1984, or thereabouts. Chun describes a “reverse hegemony” — a lovely phrase that she uses only twice — where “complaints against the ‘mainstream’ have become ‘mainstreamed’” (242). Power operates through reverse hegemony, she claims: “The point is never to be a ‘normie’ even as you form a norm” (34). These are the consequences of the rise of neoliberalism, fake corporate multiculturalism, Ronald Reagan and Margaret Thatcher but even more so Bill Clinton and Tony Blair. Think postfordism and postmodernism. Think long tails and the multiplicity of the digital economy. Think woke-washing at the CIA and Spike Lee shilling cryptocurrency. Think Hypernormalization, New Spirit of Capitalism, Theory of the Young Girl, To Live and Think Like Pigs. Complaints against the mainstream have become mainstreamed. And if power today has shifted “left,” then — Reverse Hegemony Brain go brrr — resistance to power shifts “right.” A generation ago the Q Shaman would have been a leftwing nut nattering about the Kennedy assassination. But today he’s a rightwing nut (alas, still nattering about the Kennedy assassination).

    “Red pill toxicity” (29) is how Chun characterizes the responses to this new topsy-turvy world of reverse hegemony. (To be sure, she’s only the latest critic weighing in on the history of the present; other well-known accounts include Angela Nagle’s 2017 book Kill All Normies, and Mark Fisher’s notorious 2013 essay “Exiting the Vampire Castle.”) And if libs, hippies, and anarchists had become the new dominant, the election of Donald Trump showed that “populism, paranoia, polarization” (77) could also reemerge as a kind of throwback to the worst political ideologies of the Twentieth Century. With Trump the revolutions of history — ironically, unstoppably — return to where they began, in “the totalitarian world view” (77).

    In other words, these self-styled rebels never actually disrupted anything, according to Chun. At best they used disruption as a kind of ideological distraction for the same kinds of disciplinary management structures that have existed since time immemorial. And if Foucault showed that nineteenth-century repression also entailed an incitement to discourse, Chun describes how twentieth-century transgression also entailed a novel form of management. Before it was “you thought you were repressed but in fact you’re endlessly sublating and expressing.” Now it’s “you thought you were a rebel but disruption is a standard tactic of the Professional Managerial Class.” Or as Jacques Lacan said in response to some young agitators in his seminar, vous voulez un maître, vous l’aurez. Slavoj Žižek’s rendering, slightly embellished, best captures the gist: “As hysterics, you demand a new master. You will get it!”

    I doubt Chun would embrace the word “hysteric,” a term indelibly marked by misogyny, but I wish she would, since hysteria is crucial to her Transgressive Hypothesis. In psychoanalysis, the hysteric is the one who refuses authority, endlessly and irrationally. And bless them for that; we need more hysterics in these dark times. Yet the lesson from Lacan and Žižek is not so much that the hysteric will conjure up a new master out of thin air. In a certain sense, the lesson is the reverse, that the Big Other doesn’t exist, that Big Brother himself is a kind of hysteric, that power is the very power that refuses power.

    This position makes sense, but not completely. As a recovering Deleuzian, I am indelibly marked by a kind of antinomian political theory that defines power as already heterogeneous, unlawful, multiple, anarchic, and material. However, I am also persuaded by Chun’s more classical posture, where power is a question of sovereign fiat, homogeneity, the central and the singular, the violence of the arche, which works through enclosure, normalization, and discipline. Faced with this type of power, Chun’s conclusion is, if I can compress a hefty book into a single writ, that difference will save us from normalization. In other words, while Chun is critical of the Transgressive Hypothesis, she ends up favoring the Big-Brother theory of power, where authentic alternatives escape repressive norms.

    I’ll admit it’s a seductive story. Who doesn’t want to believe in outsiders and heroes winning against oppressive villains? And the story is especially appropriate for the themes of Discriminating Data: data science of course entails norms and deviations; but also, in a less obvious way, data science inherits the old anxieties of skeptical empiricism, where the desire to make a general claim is always undercut by an inability to ground generality.

    Yet I suspect her political posture relies a bit too heavily on the first half of the Transgressive Hypothesis, the 1984 narrative of difference contra norm, even as she acknowledges the second half of the narrative where difference became a revanchist weapon for big tech (to say nothing of difference as a bona fide management style). This leads to some interesting inconsistencies. For instance Chun notes that Apple’s 1984 hammer thrower is a white woman disrupting an audience of white men. But she doesn’t say much else about her being a woman, or about the rainbow flag that ends the commercial. The Transgressive Hypothesis might be the quintessential tech bro narrative but it’s also the narrative of feminism, queerness, and the new left more generally. Chun avoids claiming that feminism failed; but she’s also savvy enough to avoid saying that it succeeded. And if Sadie Plant once wrote that “cybernetics is feminization,” for Chun it’s not so clear. According to Chun the cybernetic age of computers, data, and ubiquitous networks still orients around structures of normalization: masculine, white, straight, affluent and able-bodied. Resistant to such regimes of normativity, Chun must nevertheless invent a way to resist those who were resisting normativity.

    Regardless, for Chun the conclusion is clear: these hysterics got their new master. If not immediately they got it eventually, via the advent of Web 2.0 and the new kind of data-centric capitalism invented in the early 2000s. Correlation isn’t enough, and here is why: correlation means the forming of a general relation, if only the most minimal generality of two paired data points. And, worse, correlation’s generality will always derive from past power and organization rather than from a reimagining of the present. Hence correlation for Chun is a type of structural pessimism, in that it will necessarily erase and exclude those denied access to the general relation.
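    Chun’s point about correlation can be made concrete with a minimal sketch (an illustration of the general idea, not anything from the book): Pearson’s r, the workhorse measure of correlation, is computed entirely from pairs of already-recorded observations, so whatever generality it yields is a distillation of past data.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson's r: a general relation formed entirely from paired past observations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two variables that moved together in the recorded past correlate
# perfectly -- the number says nothing about what happens next.
print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```

    Any “prediction” built on r is thus a projection of recorded history forward, which is precisely the structural conservatism Chun identifies.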

    With characteristic narrative poignancy and attention to the ideological conditions of everyday life, Chun highlights alternative relations that might replace the pessimism of correlation. Such alternatives might take the form of a “potential history” or a “critical fabulation,” phrases borrowed from Ariella Azoulay and Saidiya Hartman, respectively. For Azoulay potential history means to “‘give an account of diverse worlds that persist’”; for Hartman, critical fabulation means “to see beyond numbers and sources” (79). A slim offering of only a few pages, these references to Azoulay and Hartman nevertheless indicate an appealing alternative for Chun, and she ends her book where it began, with an eloquent call to acknowledge “a world that resonates with and in difference.”

    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2006), Gaming: Essays in Algorithmic Culture (University of Minnesota, 2006); The Interface Effect (Polity, 2012), Laruelle: Against the Digital (University of Minnesota, 2014), and most recently, Uncomputable: Play and Politics in the Long Digital Age (Verso, 2021).


  • Chris Gilliard and Hugh Culik — The New Pythagoreans


    Chris Gilliard and Hugh Culik

    A student’s initiation into mathematics routinely includes an encounter with the Pythagorean Theorem, a simple statement that describes the relationship between the hypotenuse and sides of a right triangle: the sum of the squares of the sides is equal to the square of the hypotenuse, i.e., A² + B² = C². The statement and its companion figure of a generic right triangle are offered as an interchangeable, seamless flow between geometric “things” and numbers (Kline 1980, 11). Among all the available theorems that might be offered as emblematic of mathematics, this one is held out as illustrative of a larger claim about mathematics and the Real. This use suggests that it is what W. J. T. Mitchell would call a “hypericon,” a visual paradigm that doesn’t “merely serve as [an] illustration to theory; [it] picture[s] theory” (1995, 49). Understood in this sense, the Pythagorean Theorem asserts a central belief of Western culture: that mathematics is the voice of an extra-human realm, a realm of fundamental, unchanging truth apart from human experience, culture, or biology. It is understood as more essential than the world and as prior to it. Mathematics becomes an outlier among representational systems because numbers are claimed to be “ideal forms necessarily prior to the material ‘instances’ and ‘examples’ that are supposed to illustrate them and provide their content” (Rotman 2000, 147).[1] The dynamic flow between the figure of the right triangle and the formula transforms mathematical language into something akin to Christian concepts of a prelapsarian language, a “nomenclature of essences, in which word would have reflected thing with perfect accuracy” (Eagle 2007, 184). As the Pythagoreans styled it, the world is number (Guthrie 1962, 256). The image schools the child into the culture’s uncritical faith in the rhetoric of numbers, a sort of everyman’s version of the Pythagorean vision.
Whatever the general belief in this notion, the nature of mathematical representations has been a central problematic of mathematics that appears throughout its history. The difference between the historical significance of this problematic and its current manifestation in the rhetoric of “Big Data” illustrates an important cultural anxiety.
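The seamless figure-to-number flow described above can be stated in a single display; the 3-4-5 triangle, the theorem’s most familiar integer instance, is added here purely as illustration:

```latex
% The theorem, and its most familiar integer instance (the 3-4-5 right triangle):
\[
  a^{2} + b^{2} = c^{2},
  \qquad\text{e.g.}\quad
  3^{2} + 4^{2} = 9 + 16 = 25 = 5^{2}.
\]
```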

    Contemporary culture uses the Pythagorean Theorem’s image and formula as a hypericon that not only obscures problematic assumptions about the consistency and completeness of mathematics, but which also misrepresents the consistency and completeness of the material-world relationships that mathematics is used to describe.[2] This rhetoric of certainty, consistency, and completeness continues to infect contemporary political and ideological claims. For example, “Big Data” enthusiasts – venture capitalists, politicians, financiers, education reformers, policing strategists, et al. – often invoke a neo-Pythagorean worldview to validate their claims, claims that rest on the interplay of technology, analysis, and mythology (Boyd and Crawford 2012, 663). What is a highly productive problematic in the 2,500-year history of mathematics disappears into naïve assertions about the inherent “truth” of the algorithmic outputs of mathematically based technologies. When corporate behemoths like Pearson and Knewton (makers of an adaptive learning platform) participate in events such as the Department of Education’s 2012 “Datapalooza,” the claims become totalizing. Knewton’s CEO, Jose Ferreira, asserts, in a crescendo of claims, that “Knewton gets 5-10 million actionable data points per student per day”; and that tagging content “unlocks data.” In his terms, “work cascades out data” that is then subject to the various models the corporation uses to predict and prescribe the future. His claims of descriptive completeness are correct, he asserts, because “everything in education is correlated to everything else” (November 2012). The narrative of Ferreira’s claims is couched in fluid equivalences of data points, mathematical models, and a knowable future. Data become a metonym for not only the real student, but for the nature of learning and human cognition. 
In a sort of secularized predestination, the future’s origin in perfectly representational numbers produces perfect predictions of students’ performance. Whatever the scale of the investment dollars behind these New Pythagoreans, such claims lose their patina of objective certainty when placed in the history of the West’s struggle with mathematized claims about a putative “real.” For them, predictions are not the outcomes of processes; rather, predictions are revelations of a deterministic reality.[3]

    A recent claim for a facial-recognition algorithm that identifies criminals normalizes its claims by simultaneously asserting and denying that “in all cultures and all periods of recorded human history, [there is] the belief that the face alone suffices to reveal innate traits of a person” (Wu and Zhang 2016, 1). The authors invoke the Greeks:

    Aristotle in his famous work Prior Analytics asserted, ‘It is possible to infer character from features, if it is granted that the body and the soul are changed together by the natural affections’ (1)

    The authors then remind readers that “the same question has captivated professionals (e.g., psychologists, sociologists, criminologists) and amateurs alike, across all cultures, and for as long as there are notions of law and crime. Intuitive speculations are abundant both in writing . . . and folklore.” Their work seeks to demonstrate that the question yields to a mathematical model, a model that is specifically a non-human intelligence: “In this section, we try to answer the question in the most mechanical and scientific way allowed by the available tools and data. The approach is to let a machine learning method explore the data and reveal the most discriminating facial features that tell apart criminals and non-criminals” (6). The rhetoric solves the problem by asserting an unchanging phenomenon – the criminal face – by invoking a mathematics that operates via machine learning. Problematic crimes such as “DWB” (driving while black) disappear along with history and social context.

    Such claims rest on confused and contradictory notions. For the Pythagoreans, mathematics was not a representational system. It was the real, a reality prior to human experience. This claim underlies the authority of mathematics in the West. But simultaneously, it effectively operates as a response to the world, i.e., it is a re-presentation. As re-presentational, it becomes another language, and like other languages, it is founded on bias, exclusions, and incompleteness. These two notions of mathematics are resolved by seeing the representation as more “real” than the multiply determined events it re-presents. Nonetheless, once we say it re-presents the real, it becomes just another sign system that comes after the real. Often, bouncing back and forth between its extra-human status and its representational function obscures the places where representation fails or becomes an approximation. To data fetishists, “data” has a status analogous to that of “number” in the Pythagorean’s world. For them, reality is embedded in a quasi-mathematical system of counting, measuring, and tagging. But the ideological underpinnings, pedagogical assumptions, and political purposes of the tagging go unremarked; to do so would problematize the representational claims. Because the world is number, coders are removed from the burden of history and from the responsibility to examine the social context that both creates and uses their work.

    The confluence of corporate and political forces validates itself through mathematical imagery, animated graphics, and the like. Terms such as “data-driven” and “evidence-based” grant the rhetoric of numbers a power that ignores its problematic assumptions. There is a pervasive refusal to recognize that data are artifacts of the descriptive categories imposed on the world. But “Big Data” goes further; the term is used in ways that perpetuate the antique notion of “number” by invoking numbers as distillations of certainty and a knowable universe. “Number” becomes decontextualized and stripped of its historical, social, and psychological origins. Because the claims of Big Data embed residual notions about the re-presentational power of numbers, and about mathematical completeness and consistency, they speak to such deeply embedded beliefs about mathematics, the most fundamental of which is the Pythagorean claim that the world is number. The point is not to argue whether mathematics is formal, referential, or psychological; rather, it is to place contemporary claims about “Big Data” in historical and cultural contexts where such issues are problematized. The claims of Big Data speak through a language whose power rests on longstanding notions of mathematics; however, these notions lose some of their power when placed in the context of mathematical invention (Rotman 2000, 4-7).

    “Big Data” represents a point of convergence for residual mathematical beliefs, beliefs that obscure cultural frameworks and thus interfere with critique. For example, predictive policing tools are claimed to produce neutral, descriptive acts using machine intelligence. Berk asserts that “if you let the computer just snoop around in the dataset, it finds things that are unexpected by existing theory and works really substantially well to help forecast” (Berk 2011). In this view, Big Data – the numerical real – can be queried to produce knowledge that is not driven by any theoretical or ideological interest. Precisely because the world is presumed to be mathematical, the political, economic, and cultural frameworks of its operation can become the responsibility of the algorithm’s users. To this version of a mathematized real, there is no inherently ethical algorithmic action prior to the use of its output. Thus, the operation of the algorithm is doubly separated from its social contexts. First, the mathematics themselves are conceived as autonomous embodiments of a reality independent of the human; second, the effects of the algorithm – its predictions – are apart from values, beliefs, and needs that create the algorithm. The specific limits of historical and social context do not mathematically matter; the limits are determined by the values and beliefs of the algorithm’s users. The problematics of mathematizing the world are passed off to its customers. Boyd and Crawford identify three interacting phenomena that create the notion of Big Data: technology, analysis, and mythology (2012, 663). The mythological element embodies both dystopian and utopian narratives, and thus shapes how we categorize reality. O’Neil notes that “these models are constructed not just from data but from the choices we make about which data to pay attention to – and which to leave out. Those choices are not just about logistics, profits, and efficiency.
They are fundamentally moral” (2016, 218). On one hand, the predictive value depends on the moral, ethical, and political values of the user, a non-mathematical question. On the other hand, this division between the model and its application carves out a special arena where the New Pythagoreans claim that it operates without having to recognize social or historical contexts.

    Whatever their commitment to number, the Pythagoreans were keenly aware that their system was vulnerable to discoveries that problematized their basic claim that the world is number. And they protected their beliefs through secrecy and occasionally through violence. Like the proprietary algorithms of contemporary corporations, their work was reserved for a circle of adepts/owners. First among their secrets was the keen understanding that an unnamable point on the number line would represent a rupture in the relationship of mathematics and world. If that relationship failed, with it would go their basis for belief in a knowable world. Their claims arose from within the concrete practices of Greek mathematics. For example, the Greeks portrayed numbers by a series of dots called Monads. The complex ratios used to describe geometric figures were understood to generate the world, and numbers were visualized in arrangements of stones (calculi). A 2 x 2 arrangement of stones had the form of a square, hence the term “square numbers.” Thus, it was a foundational claim that any point or quantity (because monads were conceived as material objects) have a corresponding number. Line segments, circumferences, and all the rest had to correspond to what we still call the “rational numbers”: 1, 2, 3 . . . and their ratios. Thus, the Pythagoreans’ great claim – that the world is number – was vulnerable to the discovery of a point on the number line that could not be named as the ratio of integers.

    Unfortunately for their claim, such numbers are common, and the great irony of the Pythagorean Theorem lies in the fact that it routinely generates numbers that are not ratios of integers. For example, a right triangle with sides one-unit long has a hypotenuse √2 units long (1² + 1² = C², i.e., 2 = C², i.e., C = √2). Numbers such as √2 contradict the mathematical aspiration toward a completely representational system because they cannot be expressed as a ratio of integers, and hence their status as what are called “ir-rational” numbers.[4] A relatively simple proof demonstrates that any integer ratio equal to √2 would require a number that is both even and odd; these numbers exist in what is called a “surd” relationship to the integers, that is, they are silent – the meaning of “surd” – about each other. They literally cannot “speak” to each other. To the Pythagoreans, this appeared as a discontinuity in their naming system, a gap that might be the mark of a world beyond the generative power of number. Such numbers are, in fact, a new order of naming precipitated by the limited representational power of the prior naming system based on the rational numbers. But for the Pythagoreans, to look upon these numbers was to look upon the void, to discover that the world had no intrinsic order. Irrational numbers disrupted the Pythagorean project of mathematizing reality. This deeply religious impulse toward order underlies the aspiration that motivates the bizarre and desperate terminologies of contemporary data fetishists: “data-driven,” “evidence-based,” and even “Big Data,” which is usually capitalized to show the reification of number it desires.
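The “relatively simple proof” mentioned above is short enough to reproduce; this is the standard modern reconstruction (not the Greeks’ own geometric version) of why √2 admits no integer ratio:

```latex
% Sketch: assume sqrt(2) = p/q in lowest terms and derive a parity contradiction.
\[
\sqrt{2} = \frac{p}{q}
\;\Rightarrow\; p^{2} = 2q^{2}
\;\Rightarrow\; p \text{ even, say } p = 2k
\;\Rightarrow\; 4k^{2} = 2q^{2}
\;\Rightarrow\; q^{2} = 2k^{2}
\;\Rightarrow\; q \text{ even.}
\]
```

Both p and q turn out even, contradicting the assumption that the ratio was in lowest terms; side and diagonal are incommensurable.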

    Big Data appeals to a mathematical nostalgia for certainty that cannot be sustained in contemporary culture. O’Neil provides careful examples of how history, social context, and the data chosen for algorithmic manipulation do not – indeed cannot – matter in this neo-Pythagorean world. Like Latour, she historicizes the practices and objects that the culture pretends are natural. The ideological and political nature of the input becomes invisible, especially when algorithms are granted special proprietary status that converts them to what Pasquale calls a “black box” (2016). It is a problematic claim, but it can be made without consequence because it speaks in the language of an ancient mathematical philosophy still heard in our culture,[5] especially in education where the multifoliate realities of art, music, and critical writing are quashed by forces such as the Core Curriculum and its pervasive valorization of standardization. Such strategies operate in fear of the inconsistency and incompleteness of any representational relationship, a fear of epistemological silence that has lurked in the background of Western mathematics from its beginnings. To the Greeks, the irrationals represented a sort of mathematical aphasia. The irrational numbers such as √2 thus obtained emblematic values far beyond their mathematical ones. They inserted an irremediable gap between the world and the “word” of mathematics. Such knowledge was catastrophic – adepts were murdered for revealing the incommensurability of side and diagonal.[6] More importantly, the discovery deeply fractured mathematics itself. The gap in the naming system split mathematics into algebra (numerical) and geometry (spatial), a division that persisted for almost 2,000 years. Little wonder that the Greeks restricted geometry to measurements that were not numerical, but rather were produced through the use of a straightedge and compass. 
Physical measurement by line segments and circles rather than by a numerical length effectively sidestepped the threat posed by the irrational numbers. Kline notes, “The conversion of all of mathematics except the theory of whole numbers into geometry . . . forced a sharp separation between number and geometry . . . at least until 1600” (1980, 105). Once we recognize that the Pythagorean theorem is a hypericon, i.e., a visual paradigm that pictures theory, we begin to see its extension into other fundamental mathematical “discoveries” such as Descartes’s creation of coordinate geometry. A deep anxiety about the gap between word and world is manifested in mathematics as well as in contemporary claims about “Big Data.”

    The division between numerical algebra and spatial geometry remained a durable feature of Western mathematics until problematized by social change. Geometry offered an elegant axiomatic system that satisfied the hierarchical impulse of the culture, and it worked in concert with the Aristotelian logic that dominated notions of truth. The Aristotelian nous and the Euclidian axioms seemed similar in ways that justified the hierarchical structure of the church and of traditional politics. They were part of a social fabric that bespoke an extra-human order that could be dis-covered. But with the rise of commercial culture came the need for careful records, computations, risk assessments, interest calculations, and other algebraic operations. The tension between algebra and geometry became more acute and visible. It was in this new cultural setting that Descartes’s work appeared. Descartes’s 1637 publication of La Géométrie confronted the terrors revealed in the irrationals embodied in the geometry/algebra divide by subordinating both algebra and geometry to a more abstract relationship. Turchin notes that Descartes re-unified geometry and arithmetic not by granting either priority or reducing either to the other; rather, in his language “the symbols do not designate number or quantities, but relations of quantities” (Turchin 1977, 196).

    Rotman directly links concepts of number to this shifting relationship of algebra and geometry and even to the status of numbers such as zero:

    During the fourteenth century, with the emergence of mercantile capitalism in Northern Italy, the handling of numbers passed . . . to merchants, artisan-scientists, architects . . . for whom arithmetic was an essential prerequisite for trade and technology . . . . The central role occupied by double-entry book-keeping (principle of the zero balance) and the calculational demands of capitalism broke down any remaining resistance to the ‘infidel symbol’ of zero. (1987, 7-8)

    The emergence of the zero is an index to these changes, not the revelation of a pre-existing, extra-human reality. Similarly, Alexander’s history of the calculus places its development in the context of Protestant notions of authority (2014, 140-57). He emphasizes that the methodologies of the sciences and mathematics began to serve as political models for scientific societies: “if reasonable men of different backgrounds and convictions could meet to discuss the workings of nature, why could they not do the same in matters that concerned the state?” (2014, 249). Again, in the case of the calculus, mathematics responds to the emerging forces of the Renaissance: individualism, capitalism, and Protestantism. Certainly, the ongoing struggle with irrational numbers extends from the Greeks to the Renaissance, but the contexts are different. For the Greeks, the generative nature of number was central. For 17th Century Europe, the material demands of commercial life converged with religious, economic, and political shifts to make number a re-presentational tool.

    The turmoil of that historical moment suggests the turmoil of our own era in the face of global warfare, climate change, over-population, and the litany of other catastrophes we perpetually await.[7] In both cases, the anxiety produces impulses to mathematize the world and thereby reveal a knowable “real.” The current corporate fantasy that the world is a simulation is the fantasy of non-mathematicians (Elon Musk and Sam Altman) who embed themselves in a techno-centric narrative of the power of their own tools to create themselves. While this inexpensive version of Baudrillard’s work might seem sophomoric, it nevertheless exposes the impulse to contain the visceral fear that a socially constructed world is no different from solipsism’s chaos. It seems a version of the freshman student’s claim that “Everything’s just opinion” or the plot of another Matrix film. They speak/act/claim that their construction of meaning is equal to any other — the old claim that Hitler and Mother Teresa are but two equally valid “opinions.” They don’t know that the term/concept is social construction, and their radical notions of the individual prevent them from recognizing the vast scope, depth, and stabilizing power of social structures. They are only the most recent example of how social change exacerbates the misuse of mathematics.

    Amid these sorts of epistemic shifts, Renaissance mathematics underwent its own transformations. Within a fifty-year span (1596-1646), Descartes, Newton, and Leibniz are born. Their major works appear, respectively, in 1637, 1666, and 1675, a burst of innovation that cannot be separated from the shifts in education, economics, religion, and politics that were then sweeping Europe. Porter notes that statistics emerges alongside the rising modern state of this era. Managing the state’s wealth required profiles of populations. Such mathematical profiling began in the mid-1600s, with the intent to describe the state’s wealth and human resources for the creation of “sound, well-informed state policy” (Porter 1986, 18). The notion of probabilities, samples, and models avoids the aspirations that shaped earlier mathematics by making mathematics purely descriptive. Five issues are commonly offered to explain the delayed appearance of probability: 1) an obsession with determinism and personal fatalism; 2) the belief that God spoke through randomization and thus, a theory of the random was impious; 3) the lack of equiprobable events provided by standardized objects, e.g., dice; 4) the lack of economic drivers such as insurances and annuities; and 5) the lack of a workable calculus needed for the computation of probability distributions (Davis and Hersh 1981, 21). Hacking finds these insufficient and suggests that as authority was relocated in nature and not in the words of authorities, this led to the observation of frequencies.[8] Alongside the fierce opposition of the Church to the zero, understood as the absence of God, and to the calculus, understood as an abandonment of material number, the shifting mathematical landscape signals the changes that began to affect the longstanding status of number as a sort of prelapsarian language.

    Mathematics was losing its claims to completeness and consistency, and the incommensurables problematized that. Newton and Leibniz “de-problematized” irrationals, and opened mathematics to a new notion of approximation. The central claims about mathematics were not disproved; worse, they were set aside as unproductive conflations of differences between the continuous and the discrete. But because the church saw mathematics as “true” in a fashion inextricable from other notions of the truth, it held a special status. Calculus became a dangerous interest likely to call the Inquisition to action. Alexander locates the central issue as the irremediable conflict between the continuous and the discrete, something that had been the core of Zeno’s paradoxes (2014). The line of mathematical anxieties stretches from the Greeks into the 17th Century. These foundational understandings seem remote and abstract until we see how they re-appear in the current claims about the importance of “Big Data.” The term legitimates its claims by resonating with other responses to the anxiety of representation.

    The nature of the hypericon perpetuates the notion of a stable, knowable reality that rests upon a non-human order. In this view, mathematics is independent of the world. It exists prior to the world and does not depend on it; it is not an emergent narrative. The mathematician discovers what is already there. While this viewpoint sees mathematics as useful, mathematics is prior to any of its applications and independent of them. The parallel to religious belief becomes obvious if we substitute the term “God” for “mathematics”; the notions of a self-existing, self-knowing, and self-justifying system are equally applicable (Davis and Hersh 1981, 232-3). Mathematics and religion share in a fundamental Western belief in the Ideal. Taken together, they reveal a tension between the material and the eternal that can be mediated by specific languages. There is no doubt that a simplified mathematics serves us when we are faced with practical problems such as staking out a rectangular foundation for a house, but beyond such short-term uses lie more consequential issues, e.g., the relation between the continuous and the discrete, and between notions of the Ideal and the socially constructed. These larger paradoxes remain hidden when assertions of completeness, consistency, and certainty go unchallenged. In one sense, the data fetishists are simply the latest incarnation of a persistent problem: understanding mathematics as culturally situated.

    Again, historicizing this problem addresses the widespread willingness to accept the data fetishists’ totalistic claims. And historicizing these claims requires a turn to established critical techniques. For example, Rotman’s history of the zero turns to Derrida’s Of Grammatology to understand the forces that complicated and paralyzed the acceptance of zero into Western mathematics (1987). He also draws on semiotics and on the work of Ricoeur to frame his reading of the emergence of the zero in the West during the Renaissance. Rotman, Alexander, Desrosières, and a host of mathematical historians recognize that the nature of mathematical authority has evolved. The evolution lurks in the role of the irrational numbers, in the partial claims of statistics, and in the approximations of the calculus. The various responses are important as evidence of an anxiety about the limits of representation. The desire to resolve such arguments seems revelatory. All share an interest in the gap between the aspirations of systematic language and its object: the unnamable. That gap is iconic, an emblem both of representation’s limits and of the functions the gap plays in the generation of novel responses to the threat of an inarticulable void; its history exposes the powerful attraction of the claims made for Big Data.

    By the late 1800s, questions of systematic completeness and consistency grew urgent. For example, they appeared in the competing positions of Frege and Hilbert, and they resonated in the direction David Hilbert gave to 20th Century mathematics with his famed 23 questions (Blanchette 2014). The second of these specifically addressed the problem of proving that mathematical systems could be both complete and consistent. This question deeply influenced figures such as Bertrand Russell, Ludwig Wittgenstein, and others.[9] Hilbert’s question was answered in 1931 by Gödel’s theorems, which demonstrated the inherent incompleteness of arithmetic systems and the impossibility of proving their consistency from within. Gödel’s first theorem demonstrated that any consistent axiomatic system rich enough to express arithmetic would necessarily contain true statements that could be neither proven nor disproven; his second theorem demonstrated that such a system could not prove its own consistency. While mathematicians often take care to note that his work addresses a purely mathematical problem, it nevertheless is read metaphorically. As a metaphor, it connects the problematic relationship of natural and mathematical languages. This seems inevitable because it led to the collapse of the mathematical aspiration for a wholly formal language that does not require what is termed ‘natural’ language, that is, for a system that did not have to reach outside of itself. Just as John Craig’s work exemplifies the epistemological anxieties of the late seventeenth century,[10] so also does Gödel’s work identify a sustained attempt of his own era to demonstrate that systematic languages might be without gaps.

    Gödel’s theorems rely on a system that creates specialized numbers for symbols and the operations that relate them. This second-order numbering enabled him to move back and forth between the logic of statements and the codes by which they were represented. His theorems respond to an enduring general hope for complete and consistent mappings of the world with words, and each embeds a representational failure. Craig was interested in the loss of belief in the gospels; Pythagoras feared the gaps in the number line represented by the irrational numbers; and Gödel identified the incompleteness of axiomatic systems and their inability to prove their own consistency. To the dominant mathematics of the early 20th Century, the value of the question to which Gödel addresses himself lies in the belief that an internally complete mathematical map would be the mark of either of two positions: 1) the purely syntactic orderliness of mathematics, one that need not refer to any experiential world (this is the position of Frege, Russell, and Hilbert); or 2) the emergence of mathematics alongside concrete, human experience. Goldstein argues that these two dominant alternatives of the late nineteenth and early twentieth centuries did not consider the aprioricity of mathematics to constitute an important question, but Gödel offered his theorems as proofs that served exactly that idea. His demonstration of incompleteness does not signal a disorderly cosmos; rather, it argues that there are arithmetic truths that lie outside of formalized systems; as Goldstein notes, “the criteria for semantic truth could be separated from the criteria for provability” (2006, 51). This was an argument for mathematical Platonism. Goldstein’s careful discussion of the cultural framework and the meta-mathematical significance of Gödel’s work emphasizes that it did not argue for the absence of any extrinsic order to the world (51).
Rather, Gödel was consciously demonstrating the defects in a mathematical project begun by Frege, addressed in the work of Russell and Whitehead, and enshrined by Hilbert as essential for converting mathematics into a profoundly isolated system whose orderliness lay in its internal consistency and completeness.[11] His work also directly addressed questions about the a priori nature of mathematics challenged by the Vienna Circle. Paradoxically, by demonstrating that a foundational system – arithmetic – could not be both complete and provably consistent, the argument that mathematics was simply a closed, self-referential system could be challenged and opened to meta-mathematical claims about epistemological problems.

    Gödel’s work, among other things, argues for essential differences between human thought and mathematics, and it has become imbricated in a variety of discourses about representation, the nature of the mind, and the nature of language. Goldstein notes:

    The structure of Gödel’s proof, the use it makes of ancient paradox [the liar’s paradox], speaks at some level, if only metaphorically, to the paradoxes in the tale that the twentieth century told itself about some of its greatest intellectual achievements – including, of course, Gödel’s incompleteness theorems. Perhaps someday a historian of ideas will explain the subjectivist turn taken by so many of the last century’s most influential thinkers, including not only philosophers but hard-core scientists, such as Heisenberg and Bohr. (2006, 51)

    At the least, his work participated in a major consideration of three alternative understandings of symbolic systems: as isolated, internally ordered syntactic systems, as accompaniments of experience in the material world, or as the a priori realities of the Ideal. Whatever the immensely complex issues of these various positions, Gödel is the key meta-mathematician/logician whose work describes the limits of mathematical representation through an elegant demonstration that arithmetic systems – axiomatic systems – were inevitably incomplete and unable to prove their own consistency. Depending on one’s aspirations for language, this is either a great catastrophe or an opening to an infinite world of possibility where the goal is to deploy a paradoxical stance that combines the assertion of meaning with its cancellation. This double position addresses the problem of representational completeness.

    This anxiety became acute during the first half of the twentieth century as various discourses deployed strategies that exploited this heightened awareness of the intrinsic incompleteness and inconsistency of systematic knowledge. Whatever their disciplinary differences – neurology, psychology, mathematics – they nonetheless shared the sense that recognizing these limits was an opportunity to understand discourse both from within narrow disciplinary practices and from without in a larger logical and philosophical framework that made the aspiration toward completeness quaint, naïve, and unproductive. They situated the mind as a sort of boundary phenomenon between the deployment of discourses and an extra-linguistic reality. In contrast to the totalistic claims of corporate spokesmen and various predictive software, this sensibility was a recognition that language might always fail to re-present its objects, but that those objects were nonetheless real and expressible as a function of the naming process viewed from yet another position. An important corollary was that these gaps were not only a token for the interplay of word and world, but were also an opportunity to illuminate the gap itself. In short, symbol systems seemed to stand as a different order of phenomena than whatever they proposed to represent, and the result was a burst of innovative work across a variety of disciplines.

    Data enthusiasts sometimes participate in a discredited mathematics, but they do so in powerfully nostalgic ways that resonate with the amorphous Idealism infused in our hierarchical churches, political structures, aesthetics, and epistemologies. Thus, Big Data enthusiasts speak through the residue of a powerful historical framework to assert their own credibility. For these New Pythagoreans, mathematics remains a quasi-religious undertaking whose complexity, consistency, sign systems, and completeness assert a stable, non-human order that keeps chaos at bay. However, they are stepping into an issue more fraught than simply the misuses and misunderstandings of the Pythagorean Theorem. The historicized view of mathematics and the popular invocation of mathematics diverge at the point where anxieties about the representational failure of languages become visible. We not only need to historicize our understanding of mathematics, but also to identify how popular and commercial versions of mathematics are nostalgic fetishes for certainty, completeness, and consistency. Thus, the authority of algorithms has less to do with their predictive power than with their connection to a tradition rooted in the religious frameworks of Pythagoreanism. Critical methods familiar to the humanities – semiotics, deconstruction, psychology – build a sort of critical braid that not only re-frames mathematical inquiry, but places larger questions about the limits of human knowledge directly before us; this braid forces an epistemological modesty that is eventually ethical and anti-authoritarian in ways that the New Pythagoreans rarely are.

    Immodest claims are the hallmark of digital fetishism, and are often unabashedly conscious. Chris Anderson, while Editor-in-Chief of Wired magazine, infamously argued that “the data deluge makes the scientific method obsolete” (2008). He claimed that distributed computing, cloud storage, and huge sets of data made traditional science outmoded. He asserted that science would become mathematics, a mathematical sorting of data to discover new relationships:

    At the petabyte scale, information is not a matter of simple three and four-dimensional taxonomy and order but of dimensionally agnostic statistics. It calls for an entirely different approach, one that requires us to lose the tether of data as something that can be visualized in its totality. It forces us to view data mathematically first and establish a context for it later.

    “Agnostic statistics” would be the mechanism for precipitating new findings. He suggests that mathematics is somehow detached from its contexts and represents the real through its uncontaminated formal structures. In Anderson’s essay, the world is number. This neo-Pythagorean claim quickly gained attention, and then wilted in the face of scholarly responses such as that of Pigliucci (2009, 534).

    Anderson’s claim was both a symptom and a reinforcement of traditional notions of mathematics that extend far back into Western history. Its explicit notions of mathematics stirred two kinds of anxiety: one reflected a fear of a collapsed social project (science) and the other reflected a desperate hunger for a language – mathematics – that penetrated the veil drawn across reality and made the world knowable. Whatever the collapse of his claim, similar ones such as those of the facial phrenologists continue to appear. Without history – mathematical, political, ideological – “data” acquires a material status much as number did for the Greeks, and this status enables statements of equality between the messiness of reality and the neatness of formal systems. Part of this confusion is a common misunderstanding of the equals sign in popular culture. The “sign” is a relational function, much as the semiotician’s signified and signifier combine to form a “sign.” However, when we mistakenly treat the “equals sign” as a directional, productive operation, the nature of mathematics loses its availability to critique. It becomes a process outside of time that generates answers by re-presenting the real in a language. Where once a skeptical Pythagorean might be drowned for revealing the incommensurability of side and diagonal, proprietary secrecy now threatens a sort of legalized financial death for those who violate copyright (Pasquale 2016, 142). Pasquale identifies the “creation of invisible powers” as a hallmark of contemporary, algorithmic culture (2016, 193). His invaluable work recovers the fact that algorithms operate in a network of economic, political, and ideological frameworks, and he carefully argues the role of legal processes in resisting the control that algorithms can impose on citizens.

    Pasquale’s language is not mathematical, but it shares with scholars like Rotman and Goldstein an emphasis on historical and cultural context. The algorithm is made accountable if we think of it as an act whose performance instantiates digital identities through powerful economic, political, and ideological narratives. The digitized individual does not exist until it becomes the subject of such a performance, a performance which is framed much as any other performance is framed: by the social context, by repetition, and through embodiment. Digital individuals come into being when the algorithmic act is performed, but they are digital performances because of the irremediable gap between any object and its re-presentation. In short, they are socially constructed. This would be of little import except that these digital identities begin as proxies for real bodies, but the diagnoses and treatments are imposed on real, social, psychological, flesh beings. The difference between digital identity and human identity can be ignored only if the mathematized self is isomorphic with the human self. Thus, algorithmic acts entangle the input > algorithm > output sequence by concealing layers of problematic differences: digital self and human self; mathematics and the Real; test inputs and test outputs; scaling; and input and output. The sequence loses its tidy sequential structure when we recognize that the outputs are themselves data and often re-enter the algorithm’s computations by their transfer to third parties whose information returns for re-processing. A somewhat better version of the flow would be data1 > algorithm > output > data2 > algorithm > output > data3 . . . , with the understanding that any datum might re-enter the process. The sequence suggests how an object is both the subject of its context and a contributor to that context.
The threat of a constricting output looms precisely because there is decreasing room for what de Certeau calls “la perruque” (1988, 25), i.e., the inefficiencies where unplanned innovation appears. And like any text, the algorithmic process requires a variety of analytic strategies.

    We have learned to think of algorithms in directional terms. We understand them as transformative processes that operate upon data sets to create outputs. The problematic relationships of data > algorithm > output become even more visible when we recognize that data sets have already been collected according to categories and processes that embody political, economic, and ideological biases. The ideological origin of the collected data – the biases of the questions posed in order to generate “inputs” – is yet another kind of black box, a box prior to the black box of the algorithm, a prior structure inseparable from the algorithm’s hunger for (using the mathematicians’ language) a domain upon which it can act to produce a range of results. The nature of the algorithm controls what items from the domain (data set) can be used; conversely, the nature of the data set controls what the algorithm has available to act upon and transform into descriptive and prescriptive claims. The inputs are as much a black box as the algorithm itself. Thus, opaque algorithms operate upon opaque data sets (Pasquale 2016, 204) in ways that nonetheless embody the inescapable “politics of large numbers” that is the topic of Desrosières and Naish’s history of statistical reasoning (2002). This interplay forces us to recognize that the algorithm inherits biases, and that these biases are then compounded by operations within these two algorithmic boxes to become doubly biased outputs. It might be more revelatory to term the algorithmic process “stimuli” > algorithm > “responses.” Re-naming “input” as “stimuli” emphasizes the selection process that precedes the algorithmic act; re-naming “output” as “response” establishes the entire process as human, cultural, and situated. This is familiar territory to psychology. Digital technologies are texts whose complexity emerges when approached using established tools for textual analysis.
Rotman and other mathematicians directly state their use of semiotics. They turn to phenomenology to explicate the reader/writer interaction, and they approach mathematical texts with terms like narrator, self-reference, and recursion. Most of all, they explore the problem of mathematical representation when mathematics itself is complicated by its referential, formal, and psychological statuses.

    The fetishization of mathematics is a fundamental strategy for exempting digital technologies from theory, history, and critique. Two responses are essential: first, to clarify the nostalgic mathematics at work in the mathematical rhetoric of Big Data and its tools; and second, to offer analogies that step beyond naïve notions of re-presentation to more productive critiques. Analogy is essential because analogy is itself a performance of the anti-representational claim that digital technologies need to be understood as socially constructed by the same forces that instantiate any technology. Bruno Latour frames the problem of the critical stance as three-dimensional:

    The critics have developed three distinct approaches to talking about our world: naturalization, socialization and deconstruction . . . . When the first speaks of naturalized phenomena, then societies, subjects, and all forms of discourse vanish. When the second speaks of fields of power, then science, technology, texts, and the contents of activities disappear. When the third speaks of truth effects, then to believe in the real existence of brain neurons or power plays would betray enormous naiveté. Each of these forms of criticism is powerful in itself but impossible to combine with the other. . . . Our intellectual life remains recognizable as long as epistemologists, sociologists, and deconstructionists remain at arm’s length, the critique of each group feeding on the weaknesses of the other two. (1993, 5-6)

    Latour then asks, “Is it our fault if the networks are simultaneously real, like nature, narrated, like discourse, and collective like society?” (6). He goes on to assert, “Analytic continuity has become impossible” (7). Similarly, Rotman’s history of the zero finds that the concept problematizes the hope that a “field of entities” exists prior to “the meta-sign which both initiates the signifying system and participates within it as a constituent sign”; he continues, “the simple picture of an independent reality of objects providing a pre-existing field of referents for signs conceived after them . . . cannot be sustained” (1987, 27). Our own approach is heterogeneous; we use notions of fetish, re-presentation, and Gödelian metaphor to bypass the critical immunity that naturalistic mathematical claims confer on digital technologies.

    Whether we use Latour’s description of the mutually exclusive methods of talking about the world – naturalization, socialization, deconstruction – or Rotman’s three starting points for the semiotic analysis of mathematical signs – referential, formal, and psychological – we can contextualize the claims of the Big Data fetishists so that the manifestations of Big Data thinking – policing practices, financial privilege, educational opportunity – are not misrepresented as only a mathematical/statistical question about assessing the results of supposedly neutral interventions, decisions, or judgments. If we are confined to those questions, we will only operate within the referential domains described by Rotman or the realm of naturalization described by Latour. Claims of a-contextual validity deny their own contextual status by asserting that operations, uses, and conclusions are exempt from the aggregated array of partial theorizations applied, in this case, to mathematics. This historical/critical application reveals the contradictory world concealed and perpetuated by the corporatized mathematics of contemporary digital culture. Deploying a constellation of critical methods – historical, semiotic, psychological – also prevents the critique from falling prey to the totalism that afflicts the thinking of these New Pythagoreans. This array includes concepts such as fetishization from the pre-digital world of psychoanalysis.

    The concept of the fetish has fallen on hard times as the star of psychoanalysis sinks into the West’s neurochemical sea. But its original formulation remains useful because it seeks to address the gap between representational formulas and their objects. For example – drawing on the quintessential heterosexual, male figure who is central to psychoanalysis – the male shoe fetishist makes no distinction between a pair of Louboutins and the “normal” object of his sexual desire. Fenichel asserts (1945, 343) that such fetishization is “an attempt to deny a truth known simultaneously by another part of the personality,” and enables the use of denial. Such explanations may seem quaint, but that is not the point. The point is that within one of the most powerful metanarratives of the past century – psychoanalysis – scientists faced the contorted and defective nature of human symbolic behavior in its approach to a putative “real.” The fetish offers an illusory real that protects the fetishist against the complexities of the real. Similarly, the New Pythagoreans of Big Data offer an illusory real – a misconstrued mathematics – that often paralyzes resistance to their profit-driven, totalistic claims. In both cases, the fetish becomes the “real” while simultaneously protecting the fetishist from contact with whatever might be more human and more complex.

    Wired magazine’s “daily fetish” seems an ironic reversal of the term’s functional meaning. Its steady stream of technological gadgets has an absent referent, a hyperreal as Baudrillard styles it, that is exactly the opposite of the material “real” that psychoanalysis sees as the motivation of the fetish. In lived life, the anxiety is provoked by the real; in digital fetishism, the anxiety is provoked by the absence of the real. The anxiety of absence provokes the frenzied production of digital fetishes. Their inevitable failure – because representation always fails – drives the proliferation of new, replacement fetishes, and these become a networked constellation that forms a sort of simulacrum: a model of an absence that the model paradoxically attempts to fill. Each failure accentuates the gap, thereby accentuating the drive toward yet another digital embodiment of the missing part. Industry newsletters exemplify the frantic repetition required by this worldview. For example, Edsurge proudly reports an endless stream of digital edtech products, each substituting for the awkward, fleshly messiness of learning. And each substitution claims to validate itself via mathematical claims of representation. And almost all fade away as the next technology takes its place. Endless succession.

    This profusion of products clamoring to be the “real” object suggests a sort of cultural castration anxiety, a term that might prove less outmoded if we note the preponderance of males in the field who busily give birth to objects with the characteristics of the living beings they seek to replace.[12] The absence at the core of this process is the unbridgeable gap between word and world. Mathematics is especially useful to such strategies because it is embedded in the culture as both the discoverer and validator of objective true/false judgments. These statements are understood to demonstrate a reality that “exists prior to the mathematical act of investigating it” (Rotman 2000, 6). It provides the certainty, the “real” that the digital fetish simultaneously craves and fears. Mathematics short-circuits the problematic question that drives the anxiety about a knowable “real.” The point here is not to revive psychoanalytic thinking, but rather to see how an anxiety mutates and invites the application of critical traditions that themselves embody a response to the incompleteness and inconsistency of sign systems. The psychological model expands into the destabilized social world of digital culture.

    The notion of mathematics as a complete and consistent equivalent of the real is a longstanding feature of Western thought. It both creates and is created by the human need for a knowable real. Mathematics reassures the culture because its formal characteristics seem to operate without referents in the real world, and thus its language seems to become more real than any iteration of its formal processes. However, within mathematical history, the story is more convoluted, in part because of the immense practical value of applied mathematics. While semiotic approaches to the history engage and describe the social construction of mathematics, an important question remains about the completeness and consistency of mathematical systems. The history of this concern connects both the technical question and the popular interest in the power of languages – natural and/or mathematical – to represent the real. Again, these are not just technical, expert questions; they leak into popular metaphor because they embody a larger cultural anxiety about a knowable real. If Pythagorean notions have affected the culture for 2500 years, we want to claim that contemporary culture embodies the anxiety of uncertainty that is revealed not only in its mathematics, but also in the contemporary arguments about algorithmic bias, completeness, and consistency.

    The nostalgia for a fully re-presentational sign system becomes paired with the digital technologies – software, hardware, networks, query strategies, algorithms, black boxes – that characterize daily life. However, this nostalgic rhetoric has a naïveté that embodies the craving for a stable and knowable external world. The culture often responds to it through objects inscribed with the certainty imputed to mathematics, and thus these digital technologies are felt to satisfy a deep need. The problematic nature of mathematics matters little in terms of personalized shopping choices or customizing the ideal playlist. Although these systems rarely achieve the goal of “knowing what you want before you want it,” we rarely balk at the claim because the stakes are so low. However, where these claims have life-altering, and in some cases life and death, implications – education, policing, health care, credit, safety net benefits, parole, drone targets – we need to understand them so they can be challenged and, where needed, resisted. Resistance addresses two issues:

    1. That the traditional mystery and power of number seem to justify the refusal of transparency. The mystified tools point upward to the supposed mysterium of the mathematical realm.
    2. That the genuflection before the mathematical mysterium has an insatiable hunger for illustrations that show the world is orderly and knowable.

    Together, these two positions combine to assert the mythological status of mathematics, and set it in opposition to critique. However, this mythology is vulnerable on several fronts. As Pasquale makes clear, legislation – language in action – can begin the demystification; proprietary claims are mundane imitations of the old Pythagorean illusions; outside of political pressure and legislation, there is little incentive for companies to open their algorithms to auditing. Yet once pried open by legislation, the wizard behind the curtain and the Mechanical Turk show their hand. With transparency comes another opportunity: demythologizing technologies that fetishize the re-presentational nature of mathematics.

    _____

    Chris Gilliard’s scholarship concentrates on privacy, institutional tech policy, digital redlining, and the re-inventions of discriminatory practices through data mining and algorithmic decision-making, especially as these apply to college students.

    Hugh Culik teaches at Macomb Community College. His work examines the convergence of systematic languages (mathematics and neurology) in Samuel Beckett’s fiction.

    Back to the essay

    _____

    Notes

    [1] Rotman’s work along with Amir Alexander’s cultural history of the calculus (2014) and Rebecca Goldstein’s (2006) placement of Gödel’s theorems in the historical context of mathematics’ conceptual struggle with the consistency and completeness of systems exemplify the movement to historicize mathematics. Alexander and Rotman are mathematicians, and Goldstein is a logician.

    [2] Other mathematical concepts have hypericonic status. For example, triangulation serves psychology as a metaphor for a family structure that pits two members against a third. Politicians “triangulate” their “position” relative to competing viewpoints. But because triangulation works in only two dimensions, it produces gross oversimplifications in other contexts. Nora Culik (pers. comm.) notes that a better metaphor would be multilateration, which uses the differences in a signal’s arrival times at two or more known points to generate the possible locations of an unknown point; these locations take the shape of a hyperboloid, a metaphor that allows for uncertainty in understanding multiply determined concepts. Both re-present an object’s position, but each carries implicit ideas of space.

    [3] Faith in the representational power of mathematics is central to hedge funds. Bridgewater Associates, a fund that manages more than $150 billion US, is at work building a piece of software to automate the staffing for strategic planning. The software seeks to model the cognitive structure of founder Raymond Dalio, and is meant to perpetuate his mind beyond his death. Dalio variously refers to the project as “The Book of the Future,” “The One Thing,” and “The Principles Operating System.” The project has drawn the enthusiastic attention of many popular publications such as The Wall Street Journal, Forbes, Wired, Bloomberg, and Fortune. The project’s model seems to operate on two levels: first, as a representation of Dalio’s mind, and, second, as a representation of the dynamics of investing.

    [4] Numbers are divided into categories that grow in complexity. The development of numbers is an index to the development of the field (Kline, Mathematical Thought, 1972). For a careful study of the problematic status of zero, see Brian Rotman, Signifying Nothing: The Semiotics of Zero (1987). Amir Aczel, Finding Zero: A Mathematician’s Odyssey to Uncover the Origins of Numbers (2015) offers a narrative of the historical origins of number.

    [5] Eugene Wigner (1959) asserts an ambiguous claim for a mathematizable universe. Responses include Max Tegmark’s “The Mathematical Universe” (2008) which sees the question as imbricated in a variety of computational, mathematical, and physical systems.

    [6] The anxiety of representation characterizes the shift from the literary moderns to the postmodern. For example, Samuel Beckett’s intense interest in mathematics and his strategies – literalization and cancellation – typify the literary responses to this anxiety. In his first published novel, Murphy (1938), one character mentions “Hippasos the Akousmatic, drowned in a mud puddle . . . for having divulged the incommensurability of side and diagonal” (46). Beckett uses detailed references to Descartes, Geulincx, Gödel, and 17th Century mathematicians such as John Craig to literalize the representational limits of formal systems of knowledge. Andrew Gibson’s Beckett and Badiou (2006) provides a nuanced assessment of mathematics, literature, and culture in Beckett’s work.

    [7] See Frank Kermode, The Sense of an Ending: Studies in the Theory of Fiction with a New Epilogue (2000) for an overview of the apocalyptic tradition in Western culture and the totalistic responses it evokes in politics. While mathematics dealt with indeterminacy, incompleteness, inconsistency, and failure, the political world simultaneously saw a countervailing regressive collapse: Mein Kampf in 1925; Hitler’s appointment as Chancellor of Germany in 1933; the Soviet Gulag in 1934. The fascist bent of Ezra Pound, T. S. Eliot’s After Strange Gods, and D. H. Lawrence’s Mexican fantasies suggest the anxiety of re-presentation that gripped the culture.

    [8] Davis and Hersh (21) divide probability theory into three aspects: 1) theory, which has the same status as any other branch of mathematics; 2) applied theory that is connected to experimentation’s descriptive goals; and 3) applied probability for practical decisions and actions.

    [9] For primary documents, see Jean Van Heijenoort, From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931 (1967). Ernest Nagel and James Newman, Gödel’s Proof (1958) explains the steps of Gödel’s proofs and carefully restricts their metaphoric meanings; Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid [A Metaphoric Fugue on Minds and Machines in the Spirit of Lewis Carroll] (1979) places the work in the conceptual history that now leads to the possibility of artificial intelligence.

    [10] See Richard Nash, John Craige’s Mathematical Principles of Christian Theology (1991) for a discussion of the 17th Century mathematician and theologian who attempted to calculate the rate of decline of faith in the Gospels so that he would know the date of the Apocalypse. His contributions to calculus and statistics emerge in a context we find absurd, even if his friend, Isaac Newton, found them valuable.

    [11] An equally foundational problem – the mathematics of infinity – occupies a position similar to the questions addressed by Gödel. Cantor’s founding of set theory both exposes the problems that infinity poses to formal mathematics and offers means of resolving them.

    [12] For the historical appearances of the masculine version of this anxiety, see Dennis Todd’s Imagining Monsters: Miscreations of the Self in Eighteenth Century England (1995).

    _____

    Works Cited

    • Aczel, Amir. 2015. Finding Zero: A Mathematician’s Odyssey to Uncover the Origins of Numbers. New York: St. Martin’s Griffin.
    • Alexander, Amir. 2014. Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World. New York: Macmillan.
    • Anderson, Chris. 2008. “The End of Theory.” Wired 16, no. 7.
    • Beckett, Samuel. 1957. Murphy (1938). New York: Grove.
    • Berk, Richard. 2011. “Q&A with Richard Berk.” Interview by Greg Johnson. PennCurrent (Dec 15).
    • Blanchette, Patricia. 2014. “The Frege-Hilbert Controversy.” In Edward N. Zalta, ed., The Stanford Encyclopedia of Philosophy.
    • boyd, danah, and Kate Crawford. 2012. “Critical Questions for Big Data.” Information, Communication & Society 15:5. doi:10.1080/1369118X.2012.678878.
    • de Certeau, Michel. 1988. The Practice of Everyday Life. Translated by Steven Rendall. Berkeley: University of California Press.
    • Davis, Philip and Reuben Hersh. 1981. Descartes’ Dream: The World According to Mathematics. Boston: Houghton Mifflin.
    • Desrosières, Alain. 2002. The Politics of Large Numbers: A History of Statistical Reasoning. Translated by Camille Naish. Cambridge: Harvard University Press.
    • Eagle, Christopher. 2007. “‘Thou Serpent That Name Best’: On Adamic Language and Obscurity in Paradise Lost.” Milton Quarterly 41:3. 183-194.
    • Fenichel, Otto. 1945. The Psychoanalytic Theory of Neurosis. New York: W. W. Norton & Company.
    • Gibson, Andrew. 2006. Beckett and Badiou: The Pathos of Intermittency. New York: Oxford University Press.
    • Goldstein, Rebecca. 2006. Incompleteness: The Proof and Paradox of Kurt Gödel. New York: W.W. Norton & Company.
    • Guthrie, William Keith Chambers. 1962. A History of Greek Philosophy: Vol.1 The Earlier Presocratics and the Pythagoreans. Cambridge: Cambridge University Press.
    • Hofstadter, Douglas. 1979. Gödel, Escher, Bach: An Eternal Golden Braid; [a Metaphoric Fugue on Minds and Machines in the Spirit of Lewis Carroll]. New York: Basic Books.
    • Kermode, Frank. 2000. The Sense of an Ending: Studies in the Theory of Fiction with a New Epilogue. New York: Oxford University Press.
    • Kline, Morris. 1972. Mathematical Thought from Ancient to Modern Times. New York: Oxford University Press.
    • Kline, Morris. 1990. Mathematics: The Loss of Certainty. New York: Oxford University Press.
    • Latour, Bruno. 1993. We Have Never Been Modern. Translated by Catherine Porter. Cambridge: Harvard University Press.
    • Mitchell, W. J. T. 1995. Picture Theory: Essays on Verbal and Visual Representation. Chicago: University of Chicago Press.
    • Nagel, Ernest and James Newman. 1958. Gödel’s Proof. New York: New York University Press.
    • Office of Educational Technology, US Department of Education. 2012. “Jose Ferreira: Knewton – Education Datapalooza.” YouTube video, 9:47. Posted November 2012. https://youtube.com/watch?v=Lr7Z7ysDluQ.
    • O’Neil, Cathy. 2016. Weapons of Math Destruction. New York: Crown.
    • Pasquale, Frank. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge: Harvard University Press.
    • Pigliucci, Massimo. 2009. “The End of Theory in Science?”. EMBO Reports 10, no. 6.
    • Porter, Theodore. 1986. The Rise of Statistical Thinking, 1820-1900. Princeton: Princeton University Press.
    • Rotman, Brian. 1987. Signifying Nothing: The Semiotics of Zero. Stanford: Stanford University Press.
    • Rotman, Brian. 2000. Mathematics as Sign: Writing, Imagining, Counting. Stanford: Stanford University Press.
    • Tegmark, Max. 2008. “The Mathematical Universe.” Foundations of Physics 38 no. 2: 101-150.
    • Todd, Dennis. 1995. Imagining Monsters: Miscreations of the Self in Eighteenth Century England. Chicago: University of Chicago Press.
    • Turchin, Valentin. 1977. The Phenomenon of Science. New York: Columbia University Press.
    • Van Heijenoort, Jean. 1967. From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931. Vol. 9. Cambridge: Harvard University Press.
    • Wigner, Eugene P. 1959. “The Unreasonable Effectiveness of Mathematics in the Natural Sciences.” Richard Courant Lecture in Mathematical Sciences delivered at New York University, May 11. Reprinted in Communications on Pure and Applied Mathematics 13:1 (1960). 1-14.
    • Wu, Xiaolin, and Xi Zhang. 2016. “Automated Inference on Criminality using Face Images.” arXiv preprint: 1611.04135.
  • Richard Hill – Review of Bauer and Latzer, Handbook on the Economics of the Internet

    Richard Hill – Review of Bauer and Latzer, Handbook on the Economics of the Internet

    a review of Johannes M. Bauer and Michal Latzer, eds., Handbook on the Economics of the Internet (Edward Elgar, 2016)

    by Richard Hill

    ~

    The editors of this book must be commended for having undertaken the task of producing it: it must surely have taken tremendous persistence and patience to assemble the broad range of chapters.  The result is a valuable book, even if parts of it are disappointing.  As is often the case for a compilation of articles written by different authors, the quality of the individual contributions is uneven: some are excellent, others not.  The book is valuable because it identifies many of the key issues regarding the economics of the Internet, but it is somewhat disappointing because some of the topics are not covered in sufficient depth and because some key topics are not covered at all.  For example, the digital divide is mentioned cursorily on pp. 6-7 of the hardback edition and there is no discussion of its historical origins, economic causes, future evolution, etc.

    Yet there is an extensive literature on the digital divide, such as the readily available overall ITU reports from 2016 and 2017, or the more detailed ITU regional studies regarding international Internet interconnectivity for Africa and Latin America.  The historical impact of the abolition of the traditional telephony account settlement scheme is covered summarily in Chapter 2 of my book The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (2013).  One might have expected that a book dedicated to the economics of the Internet would have started from that event, explained its consequences, and analysed proposals regarding how to address the digital divide, for example the proposals made during the World Summit on the Information Society to create some kind of fund to bridge the gap (those proposals were not accepted).  I would have expected such a book to discuss the possibilities and the ramifications of an international version of the universal service funds that are used in many countries to minimize national digital divides between low-density rural areas and high-density cities.  But there is no discussion at all of these topics in the book.

    And there is little discussion of Artificial Intelligence (some of which is enabled by data obtained through the Internet) or of the disruption of labour markets that some believe is or will be caused by the Internet.  For a summary treatment of these topics, with extensive references, see sections 1 and 8 of my submission to the Working Group on Enhanced Cooperation.

    The Introduction of the book correctly notes that “Scale economies, interdependencies, and abundance are pervasive [in the Internet] and call for analytical concepts that augment the traditional approaches” (p. 3).  Yet, the book fails, on the whole, to deliver sufficient detail regarding such analytical concepts, an exception being the excellent discussion on pp. 297-308 of the Internet’s economic environment for innovation, in particular pp. 301-303.

    Of the 569 pages of text (in the hardcover edition), only 22 or so contain quantitative charts or tables (eight are in one chapter), and of those only 12 or so are original research.  Only one page has equations.  Of course the paucity of data in the book is due to the fact that data regarding the Internet is hard to obtain: in today’s privatized environment, companies strive to collect data, but not to publish it.  But economics is supposed to be a quantitative discipline, at least in part, so it would have been valuable if the book had included a chapter on the reasons for the relative paucity of reliable data (both micro and macro) concerning the Internet and the myriad of transactions that take place on the Internet.

    In a nutshell, the book gives good, comprehensive, and legible descriptions of many trees, but in some cases without sufficient quantitative detail, whereas it mostly fails to provide an analysis of the forest that those trees comprise (except for the brilliant chapter by Eli Noam titled “From the Internet of Science to the Internet of Entertainment”).

    The book will be very valuable for people who know little or nothing about the Internet and its economics.  Those who know something will benefit from the extensive references given at the end of each chapter.  Those who know specific topics well will not learn much from this book.  A more appropriate title for the book would have been “A Comprehensive Introduction to the Economics of the Internet”.

    The rest of this review consists of brief reviews of each of the chapters of the book.  We start with the strongest chapter, followed by the weakest chapter, then review the other chapters in the order in which they appear in the book.

    1. From the Internet of Science to the Internet of Entertainment

    This chapter is truly excellent, as one would expect, given that it is written by Eli Noam.  It captures succinctly the key policy questions regarding the economics of the Internet.  We cite p. 564:

    • How to assure the financial viability of infrastructure?
    • Market power in the entertainment Internet?
    • Does vertical integration impede competition?
    • How to protect children, old people, and traditional morality?
    • How to protect privacy and security?
    • What is the impact on trade? What is the impact of globalization?
    • How to assure the interoperability of clouds?

    It is a pity that the book did not use those questions as key themes to be addressed in each chapter.  And it is a pity that the book did not address the industrial economics issues so well put forward.  We cite p. 565:

    Another economic research question is how to assure the financial viability of the infrastructure.  The financial balance between infrastructure, services, and users is a critical issue.  The infrastructure is expensive and wants to be paid.  Some of the media services are young and want to be left to grow.  Users want to be served generously with free content and low-priced, flat-rate data service.  Fundamental economics of competition push towards price deflation, but market power, and maybe regulation, pull in another direction.  Developing countries want to see money from communications as they did in the days of traditional telecom.

    Surely the other chapters of the book could have addressed these issues, which are being discussed publicly, see for example section 4 of the Summary of the 2017 ITU Open Consultation on so-called Over-the-Top (OTT) services.

    Noam’s discussion of the forces that are leading to fragmentation (pp. 558-560) is excellent.  He does not cite Mueller’s recent book on the topic, no doubt because this chapter of the book was written before Mueller’s book was published.  Mueller’s book focuses on state actions, whereas Noam gives a convincing account of the economic drivers of fragmentation, and of how such increased diversity may not actually be a negative development.

    Some minor quibbles: Noam does not discuss the economic impact of adult entertainment, yet it is no doubt significant.  The off-hand remark at the bottom of p. 557 to the effect that unleashing demand for entertainment might solve the digital divide is likely not well taken, and in any case would have to be justified by much more data.

    2. The Economics of Internet Standards

    I found this to be the weakest chapter in the book.  To begin with, it is mostly descriptive and contains hardly any real economic analysis.  The account of the Cisco/Huawei battle over MPLS-TP standards (pp. 219-222) is accurate, but it would have been nice to know what the economic drivers were of that battle, e.g. size of the market, respective market shares, values of the respective products based on the respective standards, who stood to gain/lose what (and not just the manufacturers, but also the network operators), etc.

    But the descriptive part is also weak.  For example, the Introduction gives the misleading impression that IETF standards are the dominant element in the growth of the Internet, whereas it was the World Wide Web Consortium’s (W3C) HTML and successor standards that enabled the web and most of what we consider to be the Internet today.  The history on p. 213 omits contributions from other projects such as Open Systems Interconnection (OSI) and CYCLADES.

    Since the book is about economics, surely it should have mentioned on pp. 214 and 217 how the IETF has become increasingly influenced by dominant manufacturers, see pp. 148-152 of Powers, Shawn M., and Jablonski, Michael (2015) The Real Cyberwar: The Political Economy of Internet Freedom; as Noam puts the matter on p. 559 of the book: “The [Internet] technical specifications are set by the Steering Group of the Internet Engineering Task Force (IETF), a small group of 15 engineers, almost all employees of big companies around the world.”

    And surely it should have discussed in section 10.4 (p. 214) the economic reasons that led to the greater adoption of TCP/IP over the competing OSI protocol, such as the lower implementation costs due to the lack of security of TCP/IP, the lack of non-ASCII support in the early IETF protocols, and the heavy subsidies provided by the US Defense Advanced Research Projects Agency (DARPA) and by the US National Science Foundation (NSF), which are well-known facts recounted on pp. 533-541 of the book.  In addition to not dealing with economic issues, section 10.4 is an overly simplified account of what really happened.

    Section 10.7 (p. 222) is, again, surprisingly devoid of any semblance of economic analysis.  Further, it perpetuates a self-serving, one-sided account of the 2012 World Conference on International Telecommunications (WCIT), without once citing scholarly writings on the issue, such as my book The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (2013).  The authors go so far as to cite the absurd US House proposition to the effect that the Internet should be “free of government control” without noting that what the US politicians meant is that it should be “free of foreign government control”, because of course the US has never had any intent of not subjecting the Internet to US laws and regulations.

    Indeed, at present, hardly anybody seriously questions the principle that offline law applies equally online.  One would expect a scholarly work to do better than to cite inane political slogans meant for domestic political purposes.  In particular when the citations are not used to underpin any semblance of economic analysis.

    3. The Economics of the Internet: An Overview

    This chapter provides a solid and thorough introduction to the basics of the economics of the Internet.

    4. The Industrial Organization of the Internet

    This chapter gives a good account of the industrial organization of the Internet, that is, how the industry is structured economically, how its components interact economically, and how that differs from other economic sectors.  As the authors correctly state (p. 24): “ … the tight combination of high fixed and low incremental cost, the pervasive presence of increasing returns, the rapidity and frequency of entry and exit, high rates of innovation, and economies of scale in consumption (positive network externalities) have created unique economic conditions …”.  The chapter explains well key features such as multi-sided markets (p. 31).  And it correctly points out (p. 25) that “while there is considerable evidence that technologically dynamic industries flourish in the absence of government intervention, there is also evidence of the complementarity of public policy and the performance of high-tech markets.”  That is explored in pp. 45 ff. and in subsequent chapters, albeit not always in great detail.

    5. The Internet as a Complex Layered System

    This is an excellent chapter, one of the best in the book.  It explains how, because of the layered nature of the Internet, simple economic theories fail to capture its complexities.  As the chapter says (p. 68), the Internet is best viewed as a general purpose infrastructure.

    6. A Network Science Approach to the Internet

    This chapter provides a sound and comprehensive description of the Internet as a network, but it does not go beyond the description to provide analyses, for example regarding regulatory issues.  However, the numerous citations in the chapter do provide such analyses.

    7. Peer Production and Cooperation

    This chapter is also one of the best chapters in the book.  It provides an excellent description of how value is produced on the Internet, through decentralization, diverse motivations, and separation of governance and management.  It covers, and explains the differences between, peer production, crowd-sourcing, collaborative innovation, etc.  On p. 87 it provides an excellent quantitative description and analysis of specific key industry segments.  The key governance patterns in peer production are very well summarized on pp. 108-109 and 112-113.

    8. The Internet and Productivity

    This chapter actually contains a significant amount of quantitative data (which is not the case for most of the other chapters) and provides what I would consider to be an economic analysis of the issue, namely whether, and if so how, the Internet has contributed to productivity.  As the chapter points out, we lack sufficient data to analyse fully the impacts of the development of information and communication technologies since 2000, but this chapter does make an excellent contribution to that analysis.

    9. Cultural Economics and the Internet

    This is a good introduction to supply, demand, and markets for creative goods and services produced and/or distributed via the Internet.  The discussion of two-sided markets on p. 155 is excellent.  Unfortunately, however, the chapter is mostly a theoretical description: it does not refer to any actual data or provide any quantitative analysis of what is actually happening.

    10. A Political Economy Approach to the Internet

    This is another excellent chapter, one of the best in the book.  I noted one missing citation to a previous analysis of the key issues from a political economy point of view: Powers, Shawn M., and Jablonski, Michael (2015) The Real Cyberwar: The Political Economy of Internet Freedom.  But the key issues are well discussed in the chapter:

    • The general trend towards monopolies and oligopolies of corporate ownership and control affecting the full range of Internet use and development (p. 164).
    • The specific role of Western countries and their militaries in supporting and directing specific trajectories (p. 165).
    • How the general trend towards privatization made it difficult to develop the Internet as a public information utility (p. 169).
    • The impact on labour, in particular shifting work to users (p. 170).
    • The rise and dominance of the surveillance economy (where users become the product because their data is valuable) (p. 175).

    11. Competition and Anti-Trust in Internet Markets

    This chapter provides a very good overview of the competition and anti-trust issues related to the Internet, but it would have been improved if it had referred to the excellent discussion in Noam’s chapter “From the Internet of Science to the Internet of Entertainment,” and to the recent academic literature on the topic.  Nevertheless, the description of key online market characteristics, including that they are often two-sided (p. 184), is excellent.  The description of the actual situation (including litigation) regarding search engines on p. 189 ff. is masterful: a superb example of the sort of real economic analysis that I would have liked to see in other chapters.

    The good discussion of network neutrality (p. 201) could have been improved by taking the next step and analysing the economic implications of considering whether the Internet infrastructure should be regulated as a public infrastructure and/or, for example, be subject to functional separation.

    12. The Economics of Copyright and the Internet

    This is an excellent introduction to the issues relating to copyright in the digital age.  It provides little data but that is because, as noted on pp. 238-241, there is a paucity of data for copyright, whereas there is more for patents.

    13. The Economics of Privacy, Data Protection and Surveillance

    As one would expect from its author, Ian Brown, this is an excellent discussion of the issues and, again, one of the best chapters in the book.  In particular, the chapter explains well and clearly (pp. 250 ff.) why market failures (e.g. externalities, information asymmetries, and anti-competitive market structures) might justify regulation (such as the European data privacy rules).

    14. Economics of Cybersecurity

    This chapter provides a very good overview of the economic issues related to cybersecurity, but, like most of the other chapters, it provides very little data and thus no detailed economic analysis.  It would have benefited from referring to the Internet Society’s 2016 Global Internet Report, which does provide data, and stresses the key market failures that result in the current lack of security of the Internet: information asymmetries (section 13.7.2 of the book) and externalities (section 13.7.3).

    However, the section on externalities fails to mention certain possible solutions, such as minimum security standards.  Minimum safety standards are imposed on many products, such as electrical appliances, automobiles, airplanes, pharmaceuticals, etc.  Thus it would have been appropriate for the book to discuss the economic implications of minimum security standards.  And also the economic implications of Microsoft’s recent call for a so-called Geneva Digital Convention.

    15. Internet Architecture and Innovation in Applications

    This chapter provides a very good description, but it suffers from considering the Internet in isolation, without comparing it to other networks, in particular the fixed and mobile telephone networks.  It would have been good to see a discussion and comparison of the economic drivers of innovation or lack of innovation in the two networks.  And also a discussion of the economic role of the telephony signalling network, Signalling System Seven (SS7) which enabled implementation of the widely used, and economically important, Short Messaging Service (SMS).

    In that context, it is important to note that SS7 is, as is the Internet, a connectionless packet-switched system.  So what distinguishes the two networks is more than technology: indeed, economic factors (such as how services are priced for end-users, interconnection regimes, etc.) surely play a role, and it would have been good if those had been explored.  In this context, see my paper “The Internet, its governance, and the multi-Stakeholder model”, Info, vol. 16. no. 2, March 2014.

    16. Organizational Innovations, ICTs and Knowledge Governance: The Case of Platforms

    As this excellent chapter, one of the best in the book, correctly notes, “platforms constitute a major organizational innovation” which has been “made possible by technological innovation”.

    As explained on pp. 338-339, platforms are one of the key components of the Internet economy, and this has recently been recognized by governments.  For example, the Legal Affairs Committee of the European Parliament adopted an Opinion in May 2017 that, among other provisions:

    Calls for an appropriate and proportionate regulatory framework that would guarantee responsibility, fairness, trust and transparency in platforms’ processes in order to avoid discrimination and arbitrariness towards business partners, consumers, users and workers in relation to, inter alia, access to the service, appropriate and fair referencing, search results, or the functioning of relevant application programming interfaces, on the basis of interoperability and compliance principles applicable to platforms.

    The topic is covered to some extent in a European Parliament Committee Report on online platforms and the digital single market (2016/2276(INI)), and by some provisions in French law.  Detailed references to the cited documents, and to other material relevant to platforms, are found in section 9 of my submission to the Working Group on Enhanced Cooperation.

    17. Interconnection in the Internet: Peering, Interoperability and Content Delivery

    This chapter provides a very good description of Internet interconnection, including a good discussion of the basic economic issues.  As do the other chapters, it suffers from a paucity of data, and does not discuss whether the current interconnection regime is working well, or whether it is facing economic issues.  The chapter does point out (p. 357) that “information about actual interconnection agreements … may help to understand how interconnection markets are changing …”, but fails to discuss how the unique barter structure of Internet interconnections, most of which are informal, zero-cost traffic sharing agreements, impedes the collection and publication of such information.

    The discussion on p. 346 would have benefited from an economic analysis of the advantages/disadvantages of considering the basic Internet infrastructure to be a basic public infrastructure (such as roads, water and electrical power distribution systems, etc.) and the economic tradeoffs of regulating its interconnection.

    Section 16.5.1 would have benefited from a discussion of the economic drivers behind the discussions in ITU that lead to the adoption of ITU-T Recommendation D.50 and its Supplements, and the economic issues arguing for and against implementation of the provisions of that Recommendation.

    18. Internet Business Strategies

    As this very good chapter explains, the Internet has had a dramatic impact on all types of businesses, and has given rise to “platformization”, that is the use of platforms (see chapter 15 above) to conduct business.  Platforms benefit from network externalities and enable two-sided markets.  The chapter includes a detailed analysis (pp. 370-372) of the strategic properties of the Internet that can be used to facilitate and transform business, such as scalability, ubiquity, externalities, etc.  It also notes that the Internet has changed the role of customers and both reduced and increased information asymmetries.  The chapter provides a very good taxonomy of Internet business models (pp. 372 ff.).

    19. The Economics of Internet Search

    The chapter contains a good history of search engines, and an excellent analysis of advertising linked to searches.  It provides theoretical models and explains the importance of two-sided markets in this context.  As the chapter correctly notes, additional research will require access to more data than are currently available.

    20. The Economics of Algorithmic Selection on the Internet

    As this chapter correctly notes (p. 395), “algorithms have come to shape our daily lives and realities.”  They have significant economic implications and raise “significant social risks such as manipulation and data bias, threats to privacy and violations of intellectual property rights”.  A good description of different types of algorithms and how they are used is given on p. 399.  Scale effects and concentration are discussed (p. 408) and the social risks are explained in detail on pp. 411 ff.:

    • Threats to basic rights and liberties.
    • Impacts on the mediation of reality.
    • Challenges to the future development of the human species.

    More specifically:

    • Manipulation
    • Diminishing variety
    • Constraints on freedom of expression
    • Threats to data protection and privacy
    • Social discrimination
    • Violation of intellectual property rights
    • Possible adaptations of the human brain
    • Uncertain effects on humans

    In this context, see also the numerous references in section 1 of my submission to the Working Group on Enhanced Cooperation.

    The chapter includes a good discussion of different governance models and their advantages/disadvantages, namely:

    • Laissez-faire markets
    • Self-organization by business
    • Self-regulation by industry
    • State regulation

    20. Online Advertising Economics

    This chapter provides a good history of what some have referred to as the Internet’s original sin, namely the advent of online advertising as the main revenue source for many Internet businesses.  It explains how the Internet can, and does, improve the efficiency of advertising by targeting (pp. 430 ff.) and it includes a detailed analysis of advertising in relation to search engines (pp. 435 ff.).

    21. Online News

    As the chapter correctly notes, this is an evolving area, so the chapter mostly consists of a narrative history.  The chapter’s conclusion starts by saying that “the Internet has brought growth and dynamism to the news industry”, but goes on to note, correctly, that “the financial outlook for news providers, old or new, is bleak” and that, thus far, nobody has found a viable business model to fund the online news business.  It is a pity that this chapter does not cite McChesney’s detailed analysis of this issue and discuss his suggestions for addressing it.

    22. The Economics of Online Video Entertainment

    This chapter provides the history of that segment of the Internet industry and includes a valuable comparison and analysis of the differences between online and offline entertainment media (pp. 462-464).

    23. Business Strategies and Revenue Models for Converged Video Services

    This chapter provides a clear and comprehensive description of how an effect of convergence “is the blurring of lines between formerly separated media platforms such as over-the-air broadcasting, cable TV, and streamed media.”  The chapter describes ten strategies and six revenue models that have been used to cope with these changes.

    24. The Economics of Virtual Worlds

    This chapter provides a good historical account of the evolution of the internal reward system of games, which went from virtual objects that players could obtain by solving puzzles (or completing similar tasks), to virtual money that could be acquired only within the game, to virtual money that could be acquired with real-world money, to large professional factories that produce and sell objects to World of Warcraft players in exchange for real-world money.  The chapter explores the legal and economic issues arising out of these situations (pp. 503-504) and gives a good overview of the research in virtual economies.

    25. Economics of Big Data

    This chapter correctly notes (p. 512) that big data is “a field with more questions than answers”.  Thus, logically, the chapter is mostly descriptive.  It includes a good account of two-sided markets (p. 519), and correctly notes (p. 521) that “data governance should not be construed merely as an economic matter but that it should also encompass a social perspective”, a position with which I wholeheartedly agree.  As the chapter says (p. 522), “there are some areas affected by big data where public policies and regulations do exist”, in particular regarding:

    • Privacy
    • Data ownership
    • Open data

    As the chapter says (p. 522), most evidence available today suggests that markets are not “responding rapidly to concerns of users about the (mis)use of their personal information”.  For additional discussion, with extensive references, see section 1 of my submission to the Working Group on Enhanced Cooperation.

    26. The Evolution of the Internet: A Socioeconomic Account

    This is a very weak chapter.  Its opening paragraph fails to consider the historical context of the development of the Internet, or its consequences.  Its second paragraph fails to consider the overt influence of the US government on the evolution of the Internet.  Section 26.3 fails to cite one of the most comprehensive works on the topic (the relation between AT&T and the development of the Internet), namely Dan Schiller’s Digital Depression: Information Technology and Economic Crisis (University of Illinois Press, 2014).  The discussion on p. 536 fails even to mention the Open Systems Interconnection (OSI) initiative, yet that initiative undoubtedly affected the development of the Internet, not just by providing a model for how not to do things (too complex, too slow), but also by providing some basic technology that is still used to this day, such as X.509 certificates.

    Section 26.6, on how market forces affect the Internet, seems oblivious to the mounting evidence that dominant market power, not competition, is shaping the future of the Internet, which appears surprising in light of the good chapter in the book on that very topic: “Competition and anti-trust in Internet markets.”  Page 547 appears to ignore the growing vertical integration of many Internet services, even though that trend is well discussed in Noam’s excellent chapter “From the Internet of Science to the Internet of Entertainment.”

    The discussion of the role of government on p. 548 is surprisingly lacunary, given the rich literature on the topic in general, and specific government actions or proposed actions regarding topics such as freedom of speech, privacy, data protection, encryption, security, etc. (see for example my submission to the Working Group on Enhanced Cooperation).

    This chapter should have started with the observation that the Internet was not conceived as a public network (p. 558) and built on that observation, explaining the socioeconomic factors that shaped its transformation from a closed military/academic network into a public network and into a basic infrastructure that now underpins most economic activities.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2o Review Digital Studies magazine.

    Back to the essay

  • Data and Desire in Academic Life

    Data and Desire in Academic Life

    a review of Erez Aiden and Jean-Baptiste Michel, Uncharted: Big Data as a Lens on Human Culture (Riverhead Books, reprint edition, 2014)
    by Benjamin Haber
    ~

    On a recent visit to San Francisco, I found myself trying to purchase groceries when my credit card was declined. As the cashier was telling me this news, and before I really had time to feel any particular way about it, my leg vibrated. I’d received a text: “Chase Fraud-Did you use card ending in 1234 for $100.40 at a grocery store on 07/01/2015? If YES reply 1, NO reply 2.” After replying “yes” (which was recognized even though I failed to follow instructions), I swiped my card again and was out the door with my food. Many have probably had a similar experience: most if not all credit card companies automatically track purchases for a variety of reasons, including fraud prevention, the tracking of illegal activity, and the offer of tailored financial products and services. As I walked out of the store, for a moment, I felt the power of “big data,” how real-time consumer information can be read as a predictor of a stolen card in less time than I had to consider why my card had been declined. It was a too-rare moment of reflection on those networks of activity that modulate our life chances and capacities, mostly below and above our conscious awareness.

    And then I remembered: didn’t I buy my plane ticket with the points from that very credit card? And in fact, hadn’t I used that card on multiple occasions in San Francisco for purchases not much less than the amount my groceries cost? While the near-instantaneous text provided reassurance before I could consciously recognize my anxiety, the automatic card decline was likely not a sophisticated piece of real-time, data-enabled prescience, but a rather blunt instrument, flagging the transaction on the basis of two data points: distance from home and amount of purchase. In fact, there is plenty of evidence to suggest that the gap between data collection and processing, between metadata and content, and between the current reality of data and its speculative future is still quite large. While Target’s pregnancy-predicting algorithm was a journalistic sensation, the more mundane computational confusion that has Gmail constantly serving me advertisements for trade and business schools shows the striking gap between the possibilities of what is collected and the current landscape of computationally prodded behavior. The text from Chase, your Klout score, the vibration of your Fitbit, or the probabilistic genetic information from 23andMe are all primarily affective investments in mobilizing a desire for data’s future promise. These companies and others are opening up new ground for discourse via affect, creating networked infrastructures for modulating the body and social life.

    I was thinking about this while reading Uncharted: Big Data as a Lens on Human Culture, a love letter to the power and utility of algorithmic processing of the words in books. Though ostensibly about the Google Ngram Viewer, a neat if one-dimensional tool to visualize the word frequency of a portion of the books scanned by Google, Uncharted is also unquestionably involved in the mobilization of desire for quantification. Though about the academy rather than financialization, medicine, sports or any other field being “revolutionized” by big data, its breathless boosterism and obligatory cautions are emblematic of the emergent datafied spirit of capitalism, a celebratory “coming out” of the quantifying systems that constitute the emergent infrastructures of sociality.
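    The computation underlying a tool like the Ngram Viewer is, at bottom, quite simple: count a term’s occurrences in each year’s books and divide by the total tokens published that year. A minimal sketch of that normalization, using a toy two-year “corpus” that is purely illustrative (the function name and data are assumptions, not Google’s actual pipeline):

    ```python
    from collections import Counter

    def ngram_frequencies(corpus_by_year, term):
        """Relative frequency of `term` per year: occurrences of the term
        divided by the total number of tokens published that year."""
        freqs = {}
        for year, texts in corpus_by_year.items():
            # Tokenize naively on whitespace; real pipelines do far more.
            counts = Counter(
                word for text in texts for word in text.lower().split()
            )
            total = sum(counts.values())
            freqs[year] = counts[term] / total if total else 0.0
        return freqs

    # A hypothetical mini-corpus, two "years" of three-word "books".
    corpus = {
        1900: ["data is scarce", "ink and paper"],
        2000: ["data data everywhere", "big data rises"],
    }

    freqs = ngram_frequencies(corpus, "data")  # e.g. {1900: 0.1666..., 2000: 0.5}
    ```

    Plotting such frequencies over time yields the Viewer’s familiar curves, and the one-dimensionality the review notes is visible in the code itself: everything about a book except its tokens and its date is discarded.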

    While published fairly recently, in 2013, Uncharted already feels dated in its strangely muted engagement with the variety of serious objections to sprawling corporate and state-run data systems in the post-Snowden, post-Target, post-Ashley Madison era (a list that will always be in need of updating). There is still the dazzlement at the sheer magnificent size of this potential new suitor—“If you wrote out all five zettabytes that humans produce every year by hand, you would reach the core of the Milky Way” (11)—all the more impressive when explicitly compared to the dusty old technologies of ink and paper. Authors Erez Aiden and Jean-Baptiste Michel are floating in a world of “simple and beautiful” formulas (45), “strange, fascinating and addictive” methods (22), producing “intriguing, perplexing and even fun” conclusions (119) in their drive to colonize the “uncharted continent” (76) that is the English language. The almost erotic desire for this bounty is made more explicit in their tongue-in-cheek characterization of their meetings with Google employees as an “irresistible… mating dance” (22):

    Scholars and scientists approach engineers, product managers, and even high-level executives about getting access to their companies’ data. Sometimes the initial conversation goes well. They go out for coffee. One thing leads to another, and a year later, a brand-new person enters the picture. Unfortunately this person is usually a lawyer. (22)

    There is a lot to unpack in these metaphors, the recasting of academic dependence on data systems designed and controlled by corporate entities as a sexy new opportunity for scholars and scientists. There are important conversations to be had about these circulations of quantified desire: about who gets access to this kind of data, the ethics of working with companies that have an existential interest in profit and shareholder return, and the cultural significance of wrapping business transactions in the language of heterosexual coupling. Here, however, I am mostly interested in the real allure that this passage and others speak to, and the attendant fear that mostly whispers, at least in a book written by Harvard PhDs with TED talks to give.

    For most academics in the social sciences and the humanities, “big data” is a term more likely to get caught in the throat than to inspire butterflies in the stomach. While Aiden and Michel certainly acknowledge that old-fashioned textual analysis (50) and theory (20) will have a place in this brave new world of charts and numbers, they provide a number of contrasts to suggest the relative poverty of even the most brilliant scholar in the face of big data. One hypothetical in particular, never directly answered but strongly implied, spoke to my discipline specifically:

    Consider the following question: Which would help you more if your quest was to learn about contemporary human society—unfettered access to a leading university’s department of sociology, packed with experts on how societies function, or unfettered access to Facebook, a company whose goal is to help mediate human social relationships online? (12)

    The existential threat at the heart of this question was catalyzed for many people in Roger Burrows and Mike Savage’s 2007 “The Coming Crisis of Empirical Sociology,” an early canary singing the worry of what Nigel Thrift has called “knowing capitalism” (2005). Knowing capitalism speaks to the ways that capitalism has begun to take seriously the task of “thinking the everyday” (1) by embedding information technologies within “circuits of practice” (5). For Burrows and Savage these practices can and should be seen as a largely unrecognized world of sophisticated and profit-minded sociology that makes the quantitative tools of academics look like “a very poor instrument” in comparison (2007: 891).

    Indeed, as Burrows and Savage note, the now ubiquitous social survey is a technology invented by social scientists, folks who were once seen as strikingly innovative methodologists (888). Despite ever more sophisticated statistical treatments, however, the now more than forty-year-old social survey remains the heart of social-scientific quantitative methodology in a radically changed context. And while declining response rates, a constraining nation-based framing, and competition from privately funded surveys have all decreased the efficacy of academic survey research (890), nothing has threatened the discipline like the embedded and “passive” collecting technologies that fuel big data. And with these methodological changes come profound epistemological ones: questions of how, when, why, and what we know of the world. These methods are inspiring changing ideas of generalizability and new expectations around the temporality of research. Does it matter, for example, that studies have questioned the accuracy of the Fitbit? The growing popularity of these devices suggests at the very least that sociologists should not count on empirical rigor to save them from irrelevance.

    As academia reorganizes around the speculative potential of digital technologies, there is an increasing pile of capital available to those academics able to translate between the discourses of data capitalism and a variety of disciplinary traditions. And the lure of this capital is perhaps strongest in the humanities, whose scholars have been disproportionately affected by state economic retrenchment in education spending that has increasingly prioritized quantitative, instrumental, and skill-based majors. The increasing urgency in the humanities to use bigger and faster tools is reflected in the surprisingly minimal hand-wringing over the politics of working with companies like Facebook, Twitter, and Google. If there is trepidation in the Ngram project recounted in Uncharted, it is mostly coming from Google, whose lawyers and engineers have little incentive to bother themselves with the politically fraught, theory-driven, Institutional Review Board slow lane of academic production. The power imbalance of this courtship leaves those academics who decide to partner with these companies at the mercy of their epistemological priorities and, as Uncharted demonstrates, the cultural aesthetics of corporate tech.

    This is a vision of the public humanities refracted through the language of public relations and the “measurable outcomes” culture of the American technology industry. Uncharted has taken to heart the power of (re)branding to change the valence of your work: Aiden and Michel would like you to call their big-data-inflected historical research “culturomics” (22). In addition to being a hopeful attempt to coin a buzzy new word about the digital, culturomics linguistically brings the humanities closer to the supposed precision, determination, and quantifiability of economics. And lest you think this multivalent bringing of culture to capital—or rather the renegotiation of “the relationship between commerce and the ivory tower” (8)—is unseemly, Aiden and Michel provide an origin story to show how futile this separation has always been.

    But the desire for written records has always accompanied economic activity, since transactions are meaningless unless you can clearly keep track of who owns what. As such, early human writing is dominated by wheeling and dealing: a menagerie of bets, chits, and contracts. Long before we had the writings of prophets, we had the writing of profits. (9)

    And no doubt this is true: culture is always already bound up with economy. But the full-throated embrace of culturomics is not a vision of interrogating and reimagining the relationship between economic systems, culture, and everyday life;[1] rather, it signals the acceptance of the idea of culture as a transactional business model. While Google has long imagined itself as a company with a social mission, it is a publicly held company that will be punished by investors if it neglects its bottom line of increasing the engagement of eyeballs on advertisements. The Ngram Viewer does not make Google money, but it perhaps increases public support for the company’s larger book-scanning initiative, which Google clearly sees as a valuable enough project to invest many years of labor and millions of dollars defending in court.

    This vision of the humanities is transactional in another way as well. While much of Uncharted is an attempt to demonstrate the profound, game-changing implications of the Ngram Viewer, there is a distinctly small-questions, cocktail-party-conversation feel to this type of inquiry that seems, ironically, more useful in preparing ABD humanities and social science PhDs for jobs in the service industry than in training them for the future of academia. It might be more precise to say that the Ngram Viewer is architecturally designed for small answers rather than small questions. All is resolved through linear projection: a winner and a loser, or stasis. This is a vision of research where the precise nature of the mediation (what books have been excluded? what is the effect of treating all books as equally revealing of human culture? what about those humans whose voices have been systematically excluded from the written record?) is ignored, and where the actual analysis of books, and indeed the books themselves, are black-boxed from the researcher.

    Uncharted speaks to the perils of doing research under the cloud of existential erasure and to the failure of academics to lead with a different vision of the possibilities of quantification. Collaborating with the wealthy corporate titans of data collection requires an acceptance of these companies’ own existential mandate: make tons of money by monetizing a dizzying array of human activities while speculatively reimagining the future in an attempt to maintain that cash flow. For Google, this is a vision where all activities, not just “googling,” are collected and analyzed in a seamlessly updating centralized system. Cars, thermostats, video games, photos, and businesses are integrated not for the public benefit but because of the power of scale to sell or rent or advertise products. Data is promised as a deterministic balm for the unknowability of life, and Google’s participation in academic research gives it the credibility to be your corporate (sen.se) mother. What, might we imagine, are the speculative possibilities of networked data not beholden to shareholder value?
    _____

    Benjamin Haber is a PhD candidate in Sociology at CUNY Graduate Center and a Digital Fellow at The Center for the Humanities. His current research is a cultural and material exploration of emergent infrastructures of corporeal data through a queer theoretical framework. He is organizing a conference called “Queer Circuits in Archival Times: Experimentation and Critique of Networked Data” to be held in New York City in May 2016.

    Back to the essay

    _____

    Notes

    [1] A project desperately needed in academia, where terms like “neoliberalism,” “biopolitics,” and “late capitalism” are more often than not used briefly at the end of a short section on implications, rather than being given the critical attention and nuanced intentionality that they deserve.

    Works Cited

    Savage, Mike, and Roger Burrows. 2007. “The Coming Crisis of Empirical Sociology.” Sociology 41 (5): 885–99.

    Thrift, Nigel. 2005. Knowing Capitalism. London: SAGE.

  • The Human Condition and The Black Box Society

    The Human Condition and The Black Box Society

    a review of Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015)
    by Nicole Dewandre
    ~

    1. Introduction

    This review is informed by its author’s specific standpoint: first, a lifelong experience in a policy-making environment, i.e. the European Commission; and, second, a passion for the work of Hannah Arendt and the conviction that she has a great deal to offer to politics and policy-making in this emerging hyperconnected era. As advisor for societal issues at DG Connect, the department of the European Commission in charge of ICT policy at EU level, I have had the privilege of convening the Onlife Initiative, which explored the consequences of the changes brought about by the deployment of ICTs on the public space and on the expectations toward policy-making. This collective thought exercise, which took place in 2012-2013, was strongly inspired by Hannah Arendt’s 1958 book The Human Condition.

    This is the background against which I read The Black Box Society: The Secret Algorithms That Control Money and Information by Frank Pasquale (references to which are indicated here parenthetically by page number). Two of the meanings of “black box”—a device that keeps track of everything during a flight, on the one hand, and the node of a system that prevents an observer from identifying the link(s) between input and output, on the other—serve as apt metaphors for today’s emerging Big Data environment.

    Pasquale digs deep into three sectors that are at the root of what he calls the black box society: reputation (how we are rated and ranked), search (how we use ratings and rankings to organize the world), and finance (money and its derivatives, whose flows depend crucially on forms of reputation and search). Algorithms and Big Data have permeated these three activities to a point where disconnection with human judgment or control can transmogrify them into blind zombies, opening new risks, affordances and opportunities. We are far from the ideal representation of algorithms as support for decision-making. In these three areas, decision-making has been taken over by algorithms, and there is no “invisible hand” ensuring that profit-driven corporate strategies will deliver fairness or improve the quality of life.

    The EU and the US contexts are both distinct and similar. In this review, I shall not comment on Pasquale’s specific policy recommendations in detail, even if, as a European, I appreciate the numerous references to European law and policy that Pasquale commends as good practices (ranging from digital competition law, to welfare state provision, to privacy policies). I shall instead comment from a meta-perspective: that of challenging the worldview that implicitly undergirds policy-making on both sides of the Atlantic.

    2. A Meta-perspective on The Black Box Society

    The meta-perspective as I see it is itself twofold: (i) we are stuck with Modern referential frameworks, which hinder our ability to attend to changing human needs, desires and expectations in this emerging hyperconnected era, and (ii) the personification of corporations in policymaking reveals shortcomings in the current representation of agents as interest-led beings.

    a) Game over for Modernity!

    As stated by the Onlife Initiative in its “Onlife Manifesto,” through its expression “Game over for Modernity?”, it is time for politics and policy-making to leave Modernity behind. That does not mean going back to the Middle Ages, as feared by some, but instead stepping firmly into the new era that is coming to us. I believe, with Genevieve Bell and Paul Dourish, that it is more effective to consider that we are now entering the ubiquitous computing era than to look at it as if it were approaching fast.[1] With the miniaturisation of devices and sensors, with mobile access to broadband internet, and with the generalized connectivity of objects as well as of people, we witness not only an expansion of the online world but, more fundamentally, a collapse of the distinction between the online and offline worlds, and therefore a radically new socio-technico-natural compound. We live in an environment which is increasingly reactive and talkative as a result of the intricate mix between the offline and online universes. Human interactions are also deeply affected by this new socio-technico-natural compound, as they are or will soon be “sticky”, i.e. leave a material trace by default, and this for the first time in history. These new affordances and constraints profoundly destabilize our Modern conceptual frameworks, which rely on distinctions that are blurring, such as the one between the real and the virtual, or the ones between humans, artefacts and nature, understood with mental categories dating back to the Enlightenment and before. The very expression “post-Modern” is no longer accurate, or is too shy, as it continues to position Modernity as its reference point. It is time to give a proper name to this new era we are stepping into, and hyperconnectivity may be such a name.

    Policy-making, however, continues to rely heavily on Modern conceptual frameworks, and this not only from the policy-makers’ point of view but, more widely, from all those engaging in the public debate. There are many structuring features of the Modern conceptual frameworks, and it certainly goes beyond this review to address them thoroughly. However, when it comes to addressing the challenges described by The Black Box Society, it is important to mention the epistemological stance that has been spelled out brilliantly by Susan H. Williams in her Truth, Autonomy, and Speech: Feminist Theory and the First Amendment: “the connection forged in Cartesianism between knowledge and power”[2]. Before encountering Susan Williams’s work, I came to refer to this stance less elegantly with the expression “omniscience-omnipotence utopia”[3]. Williams writes that “this epistemological stance has come to be so widely accepted and so much a part of many of our social institutions that it is almost invisible to us” and that “as a result, lawyers and judges operate largely unself-consciously with this epistemology”[4]. To Williams’s “lawyers and judges”, we should add policy-makers and stakeholders. This Cartesian epistemological stance grounds the conviction that the world can be elucidated in causal terms, that knowledge is about prediction and control, and that there is no limit to what men can achieve provided they have the will and the knowledge. In this Modern worldview, men are considered rational subjects and their freedom is synonymous with control and autonomy. The fact that we have a limited lifetime and attention span is out of the picture, as is the human’s inherent relationality. Issues are framed as if transparency and control are all that men need to make their own way.

    1) One-Way Mirror or Social Hypergravity?

    Frank Pasquale is well aware of, and has contributed to, the emerging critique of transparency, and he states clearly that “transparency is not just an end in itself” (8). However, there are traces of the Modern reliance on transparency as a regulative ideal in The Black Box Society. One of them is when he mobilizes the one-way mirror metaphor. He writes:

    We do not live in a peaceable kingdom of private walled gardens; the contemporary world more closely resembles a one-way mirror. Important corporate actors have unprecedented knowledge of the minutiae of our daily lives, while we know little to nothing about how they use this knowledge to influence the important decisions that we—and they—make. (9)

    I refrain from considering the Big Data environment as an environment that “makes sense” on its own, provided someone has access to as much data as possible. In other words, the algorithms crawling the data can hardly be compared to a “super-spy” providing the data controller with an absolute knowledge.

    Another shortcoming of the one-way mirror metaphor is that the implicit corrective is a transparent pane of glass, so that the watched can watch the watchers. This reliance on transparency is misleading. I prefer another metaphor that, in my view, better characterises the Big Data environment in a hyperconnected conceptual framework. As alluded to earlier, in contradistinction to the previous centuries and even millennia, human interactions will, by default, be “sticky”, i.e. leave a trace. Evanescence of interactions, which used to be the default for millennia, will instead require active measures to be ensured. So, my metaphor for capturing the radicality and the scope of this change is a change of “social atmosphere” or “social gravity”, as it were. For centuries, we have slowly developed social skills, behaviors and regulations, i.e. a whole ecosystem, to strike a balance between accountability and freedom, in a world where “verba volant, scripta manent”[5], i.e. where human interactions took place in an “atmosphere” with a 1g “social gravity”, where they were evanescent by default and where action had to be taken to register them. Now, with all interactions leaving a trace by default, and each of us going around with his, her or its digital shadow, we are drifting fast towards an era where the “social atmosphere” will be of heavier gravity, say “10g”. The challenge is huge and will require a lot of collective learning and adaptation to develop the literacy and regulatory frameworks that will recreate and sustain the balance between accountability and freedom for all agents, human and corporate.

    The heaviness of this new data density stands in between, or is orthogonal to, the two phantasms of the bright emancipatory promises of Big Data, on the one hand, and the frightening fears of Big Brother, on the other. Because of this social hypergravity, we, individually and collectively, have indeed to be cautious about the use of Big Data, as we have to be cautious when handling dangerous or unknown substances. This heavier atmosphere, as it were, opens onto increased possibilities of hurting others, notably through harassment, bullying and false rumors. The advent of Big Data does not, by itself, provide a “license to fool”, nor does it free agents from the need to behave and avoid harming others. Exploiting asymmetries and new affordances to fool or to hurt others is no more acceptable behavior than it was before the advent of Big Data. Hence, although from a different metaphorical standpoint, I support Pasquale’s recommendations to pay increased attention to the new ways in which current and emergent practices relying on algorithms in reputation, search and finance may be harmful, misleading or deceptive.

    2) The Politics of Transparency or the Exhaustive Labor of Watchdogging?

    Another “leftover” of the Modern conceptual framework that surfaces in The Black Box Society is the reliance on watchdogging to ensure proper behavior by corporate agents. Relying on watchdogging nurtures the idea that it is all right to behave badly, as long as one is not seen doing so. This reinforces the idea that the qualification of an act depends on whether it is unveiled, as if, as long as it goes unnoticed, it were all right. This puts the entire burden on the watchers and no burden whatsoever on the doers. It positions a sort of symbolic face-to-face between supposedly mindless firms, who are enabled to pursue their careless strategies as long as they are not put under the light, and people, who are expected to spend all their time, attention and energy raising indignation against wrong behaviors. Far from empowering the watchers, this framing enslaves them to waste time monitoring actors who should already be acting in much better ways. Indeed, if unacceptable behavior is unveiled, it raises outrage, but outrage is far from bringing a solution per se. If, instead, proper behaviors are witnessed, then the watchers are bound to praise the doers. In both cases, watchers are stuck in a passive, reactive and specular posture, while all the glory or the shame is on the side of the doers. I don’t deny the need to have watchers, but I warn against the temptation of relying excessively on the divide between doers and watchers to police behaviors, without engaging collectively in the formulation of what proper and inappropriate behaviors are. There is no ready-made consensus about this, so it requires informed exchange of views and hard collective work. As Pasquale explains in an interview where he defends interpretative approaches to social sciences against quantitative ones:

    Interpretive social scientists try to explain events as a text to be clarified, debated, argued about. They do not aspire to model our understanding of people on our understanding of atoms or molecules. The human sciences are not natural sciences. Critical moral questions can’t be settled via quantification, however refined “cost benefit analysis” and other political calculi become. Sometimes the best interpretive social science leads not to consensus, but to ever sharper disagreement about the nature of the phenomena it describes and evaluates. That’s a feature, not a bug, of the method: rather than trying to bury normative differences in jargon, it surfaces them.

    The excessive reliance on watchdogging enslaves the citizenry to serve as mere “watchdogs” of corporations and government, and prevents any constructive cooperation with corporations and governments. It drains citizens’ energy for pursuing their own goals and making their own positive contributions to the world, notably by engaging in the collective work required to outline, nurture and maintain a shared sense of what counts as appropriate behavior.

    As a matter of fact, watchdogging would be nothing more than an exhausting laboring activity.

    b) The Personification of Corporations

    One of the red threads unifying The Black Box Society’s treatment of numerous technical subjects is unveiling the oddness of the comparative postures and statuses of corporations, on the one hand, and people, on the other. As nicely put by Pasquale, “corporate secrecy expands as the privacy of human beings contracts” (26), while, in the meantime, the divide between government and business is narrowing (206). Pasquale points also to the fact that at least since 2001, people have been routinely scrutinized by public agencies to deter the threatening ones from hurting others, while the threats posed by corporate wrongdoing in 2008 gave rise to much less attention and effort to hold corporations to account. He also notes that “at present, corporations and government have united to focus on the citizenry. But why not set government (and its contractors) to work on corporate wrongdoings?” (183) It is my view that these oddnesses go along with what I would call a “sensitive inversion”. Corporations, which are functional beings, are granted sensitivity as if they were human beings, in policy-making imaginaries and narratives, while men and women, who are sensitive beings, are approached in policy-making as if they were functional beings, i.e. consumers, job-holders, investors, bearers of fundamental rights, but never personae per se. The granting of sensitivity to corporations goes beyond the legal aspect of their personhood. It entails that corporations are the ones whose so-called needs are taken care of by policy-makers, and the ones really addressed, qua personae. Policies are designed with business needs in mind, to foster their competitiveness or their “fitness”. People are only indirect or secondary beneficiaries of these policies.

    The inversion of sensitivity might not be a problem per se, if it opened pragmatically onto an effective way to design and implement policies that in the end bear positive effects for men and women. But Pasquale provides ample evidence showing that this is not the case, at least in the three sectors he has looked at most closely, and certainly not in finance.

    Pasquale’s critique of the hypostatization of corporations and the reduction of humans has many theoretical antecedents. Looking at it from the perspective of Hannah Arendt’s The Human Condition illuminates the shortcomings and risks of considering corporations as agents in the public space, and helps us understand the consequences of granting them sensitivity, or, as it were, human rights. Action is the activity that flows from the fact that men and women are plural and interact with each other: “the human condition of action is plurality”.[6] Plurality is itself a ternary concept made of equality, uniqueness and relationality. First, equality is what we grant to each other when entering into a political relationship. Second, uniqueness refers to the fact that what makes each human a human qua human is precisely that who s/he is is unique. If we treat other humans as interchangeable entities or as characterised by their attributes or qualities, i.e., as a what, we do not treat them as human qua human, but as objects. Last and by no means least, the third component of plurality is the relational and dynamic nature of identity. For Arendt, the disclosure of the who “can almost never be achieved as a wilful purpose, as though one possessed and could dispose of this ‘who’ in the same manner he has and can dispose of his qualities”[7]. The who appears unmistakably to others, but remains somewhat hidden from the self. It is this relational and revelatory character of identity that confers on speech and action such a critical role and that articulates action with identity and freedom. Indeed, for entities for whom the who is partly out of reach and matters, appearance in front of others, notably through speech and action, is a necessary condition of revealing that identity:

    Action and speech are so closely related because the primordial and specifically human act must at the same time contain the answer to the question asked of every newcomer: who are you? In acting and speaking, men show who they are, they appear. Revelatory quality of speech and action comes to the fore where people are with others and neither for, nor against them, that is in sheer togetherness.[8]

    So, in this sense, the public space is the arena where whos appear to other whos, personae to other personae.

    For Arendt, the essence of politics is freedom, and it is grounded in action, not in labour and work. The public space is where agents coexist and experience their plurality, i.e. the fact that they are equal, unique and relational. So, it is much more than the usual American pluralist (i.e., early Dahl-ian) conception of a space where agents worry exclusively about their own needs by bargaining aggressively. In Arendt’s perspective, the public space is where agents, self-aware of their plural characteristic, interact with each other once their basic needs have been taken care of in the private sphere. As highlighted by Seyla Benhabib in The Reluctant Modernism of Hannah Arendt, “we not only owe to Hannah Arendt’s political philosophy the recovery of the public as a central category for all democratic-liberal politics; we are also indebted to her for the insight that the public and the private are interdependent”.[9] One could not appear in public if s/he or it did not also have a private place, notably to attend to his, her or its basic needs for existence. In Arendtian terms, interactions in the public space take place between agents who are beyond their satiety threshold. Acknowledging satiety is a precondition for engaging with others in a way that is not driven by one’s own interest, but rather by the desire to act together with others—“in sheer togetherness”—and be acknowledged as who they are. If an agent perceives him-, her- or itself and behaves only as a profit-maximiser or as an interest-led being, i.e. if s/he or it has no sense of satiety and no self-awareness of the relational and revelatory character of his, her or its identity, then s/he or it cannot be a “who” or an agent in political terms, and therefore cannot answer for him-, her- or itself. It simply does not deserve, and therefore should not be granted, the status of a persona in the public space.

    It is easy to imagine that there can indeed be no freedom below satiety, and that “sheer togetherness” would be impossible among agents below their satiety level or deprived of one. This is however the situation we are in, symbolically, when we grant corporations the status of persona while considering it efficient and appropriate that they care only for profit-maximisation. For a business, making a profit is a condition of staying alive, as, for humans, eating is a condition of staying alive. However, in the name of the need to compete on global markets, to foster growth and to provide jobs, policy-makers embrace and legitimize an approach to businesses as profit-maximisers, despite the fact that this is a reductionist caricature of what is allowed by the legal framework of company law[10]. So, the condition for businesses to deserve the status of persona in the public space is, no less than for men and women, to attend to their whoness and honour their identity, by staying away from behaving according to their narrowly defined interests. It means also caring for the world as much as, if not more than, for themselves.

    This resonates meaningfully with the quotation from Heraclitus that serves as the epigraph for The Black Box Society: “There is one world in common for those who are awake, but when men are asleep each turns away into a world of his own”. Reading Arendt through Heraclitus’s categories of sleep and wakefulness, one might consider that totalitarianism arises—or is not far away—when human beings are awake in private, but asleep in public, in the sense that they silence their humanness, or that their humanness is silenced by others, when appearing in public. In this perspective, the merging of markets and politics—as highlighted by Pasquale—could be seen as a generalized sleep in the public space of human beings and corporations, qua personae, while all awakened activities take place in the private sphere, exclusively driven by needs and interests.

    In other words, some might find a book like The Black Box Society, which offers a bold reform agenda for numerous agencies, too idealistic. In my view, however, it falls short of being idealistic enough: there is a missing normative core to the proposals in the book, which can be supplied by democratic, political, and particularly Arendtian theory. If a populace cannot accept a certain level of goods and services as satiating its needs, and if it distorts the revelatory character of identity into an endless pursuit of limitless growth, it cannot have the proper lens and approach to formulate what it takes to enable the fairness and fair play described in The Black Box Society.

    3. Stepping into Hyperconnectivity

    1) Agents as Relational Selves

    A central feature of the Modern conceptual framework underlying policymaking is the figure of the rational subject as political proxy of humanness. I claim that this is not effective anymore in ensuring a fair and flourishing life for men and women in this emerging hyperconnected era and that we should adopt instead the figure of a “relational self” as it emerges from the Arendtian concept of plurality.

    The concept of the rational subject was forged to erect Man over nature. Nowadays, the problem is not so much to distinguish men from nature, but rather to distinguish men—and women—from artefacts. Robots come close to humans and even outperform them, if we continue to define humans as rational subjects. The figure of the rational subject is torn apart between “truncated gods”—when Reason is considered as what eventually brings an overall lucidity—on the one hand, and “smart artefacts”—when reason is nothing more than logical steps or algorithms—on the other. Men and women are neither “Deep Blue” nor mere automatons. In between these two phantasms, the humanness of men and women is smashed. This is indeed what happens in the Kafkaesque and ridiculous situations where a thoughtless and mindless approach to Big Data is implemented, and this from both stances, as workers and as consumers. As far as the working environment is concerned, “call centers are the ultimate embodiment of the panoptic workspace. There, workers are monitored all the time” (35). Indeed, this type of overtly monitored working environment is nothing other than a materialisation of the panopticon. As consumers, we all see what Pasquale means when he writes that “far more [of us] don’t even try to engage, given the demoralizing experience of interacting with cyborgish amalgams of drop-down menus, phone trees, and call center staff”. In fact, this mindless use of automation is only the latest version of the way we have been thinking for decades, i.e. that progress means rationalisation and de-humanisation across the board. The real culprits are not algorithms themselves, but the careless and automaton-like human implementers and managers who act according to a conceptual framework in which rationalisation and control are all that matter. More than the technologies, it is the belief that management is about control and monitoring that makes these environments properly inhuman. So, staying stuck with the rational subject as a proxy for humanness either ends up smashing our humanness as workers and consumers or, at best, leads to absurd situations where to be free would mean spending all our time checking that we are not controlled.

    As a result, keeping the rational subject as the central representation of humanness will be increasingly misleading, politically speaking. It fails to provide a compass for treating each other fairly and for making appropriate decisions and judgments, in order to have a positive and meaningful impact on human lives.

    With her concept of plurality, Arendt offers an alternative to the rational subject for defining humanness: that of the relational self. The relational self, as it emerges from the Arendtian concept of plurality[11], is the man, woman or agent self-aware of his, her or its plurality, i.e. of the facts that (i) he, she or it is equal to his, her or its fellows; (ii) she, he or it is unique, as all other fellows are unique; and (iii) his, her or its identity has a revelatory character, requiring him, her or it to appear among others in order to reveal itself through speech and action. This figure of the relational self accounts for what is essential to protect politically in our humanness in a hyperconnected era, i.e. that we are truly interdependent through the mutual recognition that we grant to each other, and that our humanity is grounded precisely in that mutual recognition, much more than in any “objective” difference or criterion that would allow an expert system to sort human from non-human entities.

    The relational self, as arising from Arendt’s plurality, combines relationality and freedom. It resonates deeply with the vision proposed by Susan H. Williams, i.e. the relational model of truth and the narrative model of autonomy, intended to overcome the shortcomings of the Cartesian and liberal approaches to truth and autonomy without throwing the baby, i.e. the notions of agency and responsibility, out with the bathwater, as the social constructionist and feminist critiques of the conceptions of truth and autonomy may be understood as doing.[12]

    Adopting the relational self, instead of the rational subject, as the canonical figure of humanness brings to light the direct relationship between the quality of interactions, on the one hand, and the quality of life, on the other. In contradistinction to transparency and control, which are meant to empower non-relational individuals, relational selves are self-aware that they need respect and fair treatment from others. This figure also makes room for vulnerability, notably the vulnerability of our attentional spheres, and for saturation, i.e. the fact that we have a limited attention span and are far from making a “free choice” when clicking on “I have read and accept the Terms & Conditions”. Instead of transparency and control as policy ends in themselves, the quality of life of relational selves, and the robustness of the world they construct together and that lies between them, depend critically on being treated fairly and not being fooled.

    It is interesting to note that the word “trust” blooms in policy documents, showing that consciousness of the fact that we rely on each other is building up. Referring to trust as if it needed to be built is, however, a signature of the fact that we are in transition from Modernity to hyperconnectivity, and have not yet fully arrived. By approaching trust as something that can be materialized, we look at it with Modern eyes. As “consent is the universal solvent” (35) of control, transparency-and-control is the universal solvent of trust. Indeed, we know that transparency and control nurture suspicion and distrust. And that is precisely why they have been adopted as Modern regulatory ideals. Arendt writes: “After this deception [that we were fooled by our senses], suspicions began to haunt Modern man from all sides”[13]. So, indeed, Modern conceptual frameworks rely heavily on suspicion, as a sort of transposition into the realm of human affairs of the systematic-doubt approach to scientific enquiry. Frank Pasquale quotes the moral philosopher Iris Murdoch: “Man is a creature who makes pictures of himself and then comes to resemble the picture” (89). If she is right—and I am afraid she is—it is of the utmost importance to shift away from picturing ourselves as rational subjects and to embrace instead the figure of relational selves, if only to preserve trust as a general baseline in human affairs. Indeed, if it came true that trust could only be the outcome of a generalized suspicion, then we would be lost.

    Besides grounding the notion of the relational self, the Arendtian concept of plurality makes it possible to account for interactions among humans and other plural agents that go beyond fulfilling basic needs (necessity) or achieving goals (instrumentality), and that lead to the revelation of their identities while giving rise to unpredictable outcomes. As such, plurality enriches the basket of representations for interactions in policy making. It brings, as it were, a post-Modern (or dare I say hyperconnected) view to interactions. The Modern conceptual basket for representations of interactions includes, as its central piece, causality. In Modern terms, the notion of equilibrium is approached through a mutual neutralization of forces, either with the invisible-hand metaphor or with Montesquieu’s division of powers. The Modern approach to interactions is anchored either in the representation of one pole being active or dominating (the subject) and the other pole being inert or dominated (nature, object, servant), or in the notion of conflicting interests or dilemmas. In this framework, the notion of equality is straitjacketed and cannot be embodied. As we have seen, this Modern straitjacket leads to approaching freedom through control and autonomy, constrained by the fact that Man is, unfortunately, not alone. Hence, in the Modern approach to humanness and freedom, plurality is a constraint, not a condition, while for relational selves, freedom is grounded in plurality.

    2) From Watchdogging to Accountability and Intelligibility

    If the quest for transparency and control is as illusory and worthless for relational selves as it was instrumental for rational subjects, this does not mean that anything goes. Interactions among plural agents can only take place satisfactorily if basic and important conditions are met. Relational selves are in high need of fairness towards themselves and accountability from others. Avoiding deception and humiliation[14] is certainly a basic condition of decency in the public space.

    Once equipped with this concept of the relational self as the canonical figure of political agents in a hyperconnected era, be they men, women, corporations or even States, one can indeed see clearly why the recommendations Pasquale offers in his final two chapters, “Watching (and Improving) the Watchers” and “Towards an Intelligible Society,” are so important. Indeed, if watchdogging the watchers has been criticized earlier in this review as an exhausting laboring activity that does not deliver accountability, improving the watchers goes beyond watchdogging and strives for greater accountability. As for intelligibility, I think it is indeed much more meaningful and relevant than transparency.

    Pasquale invites us to think carefully about regimes of disclosure along three dimensions: depth, scope and timing. He calls for fair data practices that could be enhanced by establishing forms of supervision of the kind that have been established for checking on research practices involving human subjects. Pasquale suggests that each person is entitled to an explanation of the rationale for a decision concerning them, and should have the ability to challenge that decision. He recommends immutable audit logs for holding spying activities to account. He calls also for regulatory measures to compensate for the market failures arising from the fact that dominant platforms are natural monopolies. Given the importance of reputation and ranking and the dominance of Google, he argues that the First Amendment cannot be mobilized as a wild card absolving internet giants of accountability. He calls for a “CIA for finance” and a “Corporate NSA,” believing governments should devote more effort to pursuing corporate wrongdoing. He argues that the approach taken in the area of Health Fraud Enforcement could bear fruit in finance, search and reputation.

    What I appreciate in Pasquale’s call for intelligibility is that it is indeed calibrated to the needs of relational selves: to interact with each other, to make sound decisions and to orient themselves in the world. Intelligibility is different from omniscience-omnipotence. It is about making sense of the world, while keeping in mind that there are different ways to do so. Intelligibility connects relational selves to the world surrounding them and allows them to act with others and move around. In the last chapter, Pasquale mentions the importance of restoring trust and the need to nurture a public space in the hyperconnected era. He calls for an endgame to the Black Box. I agree with him that conscious deception inherently dissolves plurality and the common world, and needs to be strongly combatted, but I think that much of what takes place today goes beyond that and constitutes genuinely new and uncharted territory for humankind. With plurality, we can also embrace contingency in a less dramatic way than we used to in the Modern era. Contingency is a positive approach to un-certainty. It accounts for the openness of the future. The very word un-certainty is built in such a manner that certainty is considered the ideal outcome.

    4. WWW, or Welcome to the World of Women or a World Welcoming Women[15]

    To some extent, the fears of men in a hyperconnected era reflect all-too-familiar experiences of women. Being objects of surveillance and control, laboring exhaustingly without reward, being lost through the holes of the meritocracy net, being constrained in a specular posture towards others’ deeds: all these stances have been the fate of women’s lives for centuries, if not millennia. What men fear from the State or from “Big (br)Other”, women have experienced with men. So, welcome to the world of women….

    But this situation may be looked at more optimistically, as an opportunity for women’s voices and thoughts to go mainstream and be listened to. Now that equality between women and men is enshrined in the political and legal systems of the EU and the US, women have, concretely, been admitted to the status of “rational subject”, but that does not dissolve its masculine origin, nor the oddness or uneasiness for women of embracing this figure. Indeed, it was forged by men with men in mind, women being, for those men, indexed on nature. Mainstreaming the figure of the relational self, born in the mind of Arendt, will be much more inspiring and empowering for women than the rational subject was. In fact, it enhances their agency and the performativity of their thoughts and theories. So, are we heading towards a world welcoming women?

    In conclusion, the advent of Big Data can be looked at in two ways. The first is to see it as the endpoint of the materialisation of all the promises and fears of Modern times. The second is to see it as a wake-up call for a new beginning; indeed, by making obvious the absurdity and the price of following the Modern conceptual frameworks through to their ultimate consequences, it calls for thinking on new grounds about how to make sense of the human condition and make it thrive. The former makes humans redundant, is self-fulfilling and does not deserve human attention and energy. Without any hesitation, I opt for the latter, i.e. the wake-up call and the new beginning.

    Let’s engage in this hyperconnected era bearing in mind Virginia Woolf’s “Think we must”[16] and, thereby, shape and honour the human condition in the 21st century.
    _____

    Nicole Dewandre has academic degrees in engineering, economics and philosophy. She is a civil servant in the European Commission, since 1983. She was advisor to the President of the Commission, Jacques Delors, between 1986 and 1993. She then worked in the EU research policy, promoting gender equality, partnership with civil society and sustainability issues. Since 2011, she has worked on the societal issues related to the deployment of ICT technologies. She has published widely on organizational and political issues relating to ICTs.

    The views expressed in this article are the sole responsibility of the author and in no way represent the view of the European Commission and its services.

    _____

    Acknowledgments: This review has been made possible by the Faculty of Law of the University of Maryland in Baltimore, who hosted me as a visiting fellow for the month of September 2015. I am most grateful to Frank Pasquale, first for having written this book, but also for engaging with me so patiently over the month of September and paying so much attention to my arguments, even suggesting in some instances the best way for making my points, when I was diverging from his views. I would also like to thank Jérôme Kohn, director of the Hannah Arendt Center at the New School for Social Research, for his encouragements in pursuing the mobilisation of Hannah Arendt’s legacy in my professional environment. I am also indebted, and notably for the conclusion, to the inspiring conversations I have had with Shauna Dillavou, executive director of CommunityRED, and Soraya Chemaly, Washington-based feminist writer, critic and activist. Last, and surely not least, I would like to thank David Golumbia for welcoming this piece in his journal and for the care he has put into editing this text written by a non-native English speaker.

    [1] This change of perspective has the interesting side effect of pulling the rug from under the feet of those “addicted to speed”; Pasquale is right to point to this addiction (195) as one of the reasons “why so little is being done” to address the challenges arising from the hyperconnected era.

    [2] Williams, Truth, Autonomy, and Speech, New York: New York University Press, 2004 (35).

    [3] See, e.g., Nicole Dewandre, ‘Rethinking the Human Condition in a Hyperconnected Era: Why Freedom Is Not About Sovereignty But About Beginnings’, in The Onlife Manifesto, ed. Luciano Floridi, Springer International Publishing, 2015 (195–215).

    [4]Williams, Truth, Autonomy, and Speech (32).

    [5] Literally: “spoken words fly; written ones remain”.

    [6] Apart from action, Arendt distinguishes two other fundamental human activities that together with action account for the vita activa. These two other activities are labour and work. Labour is the activity that men and women engage in to stay alive, as organic beings: “the human condition of labour is life itself”. Labour is totally pervaded by necessity and processes. Work is the type of activity men and women engage with to produce objects and inhabit the world: “the human condition of work is worldliness”. Work is pervaded by a means-to-end logic or an instrumental rationale.

    [7] Arendt, The Human Condition, 1958; reissued, University of Chicago Press, 1998 (159).

    [8] Arendt, The Human Condition (160).

    [9] Seyla Benhabib, The Reluctant Modernism of Hannah Arendt, Revised edition, Lanham, MD: Rowman & Littlefield Publishers, 2003, (211).

    [10] See notably the work of Lynn Stout and the Frank Bold Foundation’s project on the purpose of corporations.

    [11] This expression was introduced in the Onlife Initiative by Charles Ess, but in a different perspective. Ess’s relational self is grounded in pre-Modern and Eastern/oriental societies. He writes: “In “Western” societies, the affordances of what McLuhan and others call “electric media,” including contemporary ICTs, appear to foster a shift from the Modern Western emphases on the self as primarily rational, individual, and thereby an ethically autonomous moral agent towards greater (and classically “Eastern” and pre-Modern) emphases on the self as primarily emotive, and relational—i.e., as constituted exclusively in terms of one’s multiple relationships, beginning with the family and extending through the larger society and (super)natural orders”. Ess, in Floridi, ed., The Onlife Manifesto (98).

    [12] Williams, Truth, Autonomy, and Speech.

    [13] Hannah Arendt and Jerome Kohn, Between Past and Future, Revised edition, New York: Penguin Classics, 2006 (55).

    [14] See Richard Rorty, Contingency, Irony, and Solidarity, New York: Cambridge University Press, 1989.

    [15] I thank Shauna Dillavou for suggesting these alternate meanings for “WWW.”

    [16] Virginia Woolf, Three Guineas, New York: Harvest, 1966.

  • Artificial Intelligence as Alien Intelligence

    Artificial Intelligence as Alien Intelligence

    By Dale Carrico
    ~

    Science fiction is a genre of literature in which artifacts and techniques humans devise as exemplary expressions of our intelligence result in problems that perplex our intelligence or even bring it into existential crisis. It is scarcely surprising that a genre so preoccupied with the status and scope of intelligence would provide endless variations on the conceits of either the construction of artificial intelligences or contact with alien intelligences.

    Of course, both the making of artificial intelligence and making contact with alien intelligence are organized efforts to which many humans are actually devoted, and not simply imaginative sites in which writers spin their allegories and exhibit their symptoms. It is interesting that after generations of failure the practical efforts to construct artificial intelligence or contact alien intelligence have often shunted their adherents to the margins of scientific consensus and invested these efforts with the coloration of scientific subcultures: While computer science and the search for extraterrestrial intelligence both remain legitimate fields of research, both AI and aliens also attract subcultural enthusiasms and resonate with cultic theology, each attracts its consumer fandoms and public Cons, each has its True Believers and even its UFO cults and Robot cults at the extremities.

    Champions of artificial intelligence in particular have coped in many ways with the serial failure of their project to achieve its desired end (which is not to deny that the project has borne fruit) whatever the confidence with which generation after generation of these champions have insisted that desired end is near. Some have turned to more modest computational ambitions, making useful software or mischievous algorithms in which sad vestiges of the older dreams can still be seen to cling. Some are simply stubborn dead-enders for Good Old Fashioned AI’s expected eventual and even imminent vindication, all appearances to the contrary notwithstanding. And still others have doubled down, distracting attention from the failures and problems bedeviling AI discourse simply by raising its pitch and stakes, no longer promising that artificial intelligence is around the corner but warning that artificial super-intelligence is coming soon to end human history.


    Another strategy for coping with the failure of artificial intelligence on its conventional terms has assumed a higher profile among its champions lately, drawing support for the real plausibility of one science-fictional conceit — construction of artificial intelligence — by appealing to another science-fictional conceit, contact with alien intelligence. This rhetorical gambit has often been conjoined to the compensation of failed AI with its hyperbolic amplification into super-AI which I have already mentioned, and it is in that context that I have written about it before myself. But in a piece published a few days ago in The New York Times, “Outing A.I.: Beyond the Turing Test,” Benjamin Bratton, a professor of visual arts at U.C. San Diego and Director of a design think-tank, has elaborated a comparatively sophisticated case for treating artificial intelligence as alien intelligence with which we can productively grapple. Near the conclusion of his piece Bratton declares that “Musk, Gates and Hawking made headlines by speaking to the dangers that A.I. may pose. Their points are important, but I fear were largely misunderstood by many readers.” Of course these figures made their headlines by making the arguments about super-intelligence I have already rejected, and mentioning them seems to indicate Bratton’s sympathy with their gambit and even suggests that his argument aims to help us to understand them better on their own terms. Nevertheless, I take Bratton’s argument seriously not because of but in spite of this connection. Ultimately, Bratton makes a case for understanding AI as alien that does not depend on the deranging hyperbole and marketing of robocalypse or robo-rapture for its force.

    In the piece, Bratton claims “Our popular conception of artificial intelligence is distorted by an anthropocentric fallacy.” The point is, of course, well taken, and the litany he rehearses to illustrate it is enormously familiar by now as he proceeds to survey popular images from Kubrick’s HAL to Jonze’s Her and to document public deliberation about the significance of computation articulated through such imagery as the “rise of the machines” in the Terminator franchise or the need for Asimov’s famous fictional “Three Laws of Robotics.” It is easy — and may nonetheless be quite important — to agree with Bratton’s observation that our computational/media devices lack cruel intentions and are not susceptible to Asimovian consciences, and hence thinking about the threats and promises and meanings of these devices through such frames and figures is not particularly helpful to us even though we habitually recur to them by now. As I say, it would be easy and important to agree with such a claim, but Bratton’s proposal is in fact a somewhat different one:

    [A] mature A.I. is not necessarily a humanlike intelligence, or one that is at our disposal. If we look for A.I. in the wrong ways, it may emerge in forms that are needlessly difficult to recognize, amplifying its risks and retarding its benefits. This is not just a concern for the future. A.I. is already out of the lab and deep into the fabric of things. “Soft A.I.,” such as Apple’s Siri and Amazon recommendation engines, along with infrastructural A.I., such as high-speed algorithmic trading, smart vehicles and industrial robotics, are increasingly a part of everyday life.

    Here the serial failure of the program of artificial intelligence is redeemed simply by declaring victory. Bratton demonstrates that crying uncle does not preclude one from still crying wolf. It’s not that Siri is some sickly premonition of the AI-daydream still endlessly deferred, but that it represents the real rise of what robot cultist Hans Moravec once promised would be our “mind children,” here and now, as elfin aliens with an intelligence unto themselves. It’s not that calling a dumb car a “smart” car is simply a hilarious bit of obvious marketing hyperbole, but that it represents the recognition of a new order of intelligent machines among us. Rather than criticize the way we may be “amplifying its risks and retarding its benefits” by reading computation through the inapt lens of intelligence at all, he proposes that we should resist holding machine intelligence to the standards that have hitherto defined it for fear of making its recognition “too difficult.”

    The kernel of legitimacy in Bratton’s inquiry is its recognition that “intelligence is notoriously difficult to define and human intelligence simply can’t exhaust the possibilities.” To deny these modest reminders is to indulge in what he calls “the pretentious folklore” of anthropocentrism. I agree that anthropocentrism in our attributions of intelligence has facilitated great violence and exploitation in the world, denying the dignity and standing of Cetaceans and Great Apes, but has also facilitated racist, sexist, xenophobic travesties by denigrating humans as beastly and unintelligent objects at the disposal of “intelligent” masters. “Some philosophers write about the possible ethical ‘rights’ of A.I. as sentient entities, but,” Bratton is quick to insist, “that’s not my point here.” Given his insistence that the “advent of robust inhuman A.I.” will force a “reality-based” “disenchantment” to “abolish the false centrality and absolute specialness of human thought and species-being” which he blames in his concluding paragraph with providing “theological and legislative comfort to chattel slavery” it is not entirely clear to me that emancipating artificial aliens is not finally among the stakes that move his argument whatever his protestations to the contrary. But one can forgive him for not dwelling on such concerns: the denial of an intelligence and sensitivity provoking responsiveness and demanding responsibilities in us all to women, people of color, foreigners, children, the different, the suffering, nonhuman animals compels defensive and evasive circumlocutions that are simply not needed to deny intelligence and standing to an abacus or a desk lamp. It is one thing to warn of the anthropocentric fallacy but another to indulge in the pathetic fallacy.

    Bratton insists to the contrary that his primary concern is that anthropocentrism skews our assessment of real risks and benefits. “Unfortunately, the popular conception of A.I., at least as depicted in countless movies, games and books, still seems to assume that humanlike characteristics (anger, jealousy, confusion, avarice, pride, desire, not to mention cold alienation) are the most important ones to be on the lookout for.” And of course he is right. The champions of AI have been more than complicit in this popular conception, eager to attract attention and funds for their project among technoscientific illiterates drawn to such dramatic narratives. But we are distracted from the real risks of computation so long as we expect risks to arise from a machinic malevolence that has never been on offer nor even in the offing. Writes Bratton: “Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant. Worse than being seen as an enemy is not being seen at all.”

    But surely the inevitable question posed by Bratton’s disenchanting exposé at this point should be: Why, once we have set aside the pretentious folklore of machines with diabolical malevolence, do we not set aside as no less pretentiously folkloric the attribution of diabolical indifference to machines? Why, once we have set aside the delusive confusion of machine behavior with (actual or eventual) human intelligence, do we not set aside as no less delusive the confusion of machine behavior with intelligence altogether? There is no question that were a gigantic bulldozer with an incapacitated driver to swerve from a construction site onto a crowded city thoroughfare this would represent a considerable threat, but however tempting it might be in the fraught moment or reflective aftermath poetically to invest that bulldozer with either agency or intellect, it is clear that nothing would be gained in the practical comprehension of the threat it poses by so doing. It is no more helpful now in an epoch of Greenhouse storms than it was for pre-scientific storytellers to invest thunder and whirlwinds with intelligence. Although Bratton makes great play over the need to overcome folkloric anthropocentrism in our figuration of and deliberation over computation, mystifying agencies and mythical personages linger on in his accounting however much he insists on the alienness of “their” intelligence.

    Bratton warns us about the “infrastructural A.I.” of high-speed financial trading algorithms, Google and Amazon search algorithms, “smart” vehicles (and no doubt weaponized drones and autonomous weapons systems would count among these), and corporate-military profiling programs that oppress us with surveillance and harass us with targeted ads. I share all of these concerns, of course, but personally insist that our critical engagement with infrastructural coding is profoundly undermined when it is invested with insinuations of autonomous intelligence. In “Art in the Age of Mechanical Reproducibility,” Walter Benjamin pointed out that when philosophers talk about the historical force of art they do so with the prejudices of philosophers: they tend to write about those narrative and visual forms of art that might seem argumentative in allegorical and iconic forms that appear analogous to the concentrated modes of thought demanded by philosophy itself. Benjamin proposed that perhaps the more diffuse and distracted ways we are shaped in our assumptions and aspirations by the durable affordances and constraints of the made world of architecture and agriculture might turn out to drive history as much or even more than the pet artforms of philosophers do. Lawrence Lessig made much the same point when he declared at the turn of the millennium that “Code Is Law.”

    It is well known that special interests with rich patrons shape the legislative process and sometimes even explicitly craft legislation word for word in ways that benefit them to the cost and risk of majorities. It is hard to see how our assessment of this ongoing crime and danger would be helped and not hindered by pretending legislation is an autonomous force exhibiting an alien intelligence, rather than a constellation of practices, norms, laws, institutions, ritual and material artifice, the legacy of the historical play of intelligent actors and the site for the ongoing contention of intelligent actors here and now. To figure legislation as a beast or alien with a will of its own would amount to a fetishistic displacement of intelligence away from the actual actors actually responsible for the forms that legislation actually takes. It is easy to see why such a displacement is attractive: it profitably abets the abuses of majorities by minorities while it absolves majorities from conscious complicity in the terms of their own exploitation by laws made, after all, in our names. But while these consoling fantasies have an obvious allure this hardly justifies our endorsement of them.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that the collapse of global finance in 2008 represented the working of inscrutable artificial intelligences facilitating rapid transactions and supporting novel financial instruments of what was called by Long Boom digerati the “new economy.” I wrote:

    It is not computers and programs and autonomous techno-agents who are the protagonists of the still unfolding crime of predatory plutocratic wealth-concentration and anti-democratizing austerity. The villains of this bloodsoaked epic are the bankers and auditors and captured-regulators and neoliberal ministers who employed these programs and instruments for parochial gain and who then exonerated and rationalized and still enable their crimes. Our financial markets are not so complex we no longer understand them. In fact everybody knows exactly what is going on. Everybody understands everything. Fraudsters [are] engaged in very conventional, very recognizable, very straightforward but unprecedentedly massive acts of fraud and theft under the cover of lies.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that our discomfiture in the setting of ubiquitous algorithmic mediation results from an autonomous force to which human intentions are secondary considerations. I wrote:

    [W]hat imaginary scene is being conjured up in this exculpatory rhetoric in which inadvertent cruelty is ‘coming from code’ as opposed to coming from actual persons? Aren’t coders actual persons, for example? … [O]f course I know what [is] mean[t by the insistence…] that none of this was ‘a deliberate assault.’ But it occurs to me that it requires the least imaginable measure of thought on the part of those actually responsible for this code to recognize that the cruelty of [one user’s] confrontation with their algorithm was the inevitable at least occasional result for no small number of the human beings who use Facebook and who live lives that attest to suffering, defeat, humiliation, and loss as well as to parties and promotions and vacations… What if the conspicuousness of [this] experience of algorithmic cruelty indicates less an exceptional circumstance than the clarifying exposure of a more general failure, a more ubiquitous cruelty? … We all joke about the ridiculous substitutions performed by autocorrect functions, or the laughable recommendations that follow from the odd purchase of a book from Amazon or an outing from Groupon. We should joke, but don’t, when people treat a word cloud as an analysis of a speech or an essay. We don’t joke so much when a credit score substitutes for the judgment whether a citizen deserves the chance to become a homeowner or start a small business, or when a Big Data profile substitutes for the judgment whether a citizen should become a heat signature for a drone committing extrajudicial murder in all of our names. [An] experience of algorithmic cruelty [may be] extraordinary, but that does not mean it cannot also be a window onto an experience of algorithmic cruelty that is ordinary. The question whether we might still ‘opt out’ from the ordinary cruelty of algorithmic mediation is not a design question at all, but an urgent political one.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that so-called Killer Robots are a threat that must be engaged by resisting or banning “them” in their alterity rather than by assigning moral and criminal responsibility on those who code, manufacture, fund, and deploy them. I wrote:

    Well-meaning opponents of war atrocities and engines of war would do well to think how tech companies stand to benefit from military contracts for ‘smarter’ software and bleeding-edge gizmos when terrorized and technoscientifically illiterate majorities and public officials take SillyCon Valley’s warnings seriously about our ‘complacency’ in the face of truly autonomous weapons and artificial super-intelligence that do not exist. It is crucial that necessary regulation and even banning of dangerous ‘autonomous weapons’ proceeds in a way that does not abet the mis-attribution of agency, and hence accountability, to devices. Every ‘autonomous’ weapons system expresses and mediates decisions by responsible humans usually all too eager to disavow the blood on their hands. Every legitimate fear of ‘killer robots’ is best addressed by making their coders, designers, manufacturers, officials, and operators accountable for criminal and unethical tools and uses of tools… There simply is no such thing as a smart bomb. Every bomb is stupid. There is no such thing as an autonomous weapon. Every weapon is deployed. The only killer robots that actually exist are human beings waging and profiting from war.

    “Arguably,” argues Bratton, “the Anthropocene itself is due less to technology run amok than to the humanist legacy that understands the world as having been given for our needs and created in our image. We hear this in the words of thought leaders who evangelize the superiority of a world where machines are subservient to the needs and wishes of humanity… This is the sentiment — this philosophy of technology exactly — that is the basic algorithm of the Anthropocenic predicament, and consenting to it would also foreclose adequate encounters with A.I.” The Anthropocene in this formulation names the emergence of environmental or planetary consciousness, an emergence sometimes coupled to the global circulation of the image of the fragility and interdependence of the whole earth as seen by humans from outer space. It is the recognition that the world in which we evolved to flourish might be impacted by our collective actions in ways that threaten us all. Notice, by the way, that multiculture and historical struggle are figured as just another “algorithm” here.

    I do not agree that planetary catastrophe inevitably followed from the conception of the earth as a gift bestowed on us to sustain us; indeed this premise understood in terms of stewardship or commonwealth would go far in correcting and preventing such careless destruction, in my opinion. It is the false and facile (indeed infantile) conception of a finite world somehow equal to infinite human desires that has landed us and keeps us delusive ignoramuses lodged in this genocidal and suicidal predicament. Certainly I agree with Bratton that it would be wrong to attribute the waste and pollution and depletion of our common resources by extractive-industrial-consumer societies indifferent to ecosystemic limits to “technology run amok.” The problem with so saying is not that to do so disrespects “technology” — as presumably in his view no longer treating machines as properly “subservient to the needs and wishes of humanity” would more wholesomely respect “technology,” whatever that is supposed to mean — since of course technology does not exist in this general or abstract way to be respected or disrespected.

    The reality at hand is that humans are running amok in ways that are facilitated and mediated by certain technologies. What is demanded in this moment by our predicament is the clear-eyed assessment of the long-term costs, risks, and benefits of technoscientific interventions into finite ecosystems to the actual diversity of their stakeholders and the distribution of these costs, risks, and benefits in an equitable way. Quite a lot of unsustainable extractive and industrial production as well as mass consumption and waste would be rendered unprofitable and unappealing were its costs and risks widely recognized and equitably distributed. Such an understanding suggests that what is wanted is to insist on the culpability and situation of actually intelligent human actors, mediated and facilitated as they are in enormously complicated and demanding ways by technique and artifice. The last thing we need to do is invest technology-in-general or environmental-forces with alien intelligence or agency apart from ourselves.

    I am beginning to wonder whether the unavoidable and in many ways humbling recognition (unavoidable not least because of environmental catastrophe and global neoliberal precarization) that human agency emerges out of enormously complex and dynamic ensembles of interdependent/prostheticized actors gives rise to compensatory investments of some artifacts — especially digital networks, weapons of mass destruction, pandemic diseases, environmental forces — with the sovereign aspect of agency we no longer believe in for ourselves? It is strangely consoling to pretend our technologies in some fancied monolithic construal represent the rise of “alien intelligences,” even threatening ones, other than and apart from ourselves, not least because our own intelligence is an alienated one and prostheticized through and through. Consider the indispensability of pedagogical techniques of rote memorization, the metaphorization and narrativization of rhetoric in songs and stories and craft, the technique of the memory palace, the technologies of writing and reading, the articulation of metabolism and duration by timepieces, the shaping of both the body and its bearing by habit and by athletic training, the lifelong interplay of infrastructure and consciousness: all human intellect is already technique. All culture is prosthetic and all prostheses are culture.

    Bratton wants to narrate as a kind of progressive enlightenment the mystification he recommends that would invest computation with alien intelligence and agency while at once divesting intelligent human actors, coders, funders, users of computation of responsibility for the violations and abuses of other humans enabled and mediated by that computation. This investment with intelligence and divestment of responsibility he likens to the Copernican Revolution in which humans sustained the momentary humiliation of realizing that they were not the center of the universe but received in exchange the eventual compensation of incredible powers of prediction and control. One might wonder whether the exchange of the faith that humanity was the apple of God’s eye for a new technoscientific faith in which we aspired toward godlike powers ourselves was really so much a humiliation as the exchange of one megalomania for another. But what I want to recall by way of conclusion instead is that the trope of a Copernican humiliation of the intelligent human subject is already quite a familiar one:

    In his Introductory Lectures on Psychoanalysis Sigmund Freud notoriously proposed that

    In the course of centuries the naive self-love of men has had to submit to two major blows at the hands of science. The first was when they learnt that our earth was not the center of the universe but only a tiny fragment of a cosmic system of scarcely imaginable vastness. This is associated in our minds with the name of Copernicus… The second blow fell when biological research destroyed man’s supposedly privileged place in creation and proved his descent from the animal kingdom and his ineradicable animal nature. This revaluation has been accomplished in our own days by Darwin… though not without the most violent contemporary opposition. But human megalomania will have suffered its third and most wounding blow from the psychological research of the present time which seeks to prove to the ego that it is not even master in its own house, but must content itself with scanty information of what is going on unconsciously in the mind.

    However we may feel about psychoanalysis as a pseudo-scientific enterprise that did more therapeutic harm than good, Freud’s works considered instead as contributions to moral philosophy and cultural theory have few modern equals. The idea that human consciousness is split from the beginning as the very condition of its constitution, the creative if self-destructive result of an impulse of rational self-preservation beset by the overabundant irrationality of humanity and history, imposed a modesty incomparably more demanding than Bratton’s wan proposal in the same name. Indeed, to the extent that the irrational drives of the dynamic unconscious are often figured as a brute machinic automatism, one is tempted to suggest that Bratton’s modest proposal of alien artifactual intelligence is a fetishistic disavowal of the greater modesty demanded by the alienating recognition of the stratification of human intelligence by unconscious forces (and his moniker a symptomatic citation). What is striking about the language of psychoanalysis is the way it has been taken up to provide resources for imaginative empathy across the gulf of differences: whether in the extraordinary work of recent generations of feminist, queer, and postcolonial scholars re-orienting the project of the conspicuously sexist, heterosexist, cissexist, racist, imperialist, bourgeois thinker who was Freud to emancipatory ends, or in the stunning leaps in which Freud identified with neurotic others through psychoanalytic reading, going so far as to find in the paranoid system-building of the psychotic Dr. Schreber an exemplar of human science and civilization and a mirror in which he could see reflected both himself and psychoanalysis itself. Freud’s Copernican humiliation opened up new possibilities of responsiveness in difference out of which could be built urgently necessary responsibilities otherwise. I worry that Bratton’s Copernican modesty opens up new occasions for techno-fetishistic fables of history and disavowals of responsibility for its actual human protagonists.
    _____

    Dale Carrico is a member of the visiting faculty at the San Francisco Art Institute as well as a lecturer in the Department of Rhetoric at the University of California at Berkeley from which he received his PhD in 2005. His work focuses on the politics of science and technology, especially peer-to-peer formations and global development discourse and is informed by a commitment to democratic socialism (or social democracy, if that freaks you out less), environmental justice critique, and queer theory. He is a persistent critic of futurological discourses, especially on his Amor Mundi blog, on which an earlier version of this post first appeared.


  • Who Big Data Thinks We Are (When It Thinks We're Not Looking)

    Who Big Data Thinks We Are (When It Thinks We're Not Looking)

    a review of Christian Rudder, Dataclysm: Who We Are (When We Think No One’s Looking) (Crown, 2014)
    by Cathy O’Neil
    ~
    Here’s what I’ve spent the last couple of days doing: alternately reading Christian Rudder’s new book Dataclysm and proofreading a report by AAPOR which discusses the benefits, dangers, and ethics of using big data, which is mostly “found” data originally meant for some other purpose, as a replacement for public surveys, with their carefully constructed data collection processes and informed consent. The AAPOR folk have asked me to provide tangible examples of the dangers of using big data to infer things about public opinion, and I am tempted to simply ask them all to read Dataclysm as exhibit A.

    Rudder is a co-founder of OKCupid, an online dating site. His book mainly pertains to how people search for love and sex online, and how they represent themselves in their profiles.

    Here’s something that I will mention for context on his data explorations: Rudder likes to crudely provoke, as he displayed when he wrote this recent post explaining how OKCupid experiments on users. He enjoys playing the part of the somewhat creepy detective, peering into what OKCupid users thought was a somewhat private place to prepare themselves for the dating world. It’s the online equivalent of a video camera in a changing booth at a department store, which he defended not-so-subtly on a recent NPR show called On The Media, and which was written up here.

    I won’t dwell on that aspect of the story because I think it’s a good and timely conversation, and I’m glad the public is finally waking up to what I’ve known for years is going on. I’m actually happy Rudder is so nonchalant about it because there’s no pretense.

    Even so, I’m less happy with his actual data work. Let me tell you why I say that with a few examples.

    Who Are OKCupid Users?

    I spent a lot of time with my students this summer saying that a standalone number wouldn’t be interesting, that you have to compare that number to some baseline that people can understand. So if I told you how many black kids have been stopped and frisked this year in NYC, I’d also need to tell you how many black kids live in NYC for you to get an idea of the scope of the issue. It’s a basic fact about data analysis and reporting.

    When you’re dealing with populations on dating sites and you want to conclude things about the larger culture, the relevant “baseline comparison” is how well the members of the dating site represent the population as a whole. Rudder doesn’t do this. For the first few chapters he just says there are lots of OKCupid users, and then later on, after he’s made a few spectacularly broad statements, on page 104 he compares the users of OKCupid to wider internet users, but not to the general population.

    It’s an inappropriate baseline, made too late. I’m not sure about you, but I don’t have a keen sense of the population of internet users. I’m pretty sure very young kids and old people are not well represented, but that’s about it. My students would have known to compare a population to the census. It needs to happen.
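The baseline point can be made concrete in a few lines. This is a minimal sketch with invented numbers (loosely echoing the stop-and-frisk example above, not real statistics): the same raw count only acquires scope once it is divided by the right baseline population.

```python
# Hypothetical numbers, invented purely for illustration: a standalone
# count says little; dividing by the relevant baseline gives it scope.
stops = 100_000             # e.g., recorded stops of kids in some group
group_population = 400_000  # how many kids in that group live in the city

rate = stops / group_population
print(f"{stops:,} stops = {rate:.0%} of the group")
```

The same logic applies to dating-site data: claims about “the larger culture” need the site’s membership compared against census demographics, not against an undefined pool of internet users.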

    How Do You Collect Your Data?

    Let me back up to the very beginning of the book, where Rudder startles us by showing that the men women rate “most attractive” are about their own age, whereas the women men rate “most attractive” are consistently 20 years old, no matter how old the men are.

    Actually, I am projecting. Rudder never tells us exactly what the rating is, how it’s worded, and how the profiles are presented to the different groups. And that’s a problem, which he ignores completely until much later in the book, when he mentions that how survey questions are worded can have a profound effect on how people respond, but his target is someone else’s survey, not his OKCupid environment.

    Words matter, and they matter differently for men and women. So for example, if there were a button for “eye candy,” we might expect women to choose more young men. If my guess is correct, and the term in use is “most attractive”, then for men it might well trigger a sexual concept whereas for women it might trigger a different social construct; indeed I would assume it does.

    Since this is a dating site, not a porn site, we are not filtering for purely visual appeal; we are looking for relationships. We are thinking beyond what turns us on physically and asking ourselves: who would we want to spend time with? Who would our family like us to be with? Who would make us attractive to ourselves? Those are different questions, and they provoke different answers. They are also culturally interesting questions, which Rudder never explores. A lost opportunity.

    Next, how does the recommendation engine work? I can well imagine that, once you’ve rated Profile A high, there is an algorithm that finds Profile B such that “people who liked Profile A also liked Profile B”. If so, then there’s yet another reason to worry that such results as Rudder described are produced in part as a result of the feedback loop engendered by the recommendation engine. But he doesn’t explain how his data is collected, how it is prompted, or the exact words that are used.
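    Rudder never documents the engine, so the co-like scheme I’m guessing at (“people who liked Profile A also liked Profile B”) is purely hypothetical, but it is easy to sketch, and the sketch makes the feedback loop visible: whatever gets recommended gets seen more, and whatever gets seen more gets rated more.

```python
from collections import Counter

# Hypothetical sketch of a "people who liked A also liked B" recommender.
# Nothing here reflects OKCupid's actual, undisclosed implementation.
likes = {
    "user1": {"A", "B"},
    "user2": {"A", "B", "C"},
    "user3": {"A", "C"},
}

def recommend(profile, likes):
    """Profiles most often co-liked with `profile`, best first."""
    counts = Counter()
    for liked in likes.values():
        if profile in liked:
            counts.update(liked - {profile})
    return [p for p, _ in counts.most_common()]

# Recommended profiles get shown more often, so they accumulate more
# likes, which makes them more recommended: the ratings data is partly
# an artifact of the engine, not a clean measurement of preference.
print(recommend("A", likes))
```

    If something like this is running, the “most attractive” results are partly a product of what the engine chose to show, which is exactly the worry about the feedback loop.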

    Here’s a clue that Rudder is confused by his own facile interpretations: men and women both state that they are looking for relationships with people around their own age or slightly younger, and both end up messaging people slightly younger than they are, but not many years younger. Forty-year-old men do not message twenty-year-old women.

    Is this sad sexual frustration? Is this, in Rudder’s words, the difference between what they claim they want and what they really want behind closed doors? Not at all. This is more likely the difference between how we live our fantasies and how we actually realistically see our future.

    Need to Control for Population

    Here’s another frustrating bit from the book: Rudder talks about how hard it is for older people to get a date, but he doesn’t correct for population. And since he never tells us how many OKCupid users are older, nor compares his users to the census, I can’t infer it either.

    Here’s a graph from Rudder’s book showing the age of men who respond to women’s profiles of various ages:

    [Figure: dataclysm chart 1]

    We’re meant to be impressed with Rudder’s line, “for every 100 men interested in that twenty year old, there are only 9 looking for someone thirty years older.” But here’s the thing: maybe there are 20 times as many 20-year-olds as 50-year-olds on the site? In which case, yay for the 50-year-old chicks! After all, those histograms look pretty healthy in shape, and they might be differently sized simply because the population itself is drastically different at different ages.
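    The correction being asked for is a one-liner: divide each age bucket’s raw interest count by the number of users of that age. With invented numbers (Rudder publishes neither figure), the “100 versus 9” gap can vanish entirely on a per-capita basis:

```python
# Invented numbers, purely to illustrate the population correction
# the chart omits. Raw counts of interested men per woman's age:
raw_interest = {20: 100, 50: 9}

# Hypothetical counts of women of each age on the site:
population = {20: 2000, 50: 180}

# Interest per woman of that age:
per_capita = {age: raw_interest[age] / population[age] for age in raw_interest}
print(per_capita)  # identical rates: the raw gap was a population artifact
```

    Under these made-up numbers both cohorts draw exactly the same interest per woman; without the population denominators, the raw chart can’t distinguish this scenario from real age discrimination.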

    Confounding

    One of the worst statistical mistakes in the book is his treatment of the experiment in turning off pictures. Rudder ignores the concept of confounders altogether here, even though he is, miraculously, aware of it in the next chapter, on race.

    To be more precise, Rudder discusses the experiment in which OKCupid turned off pictures. Most people left when this happened, but certain people did not:

    [Figure: dataclysm chart 2]

    Some of the people who stayed on went on a “blind date.” Those people, whom Rudder calls the “intrepid few,” had a good time no matter how unattractive their dates were deemed to be by OKCupid’s attractiveness scoring. His conclusion: people preselect for attractiveness, which is actually unimportant to them.

    But here’s the thing, that’s only true for people who were willing to go on blind dates. What he’s done is select for people who are not superficial about looks, and then collect data that suggests they are not superficial about looks. That doesn’t mean that OKCupid users as a whole are not superficial about looks. The ones that are just got the hell out when the pictures went dark.
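    This is textbook selection bias, and a toy simulation makes it plain: condition on the users who stay when the pictures vanish, and you will “discover” that looks don’t matter, to them. The model below is entirely made up for illustration:

```python
import random

random.seed(0)

# Toy model: each user has a "superficiality" score in [0, 1].
# Assume (purely for illustration) that when pictures are turned off,
# a user leaves with probability equal to their superficiality.
users = [random.random() for _ in range(100_000)]
stayers = [s for s in users if random.random() > s]

avg_all = sum(users) / len(users)
avg_stayers = sum(stayers) / len(stayers)

# The "intrepid few" look far less superficial than the user base as a
# whole, not because looks stopped mattering, but because we selected
# the sample on exactly that trait.
print(round(avg_all, 2), round(avg_stayers, 2))
```

    The stayers’ average superficiality comes out well below the overall average, which is the whole point: measuring only the stayers tells you about the stayers, not about OKCupid users in general.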

    Race

    This brings me to the most interesting part of the book, where Rudder explores race. Again, it ends up being too blunt by far.

    Here’s the thing. Race is a big deal in this country, and racism is a heavy criticism to fire at people, so you need to be careful, and that’s a good thing, because it’s important. The way Rudder throws it around is careless, and he risks rendering the term meaningless by not having a careful discussion. The frustrating part is that I think he actually has the data for a very good discussion; he just doesn’t make the case as it’s written.

    Rudder pulls together stats on how men of all races rate women of all races on an attractiveness scale of 1-5. They show that non-black men find women of their own race attractive and, in general, find black women less attractive. Interesting, especially when you immediately follow it up with similar stats from other U.S. dating sites and, most importantly, with the fact that outside the U.S. we do not see this pattern. Unfortunately that crucial fact is buried at the end of the chapter, and instead we get this embarrassing quote right after the opening stats:

    And an unintentionally hilarious 84 percent of users answered this match question:

    Would you consider dating someone who has vocalized a strong negative bias toward a certain race of people?

    in the absolute negative (choosing “No” over “Yes” and “It depends”). In light of the previous data, that means 84 percent of people on OKCupid would not consider dating someone on OKCupid.

    Here Rudder just completely loses me. Am I “vocalizing” a strong negative bias towards black women if I am a white man who finds white women and Asian women hot?

    Especially if you consider that, as consumers of social platforms and sites like OKCupid, we are trained to rate everything we come across so that we’ll ultimately get better offerings, it is a step too far for the detective on the other side of the camera to turn around and point fingers at us for doing what we’re told. This sentence plunges Rudder’s narrative deep into creepy and provocative territory, and he never fully returns, nor does he seem to want to. Rudder seems to confuse provocation with thoughtfulness.

    This is, again, a shame. The issues of what we are attracted to, what we can imagine doing, how we imagine that will look to our wider audience, and how our culture informs those imaginings are all in play here, and a careful conversation could have drawn them out in a non-accusatory and much more useful way.


    _____

    Cathy O’Neil is a data scientist and mathematician with experience in academia and the online ad and finance industries. She is one of the most prominent and outspoken women working in data science today, and was one of the guiding voices behind Occupy Finance, a book produced by the Occupy Wall Street Alt Banking group. She is the author of “On Being a Data Skeptic” (Amazon Kindle, 2013), and co-author with Rachel Schutt of Doing Data Science: Straight Talk from the Frontline (O’Reilly, 2013). Her Weapons of Math Destruction is forthcoming from Random House. She appears on the weekly Slate Money podcast hosted by Felix Salmon. She maintains the widely-read mathbabe blog, on which this review first appeared.

    Back to the essay