boundary 2

Tag: Frank Pasquale

  • Sue Curry Jansen and Jeff Pooley — Neither Artificial nor Intelligent (review of Crawford, Atlas of AI, and Pasquale, New Laws of Robotics)

    a review of Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale UP, 2021) and Frank Pasquale, New Laws of Robotics: Defending Human Expertise in the Age of AI (Harvard UP, 2020)

    by Sue Curry Jansen and Jeff Pooley

    Artificial intelligence (AI) is a Faustian dream. Conceived in the future tense, AI drives its most ardent visionaries to seek an enhanced form of intelligence that far surpasses the capacities of human brains. AI promises to transcend the messiness of embodiment, the biases of human cognition, and the limitations of mortality. Entering its eighth decade, AI remains largely a science fiction, despite recent advances in machine learning. Yet it has captured the public imagination since its inception, and acquired potent ideological cachet. Robots have become AI’s humanoid faces, as well as icons of popular culture: cast as helpful companions or agents of the apocalypse.

    The transcendent vision of artificial intelligence has educated, informed, and inspired generations of scientists, military strategists, policy makers, entrepreneurs, writers, artists, filmmakers, and marketers. However, apologists have also frequently invoked AI’s authority to mystify, intimidate, and silence resistance to its vision, teleology, and deployments. Where, for example, the threat of automation once triggered labor activism, rallying opposition to an esoteric branch of computer science research that few non-specialists understand is a rhetorical non-starter. So is campaigning for alternatives to smart apps, homes, cars, cities, borders, and bombs.

    Two remarkable new books, Kate Crawford’s Atlas of AI and Frank Pasquale’s New Laws of Robotics: Defending Human Expertise in the Age of AI, provide provocative critical assessments of artificial intelligence in clear, accessible, and engaging prose. Both books have titles that could discourage novices, but they are, in fact, excellent primers for non-specialists on what is at stake in the current ascendancy of AI science and ideology—especially if read in tandem.

    Crawford’s thesis—“AI is neither artificial nor intelligent”—cuts through the sci-fi hype to radically reground AI power-knowledge in material reality. Beginning with its environmental impact on planet Earth, her narrative proceeds vertically to demystify AI’s ways of seeing—its epistemology, methodology, and applications—and then to examine the roles of labor, ideology, the state, and power in the AI enterprise. She concludes with a coda on space and the astronautical illusions of digital billionaires. Pasquale takes a more horizontal approach, surveying AI in health care, education, media, law, policy, economics, war, and other domains. His attention is on the practical present—on the ethical dilemmas posed by current and near-future deployments of AI. His through line is that human judgment, backed by policy, should steer AI toward human ends.

    Despite these differences, Crawford and Pasquale converge on several critical points. First, they agree that AI models are skewed by economic and engineering values to the exclusion of other forms of knowledge and wisdom. Second, both endorse greater transparency and accountability in artificial intelligence design and practices. Third, they agree that AI datasets are skewed: Crawford focuses on how the use of natural language datasets, no matter how large, reproduces the biases of the populations they are drawn from, while Pasquale attends to designs that promote addictive engagement to optimize ad revenue. Fourth, both cite the residual effects of AI’s military origins on its logic, values, and rhetoric. Fifth, Crawford and Pasquale both recognize that AI’s futurist hype tends to obscure the real-world political and economic interests behind the screens—the market fundamentalism that models the world as an assembly line. Sixth, both emphasize the embodiment of intelligence, which encompasses tacit and muscle knowledge that cannot be fully extracted and abstracted by artificial intelligence modelers. Seventh, they both view artificial intelligence as a form of data-driven behaviorism, in the stimulus-response sense. Eighth, they acknowledge that AI and economic experts claim priority for their own views—a position they both reject.

    Crawford literally travels the world to map the topologies of computation, beginning in the lithium mines of Nevada, on to Silicon Valley, Indonesia, Malaysia, China, and Mongolia, and ending under personal surveillance outside of Jeff Bezos’ Blue Origin suborbital launch facility in West Texas. Demonstrating that AI is anything but artificial, she documents the physical toll it extracts from the environment. Contra the industry’s earth-friendly PR and marketing, the myth of clean tech and metaphors like ‘the Cloud,’ Crawford points out that AI systems are built upon consuming finite resources that required billions of years to take form: “we are extracting Earth’s geological history to serve a split second of contemporary technological time, building devices like the Amazon Echo and iPhone that are often designed to last only a few years.” And the Cloud itself leaves behind a gigantic carbon footprint. AI data mining is not only dependent on human miners of rare minerals, but also on human labor functioning within a “registry of power” that is unequal and exploitive— where “many valuable automated systems feature a combination of underpaid digital piece workers and customers taking on unpaid tasks to make systems function,” all the while under constant surveillance.

    While there is a deskilling of human labor, there are also what Crawford calls Potemkin AI systems, which only work because of hidden human labor—Bezos himself calls such systems “artificial artificial intelligence.” AI often doesn’t work as well as the humans it replaces, as, for example, in automated telephone consumer service lines. But Crawford reminds us that AI systems scale up: customers ‘on hold’ replace legions of customer service workers in large organizations. Profits trump service. Her chapters on data and classification strip away the scientistic mystification of AI and Big Data. AI’s methodology is simply data at scale, and it is data that is biased at inception because it is collected indiscriminately, as size, not substance, counts. A dataset extracted and abstracted from a society steeped in systemic racism will, for example, produce racist results. The increasing convergence of state and corporate surveillance not only undermines individual privacy, but also makes state actors reliant on technologies that they cannot fully understand as machine learning transforms them. In effect, Crawford argues, states have made a “devil’s bargain” with tech companies that they cannot control. These technologies, developed for command-and-control military and policing functions, increasingly erode the dialogic and dialectic nature of democratic commons.

    AI began as a highly subsidized public project in the early days of the Cold War. Crawford demonstrates, however, that it has been “relentlessly privatized to produce enormous financial gains for the tiny minority at the top of the extraction pyramid.” In collaboration with Alex Campolo, Crawford has described AI’s epistemological flattening of complexity as “enchanted determinism,” whereby “AI systems are seen as enchanted, beyond the known world, yet deterministic in that they discover patterns that can be applied with predictive certainty to everyday life.”[1] In some deep learning systems, even the engineers who create them cannot interpret how they work. Yet, they cannot dismiss them either. In such cases, “enchanted determinism acquires an almost theological quality,” which tends to place it beyond the critique of both technological utopians and dystopians.

    Pasquale, for his part, examines the ethics of AI as currently deployed and often circumvented in several contexts: medicine, education, media, law, military, and the political economy of automation, in each case in relation to human wisdom. His basic premise is that “we now have the means to channel technologies of automation, rather than being captured or transformed by them.” Like Crawford, then, he recommends exercising a resistant form of agency. Pasquale’s focus is on robots as automated systems. His rhetorical point of departure is a critique and revision of Isaac Asimov’s highly influential “laws of robotics,” developed in a 1942 short story—more than a decade before AI was officially launched in 1956. Because the world and law-making are far more complex than a short story, Pasquale finds Asimov’s laws ambiguous and difficult to apply, and proposes four new ones, which become the basis of his arguments throughout the book. They are:

    1. Robotic systems and AI should complement professionals, not replace them.
    2. Robotic systems and AI should not counterfeit humanity.
    3. Robotic systems and AI should not intensify zero-sum arms races.
    4. Robotic systems and AI must always indicate the identity of their creator(s), controller(s), and owner(s).

    ‘Laws’ entail regulation, which Pasquale endorses to promote four corresponding values: complementarity, authenticity, cooperation, and attribution. The four laws’ deployment depends on a critical distinction that Pasquale draws between technologies that replace people and those that assist us in doing our jobs better. Classic definitions of AI describe the quest to create computers that “can sense, think, and act like humans.” Pasquale endorses an “Intelligence Augmentation” (IA) alternative. This is a crucial shift in emphasis; it is Pasquale’s own version of AI refusal.

    He acknowledges that, in the current economy, “there are economic laws that tilt the scale toward AI and against IA.” In his view, deployment of robots may, however, offer an opportunity for humanistic intervention in AI’s hegemony, because the presence of robots, unlike phones, tablets, or sensors, is physically intrusive. They are there for a purpose, which we may accept or reject at our peril, but find hard to ignore. Robots are being developed to enter fields that are already highly regulated, which offers an opportunity to shape their use in ways that conform to established legal standards of privacy and consumer protection. Pasquale is an advocate for building humane (IA) values within the technology, before robots are released into the wild.

    In each of his topical chapters, he explains how robots and other AI systems designed to advance the values of complementarity, authenticity, cooperation, and attribution might enhance human existence and community. Some chapters stand out as particularly insightful, including those on “automated media,” human judgment, and the political economy of automation. One of Pasquale’s chapters addresses important terrain that Crawford does not consider: medicine. Given past abuses by medical researchers in exploiting and/or ignoring race and gender, the field may be especially sensitive and receptive to an IA intervention, despite the formidable economic forces stacked against it. Pasquale shows, for example, how IA has improved diagnostics in dermatology through pattern recognition, providing insight into what distinguishes malignant from benign moles.

    In our view, Pasquale’s closing chapter endorsing human wisdom, as opposed to AI, displays multiple examples of the former. But some of their impact is blunted by more diffuse discussions of literature and art, valuable though those practices may be in counter-balancing the instrumental values of economics and engineering. Nonetheless, Pasquale’s argument is an eloquent tribute to a “human form of life that is fragile, embodied in mortal flesh, time-delimited, and irreproducible in silico.”

    The two books, read together, amount to a critique of AI ideology. Pasquale and Crawford write about the stuff that phrases like “artificial intelligence” and “machine learning” refer to, but their main concern is the mystique surrounding the words themselves. Crawford is especially articulate on this theme. She shows that, as an idea, AI is self-warranting. Floating above the undersea cables and rare-earth mines—ethereal and cloud-like—the discourse makes its compelling case for the future. Her work is to cut through the cloud cover, to reveal the mines and cables.

    So the idea of AI justifies even as it obscures. What Crawford and Pasquale draw out is that AI is a way of seeing the world—a lay epistemology. When we see the world through the lens of AI, we see extraction-ready data. We see countable aggregates everywhere we look. We’re always peering ahead, predicting the future with machinic probabilism. It’s the view from Palo Alto that feels like a god’s eye view. From up there, the continents look patterned and classification-ready. Earth-bound disorder is flattened into clear signal. What AI sees, in Crawford’s phrase, is a “Linnaean order of machine-readable tables.” It is, in Pasquale’s view, an engineering mindset that prizes efficiency over human judgment.

    At the same time, as both authors show, the AI lens refracts the Cold War national security state that underwrote the technology for decades. Seeing like an AI means locating targets, assets, and anomalies. Crawford calls it a “covert philosophy of en masse infrastructural command and control,” a martial worldview etched in code.

    As Kenneth Burke observed, every way of seeing is also a way of not seeing. What AI can’t see is also its raw material: human complexity and difference. There is, in AI, a logic of commensurability—a reduction of messy and power-laden social life into “computable sameness.” So there is a connection, as both Crawford and Pasquale observe, between extraction and abstraction. The activity of everyday life is extracted into datasets that, in their bloodless tabulation, abstract away their origins. Like Marx’s workers, we are then confronted by the alienated product of our “labor”—interviewed or consoled or policed by AIs that we helped build.

    Crawford and Pasquale’s excellent books offer sharp and complementary critiques of the AI fog. Where they differ is in their calls to action. Pasquale, in line with his mezzo-level focus on specific domains like education, is the reformist. His aim is to persuade a policy community that he’s part of—to clear space between do-nothing optimists and fatalist doom-sayers. At core he hopes to use law and expertise to rein in AI and robotics—with the aim of deploying AI much more conscientiously, under human control and for human ends.

    Crawford is more radical. She sees AI as a machine for boosting the power of the already powerful. She is skeptical of the movement for AI “ethics,” as insufficient at best and veering toward exculpatory window-dressing. Atlas of AI ends with a call for a “renewed politics of refusal,” predicated on a just and solidaristic vision of the future.

    It would be easy to exaggerate Crawford and Pasquale’s differences, which reflect their projects’ scope and intended audience more than any disagreement of substance. Their shared call is to see AI for what it is. Left to follow its current course, the ideology of AI will reinforce the bars on the “iron cage” that sociologist Max Weber foresaw a century ago: incarcerating us in systems of power dedicated to efficiency, calculation, and control.

    _____

    Sue Curry Jansen is Professor of Media & Communication at Muhlenberg College, in Allentown, PA. Jeff Pooley is Professor of Media & Communication at Muhlenberg, and director of mediastudies.press, a scholar-led publisher. Their co-authored essay on Shoshana Zuboff’s Surveillance Capitalism—a review of the book’s reviews—recently appeared in New Media & Society.

    _____

    Notes

    [1] Crawford acknowledges the collaboration with Campolo, her research assistant, in developing this concept and the chapter on affect, generally.

  • Chris Gilliard and Hugh Culik — The New Pythagoreans

    Chris Gilliard and Hugh Culik

    A student’s initiation into mathematics routinely includes an encounter with the Pythagorean Theorem, a simple statement that describes the relationship between the hypotenuse and sides of a right triangle: the sum of the squares of the sides is equal to the square of the hypotenuse, i.e., A² + B² = C². The statement and its companion figure of a generic right triangle are offered as an interchangeable, seamless flow between geometric “things” and numbers (Kline 1980, 11). Among all the available theorems that might be offered as emblematic of mathematics, this one is held out as illustrative of a larger claim about mathematics and the Real. This use suggests that it is what W. J. T. Mitchell would call a “hypericon,” a visual paradigm that doesn’t “merely serve as [an] illustration to theory; [it] picture[s] theory” (1995, 49). Understood in this sense, the Pythagorean Theorem asserts a central belief of Western culture: that mathematics is the voice of an extra-human realm, a realm of fundamental, unchanging truth apart from human experience, culture, or biology. It is understood as more essential than the world and as prior to it. Mathematics becomes an outlier among representational systems because numbers are claimed to be “ideal forms necessarily prior to the material ‘instances’ and ‘examples’ that are supposed to illustrate them and provide their content” (Rotman 2000, 147).[1] The dynamic flow between the figure of the right triangle and the formula transforms mathematical language into something akin to Christian concepts of a prelapsarian language, a “nomenclature of essences, in which word would have reflected thing with perfect accuracy” (Eagle 2007, 184). As the Pythagoreans styled it, the world is number (Guthrie 1962, 256). The image schools the child into the culture’s uncritical faith in the rhetoric of numbers, a sort of everyman’s version of the Pythagorean vision. Whatever the general belief in this notion, the nature of mathematical representations has been a central problematic of mathematics that appears throughout its history. The difference between the historical significance of this problematic and its current manifestation in the rhetoric of “Big Data” illustrates an important cultural anxiety.

    Contemporary culture uses the Pythagorean Theorem’s image and formula as a hypericon that not only obscures problematic assumptions about the consistency and completeness of mathematics, but which also misrepresents the consistency and completeness of the material-world relationships that mathematics is used to describe.[2] This rhetoric of certainty, consistency, and completeness continues to infect contemporary political and ideological claims. For example, “Big Data” enthusiasts – venture capitalists, politicians, financiers, education reformers, policing strategists, et al. – often invoke a neo-Pythagorean worldview to validate their claims, claims that rest on the interplay of technology, analysis, and mythology (Boyd and Crawford 2012, 663). What is a highly productive problematic in the 2,500-year history of mathematics disappears into naïve assertions about the inherent “truth” of the algorithmic outputs of mathematically based technologies. When corporate behemoths like Pearson and Knewton (makers of an adaptive learning platform) participate in events such as the Department of Education’s 2012 “Datapalooza,” the claims become totalizing. Knewton’s CEO, Jose Ferreira, asserts, in a crescendo of claims, that “Knewton gets 5-10 million actionable data points per student per day”; and that tagging content “unlocks data.” In his terms, “work cascades out data” that is then subject to the various models the corporation uses to predict and prescribe the future. His claims of descriptive completeness are correct, he asserts, because “everything in education is correlated to everything else” (November 2012). The narrative of Ferreira’s claims is couched in fluid equivalences of data points, mathematical models, and a knowable future. Data become a metonym for not only the real student, but for the nature of learning and human cognition. In a sort of secularized predestination, the future’s origin in perfectly representational numbers produces perfect predictions of students’ performance. Whatever the scale of the investment dollars behind these New Pythagoreans, such claims lose their patina of objective certainty when placed in the history of the West’s struggle with mathematized claims about a putative “real.” For them, predictions are not the outcomes of processes; rather, predictions are revelations of a deterministic reality.[3]

    A recent claim for a facial-recognition algorithm that identifies criminals normalizes its claims by simultaneously asserting and denying that “in all cultures and all periods of recorded human history, [there is] the belief that the face alone suffices to reveal innate traits of a person” (Wu and Zhang 2016, 1). The authors invoke the Greeks:

    Aristotle in his famous work Prior Analytics asserted, ‘It is possible to infer character from features, if it is granted that the body and the soul are changed together by the natural affections’ (1)

    The authors then remind readers that “the same question has captivated professionals (e.g., psychologists, sociologists, criminologists) and amateurs alike, across all cultures, and for as long as there are notions of law and crime. Intuitive speculations are abundant both in writing . . . and folklore.” Their work seeks to demonstrate that the question yields to a mathematical model, a model that is specifically a non-human intelligence: “In this section, we try to answer the question in the most mechanical and scientific way allowed by the available tools and data. The approach is to let a machine learning method explore the data and reveal the most discriminating facial features that tell apart criminals and non-criminals” (6). The rhetoric solves the problem by asserting an unchanging phenomenon – the criminal face – and by invoking a mathematics that operates via machine learning. Problematic crimes such as “DWB” (driving while black) disappear along with history and social context.

    Such claims rest on confused and contradictory notions. For the Pythagoreans, mathematics was not a representational system. It was the real, a reality prior to human experience. This claim underlies the authority of mathematics in the West. But simultaneously, it effectively operates as a response to the world, i.e., it is a re-presentation. As re-presentational, it becomes another language, and like other languages, it is founded on bias, exclusions, and incompleteness. These two notions of mathematics are resolved by seeing the representation as more “real” than the multiply determined events it re-presents. Nonetheless, once we say it re-presents the real, it becomes just another sign system that comes after the real. Often, bouncing back and forth between its extra-human status and its representational function obscures the places where representation fails or becomes an approximation. To data fetishists, “data” has a status analogous to that of “number” in the Pythagoreans’ world. For them, reality is embedded in a quasi-mathematical system of counting, measuring, and tagging. But the ideological underpinnings, pedagogical assumptions, and political purposes of the tagging go unremarked; to do so would problematize the representational claims. Because the world is number, coders are removed from the burden of history and from the responsibility to examine the social context that both creates and uses their work.

    The confluence of corporate and political forces validates itself through mathematical imagery, animated graphics, and the like. Terms such as “data-driven” and “evidence-based” grant the rhetoric of numbers a power that ignores its problematic assumptions. There is a pervasive refusal to recognize that data are artifacts of the descriptive categories imposed on the world. But “Big Data” goes further; the term is used in ways that perpetuate the antique notion of “number” by invoking numbers as distillations of certainty and a knowable universe. “Number” becomes decontextualized and stripped of its historical, social, and psychological origins. Because the claims of Big Data embed residual notions about the re-presentational power of numbers, and about mathematical completeness and consistency, they speak to such deeply embedded beliefs about mathematics, the most fundamental of which is the Pythagorean claim that the world is number. The point is not to argue whether mathematics is formal, referential, or psychological; rather, it is to place contemporary claims about “Big Data” in historical and cultural contexts where such issues are problematized. The claims of Big Data speak through a language whose power rests on longstanding notions of mathematics; however, these notions lose some of their power when placed in the context of mathematical invention (Rotman 2000, 4-7).

    “Big Data” represents a point of convergence for residual mathematical beliefs, beliefs that obscure cultural frameworks and thus interfere with critique. For example, predictive policing tools are claimed to produce neutral, descriptive acts using machine intelligence. Berk asserts that “if you let the computer just snoop around in the dataset, it finds things that are unexpected by existing theory and works really substantially well to help forecast” (Berk 2011). In this view, Big Data – the numerical real – can be queried to produce knowledge that is not driven by any theoretical or ideological interest. Precisely because the world is presumed to be mathematical, the political, economic, and cultural frameworks of its operation can become the responsibility of the algorithm’s users. To this version of a mathematized real, there is no inherently ethical algorithmic action prior to the use of its output. Thus, the operation of the algorithm is doubly separated from its social contexts. First, the mathematics themselves are conceived as autonomous embodiments of a reality independent of the human; second, the effects of the algorithm – its predictions – are apart from values, beliefs, and needs that create the algorithm. The specific limits of historical and social context do not mathematically matter; the limits are determined by the values and beliefs of the algorithm’s users. The problematics of mathematizing the world are passed off to its customers. Boyd and Crawford identify three interacting phenomena that create the notion of Big Data: technology, analysis, and mythology (2012, 663). The mythological element embodies both dystopian and utopian narratives, and thus shapes how we categorize reality. O’Neil notes that “these models are constructed not just from data but from the choices we make about which data to pay attention to – and which to leave out. Those choices are not just about logistics, profits, and efficiency. They are fundamentally moral” (2016, 218). On one hand, the predictive value depends on the moral, ethical, and political values of the user, a non-mathematical question. On the other hand, this division between the model and its application carves out a special arena where the New Pythagoreans claim that it operates without having to recognize social or historical contexts.

    Whatever their commitment to number, the Pythagoreans were keenly aware that their system was vulnerable to discoveries that problematized their basic claim that the world is number. And they protected their beliefs through secrecy and occasionally through violence. Like the proprietary algorithms of contemporary corporations, their work was reserved for a circle of adepts/owners. First among their secrets was the keen understanding that an unnamable point on the number line would represent a rupture in the relationship of mathematics and world. If that relationship failed, with it would go their basis for belief in a knowable world. Their claims arose from within the concrete practices of Greek mathematics. For example, the Greeks portrayed numbers by a series of dots called Monads. The complex ratios used to describe geometric figures were understood to generate the world, and numbers were visualized in arrangements of stones (calculi). A 2 x 2 arrangement of stones had the form of a square, hence the term “square numbers.” Thus, it was a foundational claim that any point or quantity (because monads were conceived as material objects) has a corresponding number. Line segments, circumferences, and all the rest had to correspond to what we still call the “rational numbers”: 1, 2, 3 . . . and their ratios. Thus, the Pythagoreans’ great claim – that the world is number – was vulnerable to the discovery of a point on the number line that could not be named as the ratio of integers.

    Unfortunately for their claim, such numbers are common, and the great irony of the Pythagorean Theorem lies in the fact that it routinely generates numbers that are not ratios of integers. For example, a right triangle with sides one unit long has a hypotenuse √2 units long (1² + 1² = C², i.e., 2 = C², i.e., C = √2). Numbers such as √2 contradict the mathematical aspiration toward a completely representational system because they cannot be expressed as a ratio of integers, and hence their status as what are called “ir-rational” numbers.[4] A relatively simple proof demonstrates that they are neither odd nor even; these numbers exist in what is called a “surd” relationship to the integers, that is, they are silent – the meaning of “surd” – about each other. They literally cannot “speak” to each other. To the Pythagoreans, this appeared as a discontinuity in their naming system, a gap that might be the mark of a world beyond the generative power of number. Such numbers are, in fact, a new order of naming precipitated by the limited representational power of the prior naming system based on the rational numbers. But for the Pythagoreans, to look upon these numbers was to look upon the void, to discover that the world had no intrinsic order. Irrational numbers disrupted the Pythagorean project of mathematizing reality. This deeply religious impulse toward order underlies the aspiration that motivates the bizarre and desperate terminologies of contemporary data fetishists: “data-driven,” “evidence-based,” and even “Big Data,” which is usually capitalized to show the reification of number it desires.
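
    The parity argument alluded to here can be stated compactly; what follows is a sketch of the standard modern proof, not the Pythagoreans’ own formulation:

    ```latex
    % Sketch of the standard parity argument. Suppose a ratio in lowest
    % terms named the diagonal: \sqrt{2} = p/q with \gcd(p,q) = 1. Then
    \[
    \frac{p}{q} = \sqrt{2}
    \;\Rightarrow\; p^{2} = 2q^{2}
    \;\Rightarrow\; p \text{ is even, say } p = 2k
    \;\Rightarrow\; 4k^{2} = 2q^{2}
    \;\Rightarrow\; q^{2} = 2k^{2}
    \;\Rightarrow\; q \text{ is even,}
    \]
    % so both p and q are even, contradicting \gcd(p,q) = 1. No ratio of
    % integers names \sqrt{2}: side and diagonal are incommensurable.
    ```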

    Big Data appeals to a mathematical nostalgia for certainty that cannot be sustained in contemporary culture. O’Neil provides careful examples of how history, social context, and the data chosen for algorithmic manipulation do not – indeed cannot – matter in this neo-Pythagorean world. Like Latour, she historicizes the practices and objects that the culture pretends are natural. The ideological and political nature of the input becomes invisible, especially when algorithms are granted special proprietary status that converts them to what Pasquale calls a “black box” (2016). It is a problematic claim, but it can be made without consequence because it speaks in the language of an ancient mathematical philosophy still heard in our culture,[5] especially in education where the multifoliate realities of art, music, and critical writing are quashed by forces such as the Core Curriculum and its pervasive valorization of standardization. Such strategies operate in fear of the inconsistency and incompleteness of any representational relationship, a fear of epistemological silence that has lurked in the background of Western mathematics from its beginnings. To the Greeks, the irrationals represented a sort of mathematical aphasia. The irrational numbers such as √2 thus obtained emblematic values far beyond their mathematical ones. They inserted an irremediable gap between the world and the “word” of mathematics. Such knowledge was catastrophic – adepts were murdered for revealing the incommensurability of side and diagonal.[6] More importantly, the discovery deeply fractured mathematics itself. The gap in the naming system split mathematics into algebra (numerical) and geometry (spatial), a division that persisted for almost 2,000 years. Little wonder that the Greeks restricted geometry to measurements that were not numerical, but rather were produced through the use of a straightedge and compass. Physical measurement by line segments and circles rather than by a numerical length effectively sidestepped the threat posed by the irrational numbers. Kline notes, “The conversion of all of mathematics except the theory of whole numbers into geometry . . . forced a sharp separation between number and geometry . . . at least until 1600” (1980, 105). Once we recognize that the Pythagorean theorem is a hypericon, i.e., a visual paradigm that pictures theory, we begin to see its extension into other fundamental mathematical “discoveries” such as Descartes’s creation of coordinate geometry. A deep anxiety about the gap between word and world is manifested in both mathematics as well as in contemporary claims about “Big Data.”

    The division between numerical algebra and spatial geometry remained a durable feature of Western mathematics until problematized by social change. Geometry offered an elegant axiomatic system that satisfied the hierarchical impulse of the culture, and it worked in concert with the Aristotelian logic that dominated notions of truth. The Aristotelian nous and the Euclidean axioms seemed similar in ways that justified the hierarchical structure of the church and of traditional politics. They were part of a social fabric that bespoke an extra-human order that could be dis-covered. But with the rise of commercial culture came the need for careful records, computations, risk assessments, interest calculations, and other algebraic operations. The tension between algebra and geometry became more acute and visible. It was in this new cultural setting that Descartes’s work appeared. Descartes’s 1637 publication of La Géométrie confronted the terrors of the irrationals, embodied in the geometry/algebra divide, by subordinating both algebra and geometry to a more abstract relationship. Turchin notes that Descartes re-unified geometry and arithmetic not by granting either priority or reducing either to the other; rather, in his language “the symbols do not designate number or quantities, but relations of quantities” (Turchin 1977, 196).

    Rotman directly links concepts of number to this shifting relationship of algebra and geometry and even to the status of numbers such as zero:

    During the fourteenth century, with the emergence of mercantile capitalism in Northern Italy, the handling of numbers passed . . . to merchants, artisan-scientists, architects . . . for whom arithmetic was an essential prerequisite for trade and technology . . . . The central role occupied by double-entry book-keeping (principle of the zero balance) and the calculational demands of capitalism broke down any remaining resistance to the ‘infidel symbol’ of zero. (1987, 7-8)

    The emergence of the zero is an index to these changes, not the revelation of a pre-existing, extra-human reality. Similarly, Alexander’s history of the calculus places its development in the context of Protestant notions of authority (2014, 140-57). He emphasizes that the methodologies of the sciences and mathematics began to serve as political models for scientific societies: “if reasonable men of different backgrounds and convictions could meet to discuss the workings of nature, why could they not do the same in matters that concerned the state?” (2014, 249). Again, in the case of the calculus, mathematics responds to the emerging forces of the Renaissance: individualism, capitalism, and Protestantism. Certainly, the ongoing struggle with irrational numbers extends from the Greeks to the Renaissance, but the contexts are different. For the Greeks, the generative nature of number was central. For 17th Century Europe, the material demands of commercial life converged with religious, economic, and political shifts to make number a re-presentational tool.

    The turmoil of that historical moment suggests the turmoil of our own era in the face of global warfare, climate change, over-population, and the litany of other catastrophes we perpetually await.[7] In both cases, the anxiety produces impulses to mathematize the world and thereby reveal a knowable “real.” The current corporate fantasy that the world is a simulation is the fantasy of non-mathematicians (Elon Musk and Sam Altman) who embed themselves in a techno-centric narrative of the power of their own tools to create themselves. While this inexpensive version of Baudrillard’s work might seem sophomoric, it nevertheless exposes the impulse to contain the visceral fear that a socially constructed world is no different from solipsism’s chaos. It seems a version of the freshman student’s claim that “Everything’s just opinion” or the plot of another Matrix film. They speak/act/claim that their construction of meaning is equal to any other — the old claim that Hitler and Mother Teresa are but two equally valid “opinions.” They don’t know the term/concept “social construction,” and their radical notions of the individual prevent them from recognizing the vast scope, depth, and stabilizing power of social structures. They are only the most recent example of how social change exacerbates the misuse of mathematics.

    Amid these sorts of epistemic shifts, Renaissance mathematics underwent its own transformations. Within a fifty-year span (1596-1646), Descartes, Newton, and Leibniz are born. Their major works appear, respectively, in 1637, 1666, and 1675, a burst of innovation that cannot be separated from the shifts in education, economics, religion, and politics that were then sweeping Europe. Porter notes that statistics emerges alongside the rising modern state of this era. Managing the state’s wealth required profiles of populations. Such mathematical profiling began in the mid-1600s, with the intent to describe the state’s wealth and human resources for the creation of “sound, well-informed state policy” (Porter 1986, 18). The notion of probabilities, samples, and models avoids the aspirations that shaped earlier mathematics by making mathematics purely descriptive. Hacking suggests that the delayed appearance of probability arises from five issues: 1) an obsession with determinism and personal fatalism; 2) the belief that God spoke through randomization and thus, a theory of the random was impious; 3) the lack of equiprobable events provided by standardized objects, e.g., dice; 4) the lack of economic drivers such as insurance and annuities; and 5) the lack of a workable calculus needed for the computation of probability distributions (Davis and Hersh 1981, 21). Hacking finds these explanations insufficient and suggests instead that the relocation of authority in nature, rather than in the words of authorities, led to the observation of frequencies.[8] Alongside the fierce opposition of the Church to the zero, understood as the absence of God, and to the calculus, understood as an abandonment of material number, the shifting mathematical landscape signals the changes that began to affect the longstanding status of number as a sort of prelapsarian language.

    Mathematics was losing its claims to completeness and consistency, and the incommensurables problematized those claims. Newton and Leibniz “de-problematized” irrationals, and opened mathematics to a new notion of approximation. The central claims about mathematics were not disproved; worse, they were set aside as unproductive conflations of differences between the continuous and the discrete. But because the church saw mathematics as “true” in a fashion inextricable from other notions of the truth, it held a special status. Calculus became a dangerous interest likely to call the Inquisition to action. Alexander locates the central issue as the irremediable conflict between the continuous and the discrete, something that had been the core of Zeno’s paradoxes (2014). The line of mathematical anxieties stretches from the Greeks into the 17th Century. These foundational understandings seem remote and abstract until we see how they re-appear in the current claims about the importance of “Big Data.” The term legitimates its claims by resonating with other responses to the anxiety of representation.

    The nature of the hypericon perpetuates the notion of a stable, knowable reality that rests upon a non-human order. In this view, mathematics is independent of the world. It existed prior to the world and does not depend on the world; it is not an emergent narrative. The mathematician discovers what is already there. While this viewpoint sees mathematics as useful, mathematics is prior to any of its applications and independent of them. The parallel to religious belief becomes obvious if we substitute the term “God” for “mathematics”; the notions of a self-existing, self-knowing, and self-justifying system are equally applicable (Davis and Hersh 1981, 232-3). Mathematics and religion share in a fundamental Western belief in the Ideal. Taken together, they reveal a tension between the material and the eternal that can be mediated by specific languages. There is no doubt that a simplified mathematics serves us when we are faced with practical problems such as staking out a rectangular foundation for a house, but beyond such short-term uses lie more consequential issues, e.g., the relation of the continuous to the discrete, and of the Ideal to the socially constructed. These larger paradoxes remain hidden when assertions of completeness, consistency, and certainty go unchallenged. In one sense, the data fetishists are simply the latest incarnation of a persistent problem: understanding mathematics as culturally situated.

    Again, historicizing this problem addresses the widespread willingness to accept their totalistic claims. And historicizing these claims requires a turn to established critical techniques. For example, Rotman’s history of the zero turns to Derrida’s Of Grammatology to understand the forces that complicated and paralyzed the acceptance of zero into Western mathematics (1987). He turns to semiotics and to the work of Ricoeur to frame his reading of the emergence of the zero in the West during the Renaissance. Rotman, Alexander, Desrosières, and a host of mathematical historians recognize that the nature of mathematical authority has evolved. The evolution lurks in the role of the irrational numbers, in the partial claims of statistics, and in the approximations of the calculus. The various responses are important as evidence of an anxiety about the limits of representation. The desire to resolve such arguments seems revelatory. All share an interest in the gap between the aspirations of systematic language and its object: the unnamable. That gap is iconic, an emblem of its limits and the functions it plays in the generation of novel responses to the threat of an inarticulable void; its history exposes the powerful attraction of the claims made for Big Data.

    By the late 1800s, questions of systematic completeness and consistency grew urgent. For example, they appeared in the competing positions of Frege and Hilbert, and they resonated in the direction David Hilbert gave to 20th Century mathematics with his famed 23 problems (Blanchette 2014). The second of these specifically addressed the problem of proving that mathematical systems could be both complete and consistent. This question deeply influenced figures such as Bertrand Russell, Ludwig Wittgenstein, and others.[9] Hilbert’s question was answered in 1931 by Gödel’s theorems on the inherent limits of arithmetic systems. Gödel’s first theorem demonstrated that consistent axiomatic systems rich enough to express arithmetic would necessarily contain true statements that could be neither proven nor disproven; his second theorem demonstrated that no such system could prove its own consistency. While mathematicians often take care to note that his work addresses a purely mathematical problem, it nevertheless is read metaphorically. As a metaphor, it connects the problematic relationship of natural and mathematical languages. This seems inevitable because it led to the collapse of the mathematical aspiration for a wholly formal language that does not require what is termed ‘natural’ language, that is, for a system that did not have to reach outside of itself. Just as John Craig’s work exemplifies the epistemological anxieties of the late seventeenth century,[10] so also does Gödel’s work identify a sustained attempt of his own era to demonstrate that systematic languages might be without gaps.

    Gödel’s theorems rely on a system that creates specialized numbers for symbols and the operations that relate them. This second-order numbering enabled him to move back and forth between the logic of statements and the codes by which they were represented. His theorems respond to an enduring general hope for complete and consistent mappings of the world with words, and each embeds a representational failure. Craig was interested in the loss of belief in the gospels; Pythagoras feared the gaps in the number line represented by the irrational numbers; and Gödel identified the incompleteness of axiomatic systems and the unprovability of their consistency. To the dominant mathematics of the early 20th Century, the value of the question to which Gödel addresses himself lies in the belief that an internally complete mathematical map would be the mark of either of two positions: 1) the purely syntactic orderliness of mathematics, one that need not refer to any experiential world (this is the position of Frege, Russell, and Hilbert); or 2) the emergence of mathematics alongside concrete, human experience. Goldstein argues that these two dominant alternatives of the late nineteenth and early twentieth centuries did not consider the aprioricity of mathematics to constitute an important question, but Gödel offered his theorems as proofs that served exactly that idea. His demonstration of incompleteness does not signal a disorderly cosmos; rather, it argues that there are arithmetic truths that lie outside of formalized systems; as Goldstein notes, “the criteria for semantic truth could be separated from the criteria for provability” (2006, 51). This was an argument for mathematical Platonism. Goldstein’s careful discussion of the cultural framework and the meta-mathematical significance of Gödel’s work emphasizes that it did not argue for the absence of any extrinsic order to the world (51). Rather, Gödel was consciously demonstrating the defects in a mathematical project begun by Frege, addressed in the work of Russell and Whitehead, and enshrined by Hilbert as essential for converting mathematics into a profoundly isolated system whose orderliness lay in its internal consistency and completeness.[11] Similarly, his work also directly addressed questions about the a priori nature of mathematics challenged by the Vienna Circle. Paradoxically, by demonstrating that a foundational system – arithmetic – could not be proved both consistent and complete from within, the argument that mathematics was simply a closed, self-referential system could be challenged and opened to meta-mathematical claims about epistemological problems.
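
    The numbering mechanism itself is simple enough to sketch. The following toy Python illustration is ours (the alphabet and codes are arbitrary choices, not Gödel’s originals): each symbol’s code becomes the exponent of the next prime, so an entire formula survives as a single integer, recoverable by factorization, and statements about formulas become statements about numbers.

    ```python
    from itertools import count

    def primes():
        """Yield 2, 3, 5, 7, ... by trial division (fine at this scale)."""
        found = []
        for n in count(2):
            if all(n % p for p in found):
                found.append(n)
                yield n

    # A toy alphabet with arbitrary positive integer codes.
    CODES = {"0": 1, "s": 3, "=": 5, "+": 7, "(": 9, ")": 11}
    DECODE = {v: k for k, v in CODES.items()}

    def godel_number(formula):
        """Encode a symbol string as 2^c1 * 3^c2 * 5^c3 * ..."""
        g = 1
        for p, sym in zip(primes(), formula):
            g *= p ** CODES[sym]
        return g

    def godel_decode(g):
        """Recover the symbol string by reading off prime exponents."""
        out = []
        for p in primes():
            if g == 1:
                break
            e = 0
            while g % p == 0:
                g, e = g // p, e + 1
            out.append(DECODE[e])
        return "".join(out)

    n = godel_number("s0+s0=ss0")   # '1 + 1 = 2' in successor notation
    assert godel_decode(n) == "s0+s0=ss0"
    ```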

    Gödel’s work, among other things, argues for essential differences between human thought and mathematics. It has become imbricated in a variety of discourses about representation, the nature of the mind, and the nature of language. Goldstein notes:

    The structure of Gödel’s proof, the use it makes of ancient paradox [the liar’s paradox], speaks at some level, if only metaphorically, to the paradoxes in the tale that the twentieth century told itself about some of its greatest intellectual achievements – including, of course, Gödel’s incompleteness theorems. Perhaps someday a historian of ideas will explain the subjectivist turn taken by so many of the last century’s most influential thinkers, including not only philosophers but hard-core scientists, such as Heisenberg and Bohr. (2006, 51)

    At the least, his work participated in a major consideration of three alternative understandings of symbolic systems: as isolated, internally ordered syntactic systems, as accompaniments of experience in the material world, or as the a priori realities of the Ideal. Whatever the immensely complex issues of these various positions, Gödel is the key meta-mathematician/logician whose work describes the limits of mathematical representation through an elegant demonstration that arithmetic systems – axiomatic systems – were inevitably incomplete and unable to prove their own consistency. Depending on one’s aspirations for language, this is either a great catastrophe or an opening to an infinite world of possibility where the goal is to deploy a paradoxical stance that combines the assertion of meaning with its cancellation. This double position addresses the problem of representational completeness.

    This anxiety became acute during the first half of the twentieth century as various discourses deployed strategies that exploited this heightened awareness of the intrinsic incompleteness and inconsistency of systematic knowledge. Whatever their disciplinary differences – neurology, psychology, mathematics – they nonetheless shared the sense that recognizing these limits was an opportunity to understand discourse both from within narrow disciplinary practices and from without in a larger logical and philosophical framework that made the aspiration toward completeness quaint, naïve, and unproductive. They situated the mind as a sort of boundary phenomenon between the deployment of discourses and an extra-linguistic reality. In contrast to the totalistic claims of corporate spokesmen and various predictive software, this sensibility was a recognition that language might always fail to re-present its objects, but that those objects were nonetheless real and expressible as a function of the naming process viewed from yet another position. An important corollary was that these gaps were not only a token for the interplay of word and world, but were also an opportunity to illuminate the gap itself. In short, symbol systems seemed to stand as a different order of phenomena than whatever they proposed to represent, and the result was a burst of innovative work across a variety of disciplines.

    Data enthusiasts sometimes participate in a discredited mathematics, but they do so in powerfully nostalgic ways that resonate with the amorphous Idealism infused in our hierarchical churches, political structures, aesthetics, and epistemologies. Thus, Big Data enthusiasts speak through the residue of a powerful historical framework to assert their own credibility. For these New Pythagoreans, mathematics remains a quasi-religious undertaking whose complexity, consistency, sign systems, and completeness assert a stable, non-human order that keeps chaos at bay. However, they are stepping into an issue more fraught than simply the misuses and misunderstanding of the Pythagorean Theorem. The historicized view of mathematics and its popular invocation diverge at the point where anxieties about the representational failure of languages become visible. We not only need to historicize our understanding of mathematics, but also to identify how popular and commercial versions of mathematics are nostalgic fetishes for certainty, completeness, and consistency. Thus, the authority of algorithms has less to do with their predictive power than with their connection to a tradition rooted in the religious frameworks of Pythagoreanism. Critical methods familiar to the humanities – semiotics, deconstruction, psychology – build a sort of critical braid that not only re-frames mathematical inquiry, but places larger questions about the limits of human knowledge directly before us; this braid forces an epistemological modesty that is eventually ethical and anti-authoritarian in ways that the New Pythagoreans rarely are.

    Immodest claims are the hallmark of digital fetishism, and are often unabashedly conscious. Chris Anderson, while Editor-in-Chief of Wired magazine, infamously argued that “the data deluge makes the scientific method obsolete” (2008). He claimed that distributed computing, cloud storage, and huge sets of data made traditional science outmoded. He asserted that science would become mathematics, a mathematical sorting of data to discover new relationships:

    At the petabyte scale, information is not a matter of simple three- and four-dimensional taxonomy and order but of dimensionally agnostic statistics. It calls for an entirely different approach, one that requires us to lose the tether of data as something that can be visualized in its totality. It forces us to view data mathematically first and establish a context for it later.

    “Agnostic statistics” would be the mechanism for precipitating new findings. He suggests that mathematics is somehow detached from its contexts and represents the real through its uncontaminated formal structures. In Anderson’s essay, the world is number. This neo-Pythagorean claim quickly gained attention, and then wilted in the face of scholarly responses such as that of Pigliucci (2009, 534).

    Anderson’s claim was both a symptom and a reinforcement of traditional notions of mathematics that extend far back into Western history. Its explicit notions of mathematics stirred two kinds of anxiety: one reflected a fear of a collapsed social project (science) and the other reflected a desperate hunger for a language – mathematics – that penetrated the veil drawn across reality and made the world knowable. Whatever the collapse of his claim, similar ones such as those of the facial phrenologists continue to appear. Without history – mathematical, political, ideological – “data” acquires a material status much as number did for the Greeks, and this status enables statements of equality between the messiness of reality and the neatness of formal systems. Part of this confusion is a common misunderstanding of the equals sign in popular culture. The “sign” is a relational function, much as the semiotician’s signified and signifier combine to form a “sign.” However, when we mistake the “equals sign” for a directional, productive operation, the nature of mathematics loses its availability to critique. It becomes a process outside of time that generates answers by re-presenting the real in a language. Where once a skeptical Pythagorean might be drowned for revealing the incommensurability of side and diagonal, proprietary secrecy now threatens a sort of legalized financial death for those who violate copyright (Pasquale 2016, 142). Pasquale identifies the “creation of invisible powers” as a hallmark of contemporary, algorithmic culture (2016, 193). His invaluable work recovers the fact that algorithms operate in a network of economic, political, and ideological frameworks, and he carefully argues the role of legal processes in resisting the control that algorithms can impose on citizens.
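
    The relational/directional distinction is easy to make concrete in code; the following minimal Python illustration is ours, not the authors’:

    ```python
    # In mathematics, '=' names a symmetric relation: '2 + 2 = 4' and
    # '4 = 2 + 2' assert the same thing. In most programming languages,
    # the same glyph is directional and productive: it creates a binding.

    x = 2 + 2            # assignment: evaluate the right side, store it in x
    print(x == 4)        # True -- '==' is the relational test
    print(4 == 2 + 2)    # True -- the relation reads in either direction

    # The directional reading cannot be reversed:
    #   x + 1 = 5
    # is a SyntaxError in Python, because '=' produces a value for a name
    # rather than asserting a relation between two expressions.
    ```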

    Pasquale’s language is not mathematical, but it shares with scholars like Rotman and Goldstein an emphasis on historical and cultural context. The algorithm is made accountable if we think of it as an act whose performance instantiates digital identities through powerful economic, political, and ideological narratives. The digitized individual does not exist until it becomes the subject of such a performance, a performance which is framed much as any other performance is framed: by the social context, by repetition, and through embodiment. Digital individuals come into being when the algorithmic act is performed, but they are digital performances because of the irremediable gap between any object and its re-presentation. In short, they are socially constructed. This would be of little import except that these digital identities begin as proxies for real bodies, but the diagnoses and treatments are imposed on real, social, psychological, flesh beings. The difference between digital identity and human identity can be ignored if the mathematized self is isomorphic with the human self. Thus, algorithmic acts entangle the input > algorithm > output sequence by concealing layers of problematic differences: digital self and human self; mathematics and the Real; test inputs and test outputs; scaling; and input and output. The sequence loses its tidy sequential structure when we recognize that the outputs are themselves data and often re-enter the algorithm’s computations by their transfer to third parties whose information returns for re-processing. A somewhat better version of the flow would be data1 > algorithm > output > data2 > algorithm > output > data3 . . . with the understanding that any datum might re-enter the process. The sequence suggests how an object is both the subject of its context and a contributor to that context. The threat of a constricting output looms precisely because there is decreasing room for what de Certeau calls “la perruque” (1988, 25), i.e., the inefficiencies where unplanned innovation appears. And like any text, it requires a variety of analytic strategies.
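
    A minimal sketch of that loop in Python (our toy model, not any deployed system): each round’s output is appended to the data that feeds the next round, so outputs become inputs.

    ```python
    def algorithm(data):
        """Stand-in model: 'predict' the mean of everything it has seen."""
        return sum(data) / len(data)

    data = [1.0, 2.0, 3.0]            # data1: already-curated initial inputs
    for n in range(1, 4):
        output = algorithm(data)      # output_n
        data.append(output)           # output_n re-enters as part of data_{n+1}
        print(f"round {n}: output = {output:.3f}, dataset size = {len(data)}")
    ```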

    We have learned to think of algorithms in directional terms. We understand them as transformative processes that operate upon data sets to create outputs. The problematic relationships of data > algorithm > output become even more visible when we recognize that data sets have already been collected according to categories and processes that embody political, economic, and ideological biases. The ideological origin of the collected data – the biases of the questions posed in order to generate “inputs” – is yet another kind of black box, a box prior to the black box of the algorithm, a prior structure inseparable from the algorithm’s hunger for (to use the mathematicians’ language) a domain upon which it can act to produce a range of results. On the one hand, the nature of the algorithm controls what items from the domain (data set) can be used; on the other hand, the nature of the data set controls what the algorithm has available to act upon and transform into descriptive and prescriptive claims. The inputs are as much a black box as the algorithm itself. Thus, opaque algorithms operate upon opaque data sets (Pasquale 2016, 204) in ways that nonetheless embody the inescapable “politics of large numbers” that is the topic of Desrosières’s history of statistical reasoning (2002). This interplay forces us to recognize that the algorithm inherits biases, and that they are then compounded by operations within these two algorithmic boxes to become doubly biased outputs. It might be more revelatory to recast the algorithmic process as “stimuli” > algorithm > “responses.” Re-naming “input” as “stimuli” emphasizes the selection process that precedes the algorithmic act; re-naming “output” as “response” establishes the entire process as human, cultural, and situated. This is familiar territory to psychology. Digital technologies are texts whose complexity emerges when approached using established tools for textual analysis. Rotman and other mathematicians directly state their use of semiotics. They turn to phenomenology to explicate the reader/writer interaction, and they approach mathematical texts with terms like narrator, self-reference, and recursion. Most of all, they explore the problem of mathematical representation when mathematics itself is complicated by its referential, formal, and psychological statuses.
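    A minimal simulation (hypothetical numbers throughout; Python chosen only for brevity) can show the interplay of the two black boxes. The “algorithm” below is a plain share-of-records calculation with no bias of its own; the skew enters entirely through the patrol schedule that generated its domain, i.e. through the “stimuli” that precede the algorithmic act:

        # Hypothetical sketch: a 'neutral' calculation inherits the bias of
        # how its domain was collected. All figures are invented.

        incidents_per_patrol = {"uptown": [2, 3, 2, 4], "downtown": [2, 3, 2, 4]}

        # The black box *before* the algorithm: downtown is patrolled three
        # times as often, so it contributes three times as many records.
        records = ([("uptown", n) for n in incidents_per_patrol["uptown"]]
                   + [("downtown", n) for n in incidents_per_patrol["downtown"]] * 3)

        def share_of_incidents(records, area):
            """A stand-in 'algorithm': an area's share of all recorded incidents."""
            total = sum(n for _, n in records)
            return sum(n for a, n in records if a == area) / total

        for area in ("uptown", "downtown"):
            print(area, round(share_of_incidents(records, area), 2))
        # uptown 0.25, downtown 0.75: identical behavior, doubly biased output

    The underlying behavior of the two areas is identical; only the collection schedule differs, yet the output announces a “hotspot.”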

    The fetishization of mathematics is a fundamental strategy for exempting digital technologies from theory, history, and critique. Two responses are essential: first, to clarify the nostalgic mathematics at work in the mathematical rhetoric of Big Data and its tools; and second, to offer analogies that step beyond naïve notions of re-presentation to more productive critiques. Analogy is essential because analogy is itself a performance of the anti-representational claim that digital technologies need to be understood as socially constructed by the same forces that instantiate any technology. Bruno Latour frames the problem of the critical stance as three-dimensional:

    The critics have developed three distinct approaches to talking about our world: naturalization, socialization and deconstruction . . . . When the first speaks of naturalized phenomena, then societies, subjects, and all forms of discourse vanish. When the second speaks of fields of power, then science, technology, texts, and the contents of activities disappear. When the third speaks of truth effects, then to believe in the real existence of brain neurons or power plays would betray enormous naiveté. Each of these forms of criticism is powerful in itself but impossible to combine with the other. . . . Our intellectual life remains recognizable as long as epistemologists, sociologists, and deconstructionists remain at arm’s length, the critique of each group feeding on the weaknesses of the other two. (1993, 5-6)

    Latour then asks, “Is it our fault if the networks are simultaneously real, like nature, narrated, like discourse, and collective like society?” (6). He goes on to assert, “Analytic continuity has become impossible” (7). Similarly, Rotman’s history of the zero finds that the concept problematizes the hope that a “field of entities” exists prior to “the meta-sign which both initiates the signifying system and participates within it as a constituent sign”; he continues, “the simple picture of an independent reality of objects providing a pre-existing field of referents for signs conceived after them . . . cannot be sustained” (1987, 27). Our own approach is heterogeneous; we use notions of fetish, re-presentation, and Gödelian metaphor to try to bypass the critical immunity that naturalistic mathematical claims confer on digital technologies.

    Whether we use Latour’s description of the mutually exclusive methods of talking about the world – naturalization, socialization, deconstruction – or Rotman’s three starting points for the semiotic analysis of mathematical signs – referential, formal, and psychological – we can contextualize the claims of the Big Data fetishists so that the manifestations of Big Data thinking – policing practices, financial privilege, educational opportunity – are not misrepresented as only mathematical/statistical questions about assessing the results of supposedly neutral interventions, decisions, or judgments. If we are confined to those questions, we will operate only within the referential domains described by Rotman or the realm of naturalization described by Latour. Claims of a-contextual validity deny the consequences of their own contextual status by asserting that operations, uses, and conclusions are exempt from the aggregated array of partial theorizations applied, in this case, to mathematics. This historical/critical application reveals the contradictory world concealed and perpetuated by the corporatized mathematics of contemporary digital culture. However, deploying a constellation of critical methods – historical, semiotic, psychological – prevents the critique from falling prey to the totalism that afflicts the thinking of these New Pythagoreans. This array includes concepts such as fetishization from the pre-digital world of psychoanalysis.

    The concept of the fetish has fallen on hard times as the star of psychoanalysis sinks into the West’s neurochemical sea. But its original formulation remains useful because it seeks to address the gap between representational formulas and their objects. For example – drawing on the quintessential heterosexual, male figure who is central to psychoanalysis – the male shoe fetishist makes no distinction between a pair of Louboutins and the “normal” object of his sexual desire. Fenichel asserts (1945, 343) that such fetishization is “an attempt to deny a truth known simultaneously by another part of the personality,” and enables the use of denial. Such explanations may seem quaint, but that is not the point. The point is that within one of the most powerful metanarratives of the past century – psychoanalysis – scientists faced the contorted and defective nature of human symbolic behavior in its approach to a putative “real.” The fetish offers an illusory real that protects the fetishist against the complexities of the real. Similarly, the New Pythagoreans of Big Data offer an illusory real – a misconstrued mathematics – that often paralyzes resistance to their profit-driven, totalistic claims. In both cases, the fetish becomes the “real” while simultaneously protecting the fetishist from contact with whatever might be more human and more complex.

    Wired Magazine’s “daily fetish” seems an ironic reversal of the term’s functional meaning. Its steady stream of technological gadgets has an absent referent, a hyperreal, as Baudrillard styles it, that is exactly the opposite of the material “real” that psychoanalysis sees as the motivation of the fetish. In lived life, the anxiety is provoked by the real; in digital fetishism, the anxiety is provoked by the absence of the real. The anxiety of absence provokes the frenzied production of digital fetishes. Their inevitable failure – because representation always fails – drives the proliferation of new, replacement fetishes, and these become a networked constellation that forms a sort of simulacrum: a model of an absence that the model paradoxically attempts to fill. Each failure accentuates the gap, thereby accentuating the drive toward yet another digital embodiment of the missing part. Industry newsletters exemplify the frantic repetition required by this worldview. For example, Edsurge proudly reports an endless stream of digital edtech products, each substituting for the awkward, fleshly messiness of learning. And each substitution claims to validate itself via mathematical claims of representation. And almost all fade away as the next technology takes its place. Endless succession.

    This profusion of products clamoring to be the “real” object suggests a sort of cultural castration anxiety, a term that might prove less outmoded if we note the preponderance of males in the field who busily give birth to objects with the characteristics of the living beings they seek to replace.[12] The absence at the core of this process is the unbridgeable gap between word and world. Mathematics is especially useful to such strategies because it is embedded in the culture as both the discoverer and validator of objective true/false judgments. These statements are understood to demonstrate a reality that “exists prior to the mathematical act of investigating it” (Rotman 2000, 6). It provides the certainty, the “real” that the digital fetish simultaneously craves and fears. Mathematics short-circuits the problematic question that drives the anxiety about a knowable “real.” The point here is not to revive psychoanalytic thinking, but rather to see how an anxiety mutates and invites the application of critical traditions that themselves embody a response to the incompleteness and inconsistency of sign systems. The psychological model expands into the destabilized social world of digital culture.

    The notion of mathematics as a complete and consistent equivalent of the real is a longstanding feature of Western thought. It both creates and is created by the human need for a knowable real. Mathematics reassures the culture because its formal characteristics seem to operate without referents in the real world, and thus its language seems to become more real than any iteration of its formal processes. However, within mathematical history, the story is more convoluted, in part because of the immense practical value of applied mathematics. While semiotic approaches to the history engage and describe the social construction of mathematics, an important question remains about the completeness and consistency of mathematical systems. The history of this concern connects both the technical question and the popular interest in the power of languages – natural and/or mathematical – to represent the real. Again, these are not just technical, expert questions; they leak into popular metaphor because they embody a larger cultural anxiety about a knowable real. If Pythagorean notions have affected the culture for 2500 years, we want to claim that contemporary culture embodies the anxiety of uncertainty that is revealed not only in its mathematics, but also in the contemporary arguments about algorithmic bias, completeness, and consistency.

    The nostalgia for a fully re-presentational sign system becomes paired with the digital technologies – software, hardware, networks, query strategies, algorithms, black boxes – that characterize daily life. However, this nostalgic rhetoric has a naïveté that embodies the craving for a stable and knowable external world. The culture often responds to it through objects inscribed with the certainty imputed to mathematics, and thus these digital technologies are felt to satisfy a deep need. The problematic nature of mathematics matters little in terms of personalized shopping choices or customizing the ideal playlist. Although these systems rarely achieve the goal of “knowing what you want before you want it,” we seldom balk at the claim because the stakes are so low. However, where these claims have life-altering, and in some cases life-and-death, implications – education, policing, health care, credit, safety net benefits, parole, drone targets – we need to understand them so they can be challenged and, where needed, resisted. Resistance addresses two issues:

    1. That the traditional mystery and power of number seem to justify the refusal of transparency. The mystified tools point upward to the supposed mysterium of the mathematical realm.
    2. That the genuflection before the mathematical mysterium has an insatiable hunger for illustrations that show the world is orderly and knowable.

    Together, these two positions combine to assert the mythological status of mathematics and set it in opposition to critique. However, this mythology is vulnerable on several fronts. As Pasquale makes clear, legislation – language in action – can begin the demystification: proprietary claims are mundane imitations of the old Pythagorean illusions, and outside of political pressure and legislation there is little incentive for companies to open their algorithms to auditing. Once pried open by legislation, however, the wizard behind the curtain and the Mechanical Turk show their hands. With transparency comes another opportunity: demythologizing technologies that fetishize the re-presentational nature of mathematics.

    _____

    Chris Gilliard’s scholarship concentrates on privacy, institutional tech policy, digital redlining, and the re-inventions of discriminatory practices through data mining and algorithmic decision-making, especially as these apply to college students.

    Hugh Culik teaches at Macomb Community College. His work examines the convergence of systematic languages (mathematics and neurology) in Samuel Beckett’s fiction.


    _____

    Notes

    [1] Rotman’s work, along with Amir Alexander’s cultural history of the calculus (2014) and Rebecca Goldstein’s (2006) placement of Gödel’s theorems in the historical context of mathematics’ conceptual struggle with the consistency and completeness of systems, exemplifies the movement to historicize mathematics. Alexander and Rotman are mathematicians, and Goldstein is a logician.

    [2] Other mathematical concepts have hypericonic status. For example, triangulation serves psychology as a metaphor for a family structure that pits two members against a third. Politicians “triangulate” their “position” relative to competing viewpoints. But because triangulation works in only two dimensions, it produces gross oversimplifications in other contexts. Nora Culik (pers. comm.) notes that a better metaphor would be multilateration, which measures the differences in a signal’s arrival times at several known points to generate possible locations for an unknown point; these locations take the shape of a hyperboloid, a metaphor that allows for uncertainty in understanding multiply determined concepts. Both re-present an object’s position, but each carries implicit ideas of space.

    [3] Faith in the representational power of mathematics is central to hedge funds. Bridgewater Associates, a fund that manages more than US$150 billion, is building software to automate staffing for strategic planning. The software seeks to model the cognitive structure of founder Raymond Dalio and is meant to perpetuate his mind beyond his death. Dalio variously refers to the project as “The Book of the Future,” “The One Thing,” and “The Principles Operating System.” The project has drawn the enthusiastic attention of popular publications such as The Wall Street Journal, Forbes, Wired, Bloomberg, and Fortune. The project’s model seems to operate on two levels: first, as a representation of Dalio’s mind, and second, as a representation of the dynamics of investing.

    [4] Numbers are divided into categories that grow in complexity. The development of numbers is an index to the development of the field (Kline, Mathematical Thought, 1972). For a careful study of the problematic status of zero, see Brian Rotman, Signifying Nothing: The Semiotics of Zero (1987). Amir Aczel, Finding Zero: A Mathematician’s Odyssey to Uncover the Origins of Numbers (2015) offers a narrative of the historical origins of number.

    [5] Eugene Wigner (1959) asserts an ambiguous claim for a mathematizable universe. Responses include Max Tegmark’s “The Mathematical Universe” (2008), which sees the question as imbricated in a variety of computational, mathematical, and physical systems.

    [6] The anxiety of representation characterizes the shift from the literary moderns to the postmodern. For example, Samuel Beckett’s intense interest in mathematics and his strategies – literalization and cancellation – typify the literary responses to this anxiety. In his first published novel, Murphy (1938), one character mentions “Hypasos the Akousmatic, drowned in a mud puddle . . . for having divulged the incommensurability of side and diagonal” (46). Beckett uses detailed references to Descartes, Geulincx, Gödel, and 17th Century mathematicians such as John Craig to literalize the representational limits of formal systems of knowledge. Andrew Gibson’s Beckett and Badiou (2006) provides a nuanced assessment of mathematics, literature, and culture in Beckett’s work.

    [7] See Frank Kermode, The Sense of an Ending: Studies in the Theory of Fiction with a New Epilogue (2000) for an overview of the apocalyptic tradition in Western culture and the totalistic responses it evokes in politics. While mathematics dealt with indeterminacy, incompleteness, inconsistency, and failure, the political world simultaneously saw a countervailing regressive collapse: Mein Kampf in 1925; Hitler’s appointment as Chancellor of Germany in 1933; the Soviet Gulag in 1934. The fascist bent of Ezra Pound, T. S. Eliot’s After Strange Gods, and D. H. Lawrence’s Mexican fantasies suggest the anxiety of re-presentation that gripped the culture.

    [8] Davis and Hersh (21) divide probability theory into three aspects: 1) theory, which has the same status as any other branch of mathematics; 2) applied theory that is connected to experimentation’s descriptive goals; and 3) applied probability for practical decisions and actions.

    [9] For primary documents, see Jean Van Heijenoort, From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931 (1967). Ernest Nagel and James Newman, Gödel’s Proof (1958) explains the steps of Gödel’s proofs and carefully restricts their metaphoric meanings; Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid (1979) places the work in the conceptual history that now leads to the possibility of artificial intelligence.

    [10] See Richard Nash, John Craige’s Mathematical Principles of Christian Theology (1991) for a discussion of the 17th Century mathematician and theologian who attempted to calculate the rate of decline of faith in the Gospels so that he would know the date of the Apocalypse. His contributions to calculus and statistics emerge in a context we find absurd, even if his friend, Isaac Newton, found them valuable.

    [11] An equally foundational problem – the mathematics of infinity – occupies a similar position to the questions addressed by Gödel. Cantor’s opening of set theory exposes and solves the problems it poses to formal mathematics.

    [12] For the historical appearances of the masculine version of this anxiety, see Dennis Todd’s Imagining Monsters: Miscreations of the Self in Eighteenth Century England (1995).

    _____

    Works Cited

    • Aczel, Amir. 2015. Finding Zero: A Mathematician’s Odyssey to Uncover the Origins of Numbers. New York: St. Martin’s Griffin.
    • Alexander, Amir. 2014. Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World. New York: Macmillan.
    • Anderson, Chris. 2008. “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete.” Wired Magazine 16, no. 7.
    • Beckett, Samuel. 1957. Murphy (1938). New York: Grove.
    • Berk, Richard. 2011. “Q&A with Richard Berk.” Interview by Greg Johnson. PennCurrent (Dec 15).
    • Blanchette, Patricia. 2014. “The Frege-Hilbert Controversy.” In Edward N. Zalta, ed., The Stanford Encyclopedia of Philosophy.
    • boyd, danah, and Kate Crawford. 2012. “Critical Questions for Big Data.” Information, Communication & Society 15:5. doi:10.1080/1369118X.2012.678878.
    • de Certeau, Michel. 1988. The Practice of Everyday Life. Translated by Steven Rendall. Berkeley: University of California Press.
    • Davis, Philip and Reuben Hersh. 1981. Descartes’ Dream: The World According to Mathematics. Boston: Houghton Mifflin.
    • Desrosières, Alain. 2002. The Politics of Large Numbers: A History of Statistical Reasoning. Translated by Camille Naish. Cambridge: Harvard University Press.
    • Eagle, Christopher. 2007. “‘Thou Serpent That Name Best’: On Adamic Language and Obscurity in Paradise Lost.” Milton Quarterly 41:3. 183-194.
    • Fenichel, Otto. 1945. The Psychoanalytic Theory of Neurosis. New York: W. W. Norton & Company.
    • Gibson, Andrew. 2006. Beckett and Badiou: The Pathos of Intermittency. New York: Oxford University Press.
    • Goldstein, Rebecca. 2006. Incompleteness: The Proof and Paradox of Kurt Gödel. New York: W.W. Norton & Company.
    • Guthrie, William Keith Chambers. 1962. A History of Greek Philosophy: Vol.1 The Earlier Presocratics and the Pythagoreans. Cambridge: Cambridge University Press.
    • Hofstadter, Douglas. 1979. Gödel, Escher, Bach: An Eternal Golden Braid; [a Metaphoric Fugue on Minds and Machines in the Spirit of Lewis Carroll]. New York: Basic Books.
    • Kermode, Frank. 2000. The Sense of an Ending: Studies in the Theory of Fiction with a New Epilogue. New York: Oxford University Press.
    • Kline, Morris. 1990. Mathematics: The Loss of Certainty. New York: Oxford University Press.
    • Latour, Bruno. 1993. We Have Never Been Modern. Translated by Catherine Porter. Cambridge: Harvard University Press.
    • Mitchell, W. J. T. 1995. Picture Theory: Essays on Verbal and Visual Representation. Chicago: University of Chicago Press.
    • Nagel, Ernest and James Newman. 1958. Gödel’s Proof. New York: New York University Press.
    • Office of Educational Technology at the US Department of Education. “Jose Ferreira: Knewton – Education Datapalooza.” Filmed [November 2012]. YouTube video, 9:47. Posted [November 2012]. https://youtube.com/watch?v=Lr7Z7ysDluQ.
    • O’Neil, Cathy. 2016. Weapons of Math Destruction. New York: Crown.
    • Pasquale, Frank. 2016. The Black Box Society: The Secret Algorithms that Control Money and Information. Cambridge: Harvard University Press.
    • Pigliucci, Massimo. 2009. “The End of Theory in Science?”. EMBO Reports 10, no. 6.
    • Porter, Theodore. 1986. The Rise of Statistical Thinking, 1820-1900. Princeton: Princeton University Press.
    • Rotman, Brian. 1987. Signifying Nothing: The Semiotics of Zero. Stanford: Stanford University Press.
    • Rotman, Brian. 2000. Mathematics as Sign: Writing, Imagining, Counting. Stanford: Stanford University Press.
    • Tegmark, Max. 2008. “The Mathematical Universe.” Foundations of Physics 38 no. 2: 101-150.
    • Todd, Dennis. 1995. Imagining Monsters: Miscreations of the Self in Eighteenth Century England. Chicago: University of Chicago Press.
    • Turchin, Valentin. 1977. The Phenomenon of Science. New York: Columbia University Press.
    • Van Heijenoort, Jean. 1967. From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931. Vol. 9. Cambridge: Harvard University Press.
    • Wigner, Eugene P. 1959. “The Unreasonable Effectiveness of Mathematics in the Natural Sciences.” Richard Courant Lecture in Mathematical Sciences delivered at New York University, May 11. Reprinted in Communications on Pure and Applied Mathematics 13:1 (1960). 1-14.
    • Wu, Xiaolin, and Xi Zhang. 2016. “Automated Inference on Criminality using Face Images.” arXiv preprint: 1611.04135.
  • Richard Hill — Knots of Statelike Power (Review of Harcourt, Exposed: Desire and Disobedience in the Digital Age)

    Richard Hill — Knots of Statelike Power (Review of Harcourt, Exposed: Desire and Disobedience in the Digital Age)

    a review of Bernard Harcourt, Exposed: Desire and Disobedience in the Digital Age (Harvard, 2015)

    by Richard Hill

    ~

    This is a seminal and important book, which should be studied carefully by anyone interested in the evolution of society in light of the pervasive impact of the Internet. In a nutshell, the book documents how and why the Internet turned from a means to improve our lives into what appears to be a frightening dystopia driven by the collection and exploitation of personal data, data that most of us willingly hand over with little or no care for the consequences. “In our digital frenzy to share snapshots and updates, to text and videochat with friends and lovers … we are exposing ourselves‒rendering ourselves virtually transparent to anyone with rudimentary technological capabilities” (page 13 of the hardcover edition).

    The book meets its goals (25) of tracing the emergence of a new architecture of power relations, documenting its effects on our lives, and exploring how to resist and disobey (though this last only succinctly). As the author correctly says (28), metaphors matter, and we need to re-examine them closely, in particular the so-called free flow of data.

    As the author cogently points out, quoting Media Studies scholar Siva Vaidhyanathan, we “assumed digitization would level the commercial playing field in wealthy economies and invite new competition into markets that had always had high barriers to entry.” We “imagined a rapid spread of education and critical thinking once we surmounted the millennium-old problems of information scarcity and maldistribution” (169).

    “But the digital realm does not so much give us access to truth as it constitutes a new way for power to circulate throughout society” (22). “In our digital age, social media companies engage in surveillance, data brokers sell personal information, tech companies govern our expression of political views, and intelligence agencies free-ride off e-commerce. … corporations and governments [are enabled] to identify and cajole, to stimulate our consumption and shape our desires, to manipulate us politically, to watch, surveil, detect, predict, and, for some, punish. In the process, the traditional limits placed on the state and on governing are being eviscerated, as we turn more and more into marketized malleable subjects who, willingly or unwillingly, allow ourselves to be nudged, recommended, tracked, diagnosed, and predicted by a blurred amalgam of governmental and commercial initiative” (187).

    “The collapse of the classic divide between the state and society, between the public and private sphere, is particularly debilitating and disarming. The reason is that the boundaries of the state had always been imagined in order to limit them” (208). “What is emerging in the place of separate spheres [of government and private industry] is a single behemoth of a data market: a colossal market for personal data” (198). “Knots of statelike power: that is what we face. A tentacular amalgam of public and private institutions … Economy, society, and private life melt into a giant data market for everyone to trade, mine, analyze, and target” (215). “This is all the more troubling because the combinations we face today are so powerful” (210).

    As a consequence, “Digital exposure is restructuring the self … The new digital age … is having profound effects on our analogue selves. … it is radically transforming our subjectivity‒even for those, perhaps even more, who believe they have nothing to fear” (232). “Mortification of the self, in our digital world, happens when subjects voluntarily cede their private attachments and their personal privacy, when they give up their protected personal space, cease monitoring their exposure on the Internet, let go of their personal data, and expose their intimate lives” (233).

    As the book points out, quoting Software Freedom Law Center founder Eben Moglen, it is justifiable to ask whether “any form of democratic self-government, anywhere, is consistent with the kind of massive, pervasive, surveillance into which the United States government has led not only its people but the world” (254). “This is a different form of despotism, one that might take hold only in a democracy: one in which people lose the will to resist and surrender with broken spirit” (255).

    The book opens with an unnumbered chapter that masterfully reminds us of the digital society we live in: a world in which both private companies and government intelligence services (also known as spies) read our e-mails and monitor our web browsing. Just think of “the telltale advertisements popping up on the ribbon of our search screen, reminding us of immediately past Google or Bing queries. We’ve received the betraying e-mails in our spam folders” (2). As the book says, quoting journalist Yasha Levine, social media has become “a massive surveillance operation that intercepts and analyses terabytes of data to build and update complex psychological profiles on hundreds of millions of people all over the world‒all of it in real time” (7). “At practically no cost, the government has complete access to people’s digital selves” (10).

    We provide all this data willingly (13), because we have no choice and/or because we “wish to share our lives with loved ones and friends” (14). We crave digital connections and recognition and “Our digital cravings are matched only by the drive and ambition of those who are watching” (14). “Today, the drive to know everything, everywhere, at every moment is breathtaking” (15).

    But “there remain a number of us who continue to resist. And there are many more who are ambivalent about the loss of privacy or anonymity, who are deeply concerned or hesitant. There are some who anxiously warn us about the dangers and encourage us to maintain reserve” (13).

    “And yet, even when we hesitate or are ambivalent, it seems there is simply no other way to get things done in the new digital age” (14), be it airline tickets, hotel reservations, buying goods, booking entertainment. “We make ourselves virtually transparent for everyone to see, and in so doing, we allow ourselves to be shaped in unprecedented ways, intentionally or unwittingly … we are transformed and shaped into digital subjects” (14). “It’s not so much a question of choice as a feeling of necessity” (19). “For adolescents and young adults especially, it is practically impossible to have a social life, to have friends, to meet up, to go on dates, unless we are negotiating the various forms of social media and mobile technology” (18).

    Most have become dulled by blind faith in markets, the neoliberal mantra (better to let private companies run things than the government), fear of terrorism‒dulled into believing that, if we have nothing to hide, then there is nothing to fear (19). Even though private companies, and governments, know far more about us than a totalitarian regime such as that of East Germany “could ever have dreamed” (20).

    “We face today, in advanced liberal democracies, a radical new form of power in a completely altered landscape of political and social possibilities” (17). “Those who govern, advertise, and police are dealing with a primary resource‒personal data‒that is being handed out for free, given away in abundance, for nothing” (18).

    According to the book, “There is no conspiracy here, nothing untoward.” But the author probably did not have access to Shawn M. Powers and Michael Jablonski’s The Real Cyberwar: The Political Economy of Internet Freedom (2015), published around the same time as Harcourt’s book, which shows that the current situation was actually created, or at least facilitated, by deliberate (open, not secret) actions of the US government, resulting in what the book calls, quoting journalist James Bamford, “a surveillance-industrial empire” (27).

    The observations and conclusions outlined above are meticulously justified, with numerous references, in the numbered chapters of the book. Chapter 1 explains how analogies of the current surveillance regime to Orwell’s 1984 are imperfect because, unlike in Orwell’s imagined world, today most people desire to provide their personal data and do so voluntarily (35). “That is primarily how surveillance works today in liberal democracies: through the simplest desires, curated and recommended to us” (47).

    Chapter 2 explains how the current regime is not really a surveillance state in the classical sense of the term: it is a surveillance society because it is based on the collaboration of government, the private sector, and people themselves (65, 78-79). Some believe that government surveillance can prevent or reduce terrorist attacks (55-56), never mind that it might violate constitutional rights (56-57) or be ineffective, or that terrorist attacks in liberal democracies have resulted in far fewer fatalities than, say, traffic accidents or opioid overdoses.

    Chapter 3 explains how the current regime is not actually an instantiation of Jeremy Bentham’s Panopticon, because we are not surveilled in order to be punished‒on the contrary, we expose ourselves in order to obtain something we want (90), and we don’t necessarily realize the extent to which we are being surveilled (91). As the book puts it, Google strives “to help people get what they want” by collecting and processing as much personal data as possible (103).

    Chapter 4 explains how narcissism drives the willing exposure of personal data (111). “We take pleasure in watching [our friends], ‘following’ them, ‘sharing’ their information‒even while we are, unwittingly, sharing our every keyboard stroke” (114). “We love watching others and stalking their digital traces” (117).

    Yet opacity is the rule for corporations‒as the book says, quoting Frank Pasquale (124-125), “Internet companies collect more and more data on their users but fight regulations that would let those same users exercise some control over the resulting digital dossiers.” In this context, it is worth noting the recent proposals to the World Trade Organization that would go in the direction favored by dominant corporations.

    The book explains in summary fashion the importance of big data (137-140). For an additional discussion, with extensive references, see section 1 of my submission to the Working Group on Enhanced Cooperation. As the book correctly notes, “In the nineteenth century, it was the government that generated data … But now we have all become our own publicists. The production of data has become democratized” (140).

    Chapter 5 explains how big data, and its analysis, is fundamentally different from the statistics that were collected, analyzed, and published in the past by governments. The goal of statistics is to understand and possibly predict the behavior of some group of people who share some characteristics (e.g. they live in a particular geographical area, or are of the same age). The goal of big data is to target and predict individuals (158, 161-163).

    Chapter 6 explains how we have come to accept the loss of privacy and control of our personal data (166-167). A change in outlook, largely driven by an exaggerated faith in free enterprise (168 and 176), “has made it easier to commodify privacy, and, gradually, to eviscerate it” (170). “Privacy has become a form of private property” (176).

    The book documents well the changes in the US Supreme Court’s views of privacy, which have moved from defending a human right to balancing privacy with national security and commercial interests (172-175). Curiously, the book mentions neither the watershed Smith v. Maryland case, in which the US Supreme Court held that telephone metadata is not protected by the right to privacy, nor the US Electronic Communications Privacy Act, under which many e-mails are not protected either.

    The book mentions the incestuous ties between the intelligence community, telecommunications companies, multinational companies, and military leadership that have facilitated the implementation of the current surveillance regime (178); these ties are exposed and explained in greater detail in Powers and Jablonski’s The Real Cyberwar. This chapter ends with an excellent explanation of how digital surveillance records are in no way comparable to the old-fashioned paper files that were collected in the past (181).

    Chapter 7 explores the emerging dystopia, engendered by the fact that “The digital economy has torn down the conventional boundaries between governing, commerce, and private life” (187). In a trend that should be frightening, private companies now exercise censorship (191), practice data mining on scales that are hard to imagine (194), control worker performance by means beyond the dreams of any Taylorist (196), and even aspire to “predict consumer preferences better than consumers themselves can” (198).

    The size of the data brokerage market is huge, and data on individuals is increasingly used to make decisions about them, e.g. whether they can obtain a loan (198-208). “Practically none of these scores [calculated from personal data] are revealed to us, and their accuracy is often haphazard” (205). As noted above, we face an interdependent web of private and public interests that collect, analyze, refine, and exploit our personal data‒without any meaningful supervision or regulation.

    Chapter 8 explains how digital interactions are reconfiguring our self-images, our subjectivity. We know, albeit at times only implicitly, that we are being surveilled and this likely affects the behavior of many (218). Being deprived of privacy affects us, much as would being deprived of property (229). We have voluntarily given up much of our privacy, believing either that we have no choice but to accept surveillance, or that the surveillance is in our interests (233). So it is our society as a whole that has created, and nurtures, the surveillance regime that we live in.

    As shown in Chapter 9, that regime is a form of digital incarceration. We are surveilled even more closely than are people obliged by court order to wear electronic tracking devices (237). Perhaps a future smart watch will even administer sedatives (or whatever) when it detects, by analyzing our body functions and comparing with profiles downloaded from the cloud, that we would be better off being sedated (237). Or perhaps such a watch will be hijacked by malware controlled by an intelligence service or by criminals, thus turning a seemingly free choice into involuntary constraints (243, 247).

    Chapter 10 shows in detail how, as already noted, the current surveillance regime is not compatible with democracy. The book cites Tocqueville to remind us that democracy can become despotic and result in a situation where “people lose the will to resist and surrender with broken spirit” (255). The book summarily presents well-known data regarding the low voter turnouts in the United States, a topic covered in full detail in Robert McChesney’s Digital Disconnect: How Capitalism is Turning the Internet Against Democracy (2014), which explains how the Internet is having a negative effect on democracy. Yet “it remains the case that the digital transparency and punishment issues are largely invisible to democratic theory and practice” (216).

    So, what is to be done? Chapter 11 extols the revelations made by Edward Snowden and those published by Julian Assange (WikiLeaks). It mentions various useful self-help tools, such as “I Fight Surveillance” and “Security in a Box” (270-271). While those tools are useful, they are not at present used pervasively and thus don’t really affect the current surveillance regime. We need more emphasis on making the tools available and on convincing more people to use them.

    As the book correctly says, an effective measure would be to carry the privatization model to its logical extreme (274): since personal data is valuable, those who use it should pay us for it. As already noted, the industry that is thriving on the exploitation of our personal data is well aware of this potential threat, and has worked hard to obtain binding international norms, in the World Trade Organization, that would enshrine the “free flow of data”, where “free” in the sense of freedom of information is used as a Trojan Horse for the real objective, which is “free” in the sense of no cost and no compensation for the true owners of the data, we the people. As the book correctly mentions, civil society organizations have resisted this trend and made proposals that go in the opposite direction (276), including a proposal to enshrine the necessary and proportionate principles in international law.

    Chapter 12 concludes the book by pointing out, albeit very succinctly, that mass resistance is necessary, and that it need not be organized in traditional ways: it can be leaderless, diffuse, and pervasive (281). In this context, I refer to the work of the JustNet Coalition and of the fledgling Internet Social Forum.

    Again, this book is essential reading for anybody who is concerned about the current state of the digital world, and the direction in which it is moving.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2o Review Digital Studies magazine.


  • The Human Condition and The Black Box Society

    The Human Condition and The Black Box Society

    a review of Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015)
    by Nicole Dewandre
    ~

    1. Introduction

    This review is informed by its author’s specific standpoint: first, a lifelong experience in a policy-making environment, i.e. the European Commission; and, second, a passion for the work of Hannah Arendt and the conviction that she has a great deal to offer to politics and policy-making in this emerging hyperconnected era. As advisor for societal issues at DG Connect, the department of the European Commission in charge of ICT policy at EU level, I have had the privilege of convening the Onlife Initiative, which explored the consequences of the changes brought about by the deployment of ICTs on the public space and on the expectations toward policy-making. This collective thought exercise, which took place in 2012-2013, was strongly inspired by Hannah Arendt’s 1958 book The Human Condition.

    This is the background against which I read The Black Box Society: The Secret Algorithms That Control Money and Information by Frank Pasquale (references to which are indicated here parenthetically by page number). Two of the meanings of “black box”—a device that keeps track of everything during a flight, on the one hand, and the node of a system that prevents an observer from identifying the link(s) between input and output, on the other hand—serve as apt metaphors for today’s emerging Big Data environment.

    Pasquale digs deep into three sectors that are at the root of what he calls the black box society: reputation (how we are rated and ranked), search (how we use ratings and rankings to organize the world), and finance (money and its derivatives, whose flows depend crucially on forms of reputation and search). Algorithms and Big Data have permeated these three activities to a point where disconnection from human judgment or control can transmogrify them into blind zombies, opening up new risks, affordances and opportunities. We are far from the ideal representation of algorithms as support for decision-making. In these three areas, decision-making has been taken over by algorithms, and there is no “invisible hand” ensuring that profit-driven corporate strategies will deliver fairness or improve the quality of life.

    The EU and the US contexts are both distinct and similar. In this review, I shall not comment on Pasquale’s specific policy recommendations in detail, even if, as a European, I appreciate the numerous references to European law and policy that Pasquale commends as good practices (ranging from digital competition law, to welfare state provision, to privacy policies). I shall instead comment from a meta-perspective: that of challenging the worldview that implicitly undergirds policy-making on both sides of the Atlantic.

    2. A Meta-perspective on The Black Box Society

    The meta-perspective as I see it is itself twofold: (i) we are stuck with Modern referential frameworks, which hinder our ability to attend to changing human needs, desires and expectations in this emerging hyperconnected era, and (ii) the personification of corporations in policymaking reveals shortcomings in the current representation of agents as interest-led beings.

    a) Game over for Modernity!

    As stated by the Onlife Initiative in its “Onlife Manifesto,” through its expression “Game over for Modernity?“, it is time for politics and policy-making to leave Modernity behind. That does not mean going back to the Middle Ages, as feared by some, but instead stepping firmly into this new era that is coming to us. I believe with Genevieve Bell and Paul Dourish that it is more effective to consider that we are now entering the ubiquitous computing era instead of looking at it as if it were approaching fast.[1] With the miniaturisation of devices and sensors, with mobile access to broadband internet and with the generalized connectivity of objects as well as of people, not only do we witness an expansion of the online world, but, more fundamentally, a collapse of the distinction between the online and the offline worlds, and therefore a radically new socio-technico-natural compound. We live in an environment which is increasingly reactive and talkative as a result of the intricate mix between offline and online universes. Human interactions are also deeply affected by this new socio-technico-natural compound, as they are or will soon be “sticky”, i.e. leave a material trace by default, and this for the first time in history. These new affordances and constraints destabilize profoundly our Modern conceptual frameworks, which rely on distinctions that are blurring, such as the one between the real and the virtual or the ones between humans, artefacts and nature, understood with mental categories dating back to the Enlightenment and before. The very expression “post-Modern” is not accurate anymore, or is too shy, as it continues to position Modernity as its reference point. It is time to give a proper name to this new era we are stepping into, and hyperconnectivity may be such a name.

    Policy-making however continues to rely heavily on Modern conceptual frameworks, and this not only from the policy-makers’ point of view but more widely from all those engaging in the public debate. There are many structuring features of the Modern conceptual frameworks, and it certainly goes beyond this review to address them thoroughly. However, when it comes to addressing the challenges described by The Black Box Society, it is important to mention the epistemological stance that has been spelled out brilliantly by Susan H. Williams in her Truth, Autonomy, and Speech: Feminist Theory and the First Amendment: “the connection forged in Cartesianism between knowledge and power”[2]. Before encountering Susan Williams’s work, I came to refer to this stance less elegantly with the expression “omniscience-omnipotence utopia”[3]. Williams writes that “this epistemological stance has come to be so widely accepted and so much a part of many of our social institutions that it is almost invisible to us” and that “as a result, lawyers and judges operate largely unself-consciously with this epistemology”[4]. To Williams’s “lawyers and judges”, we should add policy-makers and stakeholders. This Cartesian epistemological stance grounds the conviction that the world can be elucidated in causal terms, that knowledge is about prediction and control, and that there is no limit to what men can achieve provided they have the will and the knowledge. In this Modern worldview, men are considered as rational subjects and their freedom is synonymous with control and autonomy. The fact that we have a limited lifetime and attention span is out of the picture, as is the human’s inherent relationality. Issues are framed as if transparency and control are all that men need to make their own way.

    1) One-Way Mirror or Social Hypergravity?

    Frank Pasquale is well aware of, and has contributed to, the emerging critique of transparency, and he states clearly that “transparency is not just an end in itself” (8). However, there are traces of the Modern reliance on transparency as a regulative ideal in The Black Box Society. One of them appears when he mobilizes the one-way mirror metaphor. He writes:

    We do not live in a peaceable kingdom of private walled gardens; the contemporary world more closely resembles a one-way mirror. Important corporate actors have unprecedented knowledge of the minutiae of our daily lives, while we know little to nothing about how they use this knowledge to influence the important decisions that we—and they—make. (9)

    I refrain from considering the Big Data environment as an environment that “makes sense” on its own, provided someone has access to as much data as possible. In other words, the algorithms crawling the data can hardly be compared to a “super-spy” providing the data controller with absolute knowledge.

    Another shortcoming of the one-way mirror metaphor is that the implicit corrective is a transparent pane of glass, so the watched can watch the watchers. This reliance on transparency is misleading. I prefer another metaphor that, in my view, better characterises the Big Data environment in a hyperconnected conceptual framework. As alluded to earlier, in contradistinction to the previous centuries and even millennia, human interactions will, by default, be “sticky”, i.e. leave a trace. Evanescence of interactions, which used to be the default for millennia, will instead require active measures to be ensured. So, my metaphor for capturing the radicality and the scope of this change is a change of “social atmosphere” or “social gravity”, as it were. For centuries, we have slowly developed social skills, behaviors and regulations, i.e. a whole ecosystem, to strike a balance between accountability and freedom, in a world where “verba volant and scripta manent”[5], i.e. where human interactions took place in an “atmosphere” with a 1g “social gravity”, where they were evanescent by default and where action had to be taken to register them. Now, with all interactions leaving a trace by default, and each of us going around with his, her or its digital shadow, we are drifting fast towards an era where the “social atmosphere” will be of heavier gravity, say “10g”. The challenge is huge and will require a lot of collective learning and adaptation to develop the literacy and regulatory frameworks that will recreate and sustain the balance between accountability and freedom for all agents, humans and corporations.

    The heaviness of this new data density stands in-between, or is orthogonal to, the two phantasms of the bright emancipatory promises of Big Data, on the one hand, and the frightening fears of Big Brother, on the other hand. Because of this social hypergravity, we, individually and collectively, have indeed to be cautious about the use of Big Data, as we have to be cautious when handling dangerous or unknown substances. This heavier atmosphere, as it were, opens onto increased possibilities of hurting others, notably through harassment, bullying and false rumors. The advent of Big Data does not, by itself, provide a “license to fool”, nor does it free agents from the need to behave and avoid harming others. Exploiting asymmetries and new affordances to fool or to hurt others is no more acceptable behavior than it was before the advent of Big Data. Hence, although from a different metaphorical standpoint, I support Pasquale’s recommendations to pay increased attention to the new ways in which current and emergent practices relying on algorithms in reputation, search and finance may be harmful, misleading or deceptive.

    2) The Politics of Transparency or the Exhaustive Labor of Watchdogging?

    Another “leftover” of the Modern conceptual framework that surfaces in The Black Box Society is the reliance on watchdogging for ensuring proper behavior by corporate agents. Relying on watchdogging nurtures the idea that it is all right to behave badly, as long as one is not seen doing so. This reinforces the idea that the qualification of an act depends on whether it is unveiled, as if, as long as it goes unnoticed, it were all right. This puts the entire burden on the watchers and no burden whatsoever on the doers. It positions a sort of symbolic face-to-face between supposedly mindless firms, who are enabled to pursue their careless strategies as long as they are not put under the light, and people, who are expected to spend all their time, attention and energy raising indignation against wrong behaviors. Far from empowering the watchers, this framing enslaves them to waste time monitoring actors who should be acting in much better ways already. Indeed, if unacceptable behavior is unveiled, it raises outrage, but outrage is far from bringing a solution per se. If, instead, proper behaviors are witnessed, then the watchers are bound to praise the doers. In both cases, watchers are stuck in a passive, reactive and specular posture, while all the glory or the shame is on the side of the doers. I don’t deny the need to have watchers, but I warn against the temptation of relying excessively on the divide between doers and watchers to police behaviors, without engaging collectively in the formulation of what proper and inappropriate behaviors are. And there is no ready-made consensus about this, so it requires informed exchange of views and hard collective work. As Pasquale explains in an interview where he defends interpretative approaches to social sciences against quantitative ones:

    Interpretive social scientists try to explain events as a text to be clarified, debated, argued about. They do not aspire to model our understanding of people on our understanding of atoms or molecules. The human sciences are not natural sciences. Critical moral questions can’t be settled via quantification, however refined “cost benefit analysis” and other political calculi become. Sometimes the best interpretive social science leads not to consensus, but to ever sharper disagreement about the nature of the phenomena it describes and evaluates. That’s a feature, not a bug, of the method: rather than trying to bury normative differences in jargon, it surfaces them.

    The excessive reliance on watchdogging enslaves the citizenry to serve as mere “watchdogs” of corporations and government, and prevents any constructive cooperation with corporations and governments. It drains citizens’ energy for pursuing their own goals and making their own positive contributions to the world, notably by engaging in the collective work required to outline, nurture and maintain what counts as appropriate behaviours.

    As a matter of fact, watchdogging would be nothing more than an exhausting laboring activity.

    b) The Personification of Corporations

    One of the red threads unifying The Black Box Society’s treatment of numerous technical subjects is unveiling the oddness of the comparative postures and status of corporations, on the one hand, and people, on the other hand. As nicely put by Pasquale, “corporate secrecy expands as the privacy of human beings contracts” (26), and, in the meantime, the divide between government and business is narrowing (206). Pasquale also points to the fact that at least since 2001, people have been routinely scrutinized by public agencies to deter the threatening ones from hurting others, while the threats caused by corporate wrongdoings in 2008 gave rise to much less attention and effort to hold corporations to account. He also notes that “at present, corporations and government have united to focus on the citizenry. But why not set government (and its contractors) to work on corporate wrongdoings?” (183) It is my view that these oddnesses go along with what I would call a “sensitive inversion”. Corporations, which are functional beings, are granted sensitivity in policy-making imaginaries and narratives, as if they were human beings, while men and women, who are sensitive beings, are approached in policy-making as if they were functional beings, i.e. consumers, job-holders, investors, bearers of fundamental rights, but never personae per se. The granting of sensitivity to corporations goes beyond the legal aspect of their personhood. It entails that corporations are the ones whose so-called needs are taken care of by policy-makers, and the ones who are really addressed, qua personae. Policies are designed with business needs in mind, to foster their competitiveness or their “fitness”. People are only indirect or secondary beneficiaries of these policies.

The inversion of sensitivity might not be a problem per se, if it led pragmatically to an effective way of designing and implementing policies that in the end had positive effects for men and women. But Pasquale provides ample evidence that this is not the case, at least in the three sectors he has looked at most closely, and certainly not in finance.

Pasquale’s critique of the hypostatization of corporations and the reduction of humans has many theoretical antecedents. Looking at it from the perspective of Hannah Arendt’s The Human Condition illuminates the shortcomings and risks of treating corporations as agents in the public space, and helps us understand the consequences of granting them sensitivity or, as it were, human rights. Action is the activity that flows from the fact that men and women are plural and interact with each other: “the human condition of action is plurality”.[6] Plurality is itself a ternary concept made of equality, uniqueness and relationality. First, equality is what we grant to each other when entering into a political relationship. Second, uniqueness refers to the fact that what makes each human a human qua human is precisely that who s/he is is unique. If we treat other humans as interchangeable entities or as characterised by their attributes or qualities, i.e., as a what, we do not treat them as human qua human, but as objects. Last and by no means least, the third component of plurality is the relational and dynamic nature of identity. For Arendt, the disclosure of the who “can almost never be achieved as a wilful purpose, as though one possessed and could dispose of this ‘who’ in the same manner he has and can dispose of his qualities”[7]. The who appears unmistakably to others, but remains somewhat hidden from the self. It is this relational and revelatory character of identity that confers on speech and action their critical role, and that links action with identity and freedom. Indeed, for entities whose who is partly out of reach and yet matters, appearing before others, notably through speech and action, is a necessary condition of revealing that identity:

    Action and speech are so closely related because the primordial and specifically human act must at the same time contain the answer to the question asked of every newcomer: who are you? In acting and speaking, men show who they are, they appear. Revelatory quality of speech and action comes to the fore where people are with others and neither for, nor against them, that is in sheer togetherness.[8]

    So, in this sense, the public space is the arena where whos appear to other whos, personae to other personae.

For Arendt, the essence of politics is freedom, and it is grounded in action, not in labour and work. The public space is where agents coexist and experience their plurality, i.e. the fact that they are equal, unique and relational. It is thus much more than the usual American pluralist (i.e., early Dahl-ian) conception of a space where agents worry exclusively about their own needs by bargaining aggressively. In Arendt’s perspective, the public space is where agents, self-aware of their plural character, interact with each other once their basic needs have been taken care of in the private sphere. As highlighted by Seyla Benhabib in The Reluctant Modernism of Hannah Arendt, “we not only owe to Hannah Arendt’s political philosophy the recovery of the public as a central category for all democratic-liberal politics; we are also indebted to her for the insight that the public and the private are interdependent”.[9] One could not appear in public if s/he or it did not also have a private place, notably to attend to his, her or its basic needs for existence. In Arendtian terms, interactions in the public space take place between agents who are beyond their satiety threshold. Acknowledging satiety is a precondition for engaging with others in a way that is driven not by one’s own interest, but by the desire to act together with others—”in sheer togetherness”—and to be acknowledged as who one is. If an agent perceives him-, her- or itself, and behaves, only as a profit-maximiser or an interest-led being, i.e. if s/he or it has no sense of satiety and no self-awareness of the relational and revelatory character of his, her or its identity, then s/he or it cannot be a “who” or an agent in political terms, and therefore cannot answer for him-, her- or itself. Such an agent simply does not deserve, and therefore should not be granted, the status of a persona in the public space.

It is easy to imagine that there can indeed be no freedom below satiety, and that “sheer togetherness” would be impossible among agents below their satiety level, or deprived of one. This is, however, the situation we are in, symbolically, when we grant corporations the status of persona while considering it efficient and appropriate that they care only for profit-maximisation. For a business, making profit is a condition of staying alive, as eating is for humans. However, in the name of the need to compete on global markets, to foster growth and to provide jobs, policy-makers embrace and legitimize an approach to businesses as profit-maximisers, despite the fact that this is a reductionist caricature of what company law actually allows[10]. So the condition for businesses to deserve the status of persona in the public space is, no less than for men and women, to attend to their whoness and honour their identity, by refraining from behaving according to their narrowly defined interests. It means also caring for the world as much as, if not more than, for themselves.

This resonates meaningfully with the quotation from Heraclitus that serves as the epigraph for The Black Box Society: “There is one world in common for those who are awake, but when men are asleep each turns away into a world of his own”. Reading Arendt through Heraclitus’s categories of sleep and wakefulness, one might consider that totalitarianism arises—or is not far away—when human beings are awake in private, but asleep in public, in the sense that they silence their humanness, or their humanness is silenced by others, when appearing in public. In this perspective, the merging of markets and politics—as highlighted by Pasquale—could be seen as a generalized sleep in the public space of human beings and corporations, qua personae, while all awakened activities take place in the private sphere, driven exclusively by needs and interests.

In other words, some might find a book like The Black Box Society, which offers a bold reform agenda for numerous agencies, too idealistic. In my view, it falls short of being idealistic enough: the proposals in the book lack a normative core, which can be supplied by democratic, political, and particularly Arendtian theory. If a populace cannot accept any level of goods and services as satiating its needs, and if it distorts the revelatory character of identity into an endless pursuit of limitless growth, it cannot have the proper lens and approach to formulate what it takes to enable the fairness and fair play described in The Black Box Society.

    3. Stepping into Hyperconnectivity

    1) Agents as Relational Selves

A central feature of the Modern conceptual framework underlying policymaking is the figure of the rational subject as the political proxy of humanness. I claim that this figure is no longer effective in ensuring a fair and flourishing life for men and women in the emerging hyperconnected era, and that we should adopt instead the figure of the “relational self”, as it emerges from the Arendtian concept of plurality.

The concept of the rational subject was forged to erect Man over nature. Nowadays, the problem is not so much to distinguish men from nature, but rather to distinguish men—and women—from artefacts. Robots come close to humans and even outperform them, if we continue to define humans as rational subjects. The figure of the rational subject is torn between “truncated gods”—when Reason is considered as what eventually brings overall lucidity—on the one hand, and “smart artefacts”—when reason is nothing more than logical steps or algorithms—on the other. Men and women are neither “Deep Blue” nor mere automatons. Between these two phantasms, the humanness of men and women is smashed. This is indeed what happens in the Kafkaesque and ridiculous situations where a thoughtless and mindless approach to Big Data is implemented, and it affects us in both stances, as workers and as consumers. As far as the working environment is concerned, “call centers are the ultimate embodiment of the panoptic workspace. There, workers are monitored all the time” (35). This type of overtly monitored working environment is nothing less than a materialisation of the panopticon. As consumers, we all see what Pasquale means when he writes that “far more [of us] don’t even try to engage, given the demoralizing experience of interacting with cyborgish amalgams of drop-down menus, phone trees, and call center staff”. In fact, this mindless use of automation is only the latest version of the way we have been thinking for decades, i.e. that progress means rationalisation and de-humanisation across the board. The real culprits are not algorithms themselves, but the careless and automaton-like human implementers and managers who act according to a conceptual framework in which rationalisation and control are all that matter. More than the technologies, it is the belief that management is about control and monitoring that makes these environments properly in-human. Staying stuck with the rational subject as a proxy for humanness thus ends up smashing our humanness as workers and consumers and, at best, leads to absurd situations where to be free would mean spending all our time checking that we are not controlled.

As a result, keeping the rational subject as the central representation of humanness will be increasingly misleading, politically speaking. It fails to provide a compass for treating each other fairly and for making decisions and judgments that impact human lives positively and meaningfully.

With her concept of plurality, Arendt offers an alternative to the rational subject for defining humanness: the relational self. The relational self, as it emerges from the Arendtian concept of plurality[11], is the man, woman or agent self-aware of his, her or its plurality, i.e. of the facts that (i) he, she or it is equal to his, her or its fellows; (ii) she, he or it is unique, as all other fellows are unique; and (iii) his, her or its identity has a revelatory character, requiring him, her or it to appear among others in order to reveal itself through speech and action. This figure of the relational self accounts for what is essential to protect politically in our humanness in a hyperconnected era, i.e. that we are truly interdependent through the mutual recognition we grant to each other, and that our humanity is grounded precisely in that mutual recognition, much more than in any “objective” difference or criterion that would allow an expert system to sort human from non-human entities.

The relational self, as arising from Arendt’s plurality, combines relationality and freedom. It resonates deeply with the vision proposed by Susan H. Williams, i.e. the relational model of truth and the narrative model of autonomy, which overcome the shortcomings of the Cartesian and liberal approaches to truth and autonomy without throwing the baby, i.e. the notion of agency and responsibility, out with the bathwater, as the social constructionist and feminist critiques of truth and autonomy may be understood as doing.[12]

Adopting the relational self, rather than the rational subject, as the canonical figure of humanness brings to light the direct relationship between the quality of interactions and the quality of life. In contradistinction to transparency and control, which are meant to empower non-relational individuals, relational selves are aware that what they need from others is respect and fair treatment. This figure also makes room for vulnerability, notably the vulnerability of our attentional spheres, and for saturation, i.e. the fact that we have a limited attention span and are far from making a “free choice” when clicking on “I have read and accept the Terms & Conditions”. Instead of transparency and control as policy ends in themselves, the quality of life of relational selves, and the robustness of the world they construct together and that lies between them, depend critically on being treated fairly and not being fooled.

It is interesting to note that the word “trust” blooms in policy documents, showing that consciousness of the fact that we rely on each other is building up. Referring to trust as if it needed to be built is, however, a signature of the fact that we are in transition from Modernity to hyperconnectivity, and not yet fully arrived. By approaching trust as something that can be materialized, we look at it with Modern eyes. As “consent is the universal solvent” (35) of control, transparency-and-control is the universal solvent of trust. Indeed, we know that transparency and control nurture suspicion and distrust. And that is precisely why they have been adopted as Modern regulatory ideals. Arendt writes: “After this deception [that we were fooled by our senses], suspicions began to haunt Modern man from all sides”[13]. So Modern conceptual frameworks rely heavily on suspicion, as a sort of transposition into the realm of human affairs of the systematic-doubt approach to scientific enquiry. Frank Pasquale quotes the moral philosopher Iris Murdoch: “Man is a creature who makes pictures of himself and then comes to resemble the picture” (89). If she is right—and I am afraid she is—it is of utmost importance to shift away from picturing ourselves as rational subjects and to embrace instead the figure of the relational self, if only to preserve trust as a general baseline in human affairs. If it came true that trust could only be the outcome of generalized suspicion, then indeed we would be lost.

Besides grounding the notion of the relational self, the Arendtian concept of plurality accounts for interactions, among humans and other plural agents, that go beyond fulfilling basic needs (necessity) or achieving goals (instrumentality), and that reveal the agents’ identities while giving rise to unpredictable outcomes. As such, plurality enriches the basket of representations of interactions available to policy making. It brings, as it were, a post-Modern, or dare I say hyperconnected, view of interactions. The Modern conceptual basket of representations of interactions includes, as its central piece, causality. In Modern terms, the notion of equilibrium is approached through a mutual neutralization of forces, whether via the invisible-hand metaphor or via Montesquieu’s division of powers. The Modern approach to interactions is anchored either in the representation of one pole being active or dominating (the subject) and the other pole being inert or dominated (nature, object, servant), or in the notion of conflicting interests or dilemmas. In this framework, the notion of equality is straitjacketed and cannot be embodied. As we have seen, this Modern straitjacket leads to approaching freedom through control and autonomy, constrained by the fact that Man is, unfortunately, not alone. Hence, in the Modern approach to humanness and freedom, plurality is a constraint, not a condition, while for relational selves, freedom is grounded in plurality.

    2) From Watchdogging to Accountability and Intelligibility

If the quest for transparency and control is as illusory and worthless for relational selves as it was instrumental for rational subjects, this does not mean that anything goes. Interactions among plural agents can only take place satisfactorily if basic and important conditions are met. Relational selves are in high need of fairness towards themselves and accountability from others. Deception and humiliation[14] should certainly be avoided, as basic conditions enabling decency in the public space.

Once equipped with this concept of the relational self as the canonical figure of political agents, be they men, women, corporations or even States in a hyperconnected era, one can see clearly why the recommendations Pasquale offers in his final two chapters, “Watching (and Improving) the Watchers” and “Towards an Intelligible Society,” are so important. If watchdogging the watchers has been criticized earlier in this review as an exhausting laboring activity that does not deliver on accountability, improving the watchers goes beyond watchdogging and strives for greater accountability. As for intelligibility, I think it is indeed much more meaningful and relevant than transparency.

Pasquale invites us to think carefully about regimes of disclosure, along three dimensions: depth, scope and timing. He calls for fair data practices, which could be enhanced by establishing forms of supervision of the kind already in place for checking research practices involving human subjects. He suggests that each person is entitled to an explanation of the rationale for decisions concerning them, and should have the ability to challenge those decisions. He recommends immutable audit logs for holding spying activities to account. He calls for regulatory measures to compensate for the market failures arising from the fact that dominant platforms are natural monopolies. Given the importance of reputation and ranking and the dominance of Google, he argues that the First Amendment cannot be mobilized as a wild card absolving internet giants of accountability. He calls for a “CIA for finance” and a “Corporate NSA,” believing governments should devote more effort to pursuing corporate wrongdoing. And he argues that the approach taken in health fraud enforcement could bear fruit in finance, search and reputation.

What I appreciate in Pasquale’s call for intelligibility is that it calibrates to the needs of relational selves: to interact with each other, to make sound decisions and to orient themselves in the world. Intelligibility is different from omniscience-omnipotence. It is about making sense of the world, while keeping in mind that there are different ways to do so. Intelligibility connects relational selves to the world surrounding them and allows them to act with others and to move around. In the last chapter, Pasquale mentions the importance of restoring trust and the need to nurture a public space in the hyperconnected era. He calls for an endgame to the Black Box. I agree with him that conscious deception inherently dissolves plurality and the common world, and needs to be strongly combatted, but I think that much of what takes place today goes beyond that, into genuinely new and uncharted territory for humankind. With plurality, we can also embrace contingency in a less dramatic way than we used to in the Modern era. Contingency is a positive approach to un-certainty. It accounts for the openness of the future. The very word un-certainty is built in such a manner that certainty is considered the ideal outcome.

    4. WWW, or Welcome to the World of Women or a World Welcoming Women[15]

To some extent, the fears of men in a hyperconnected era reflect all-too-familiar experiences of women. Being objects of surveillance and control, laboring exhaustingly without rewards, being lost through the holes of the meritocracy net, being constrained to a specular posture toward others’ deeds: all these stances have been the fate of women’s lives for centuries, if not millennia. What men now fear from the State or from “Big (br)Other”, women have experienced with men. So, welcome to the world of women….

But this situation may be looked at more optimistically, as an opportunity for women’s voices and thoughts to go mainstream and be listened to. Now that equality between women and men is enshrined in the political and legal systems of the EU and the US, women have been concretely admitted to the status of “rational subject”, but that does not dissolve its masculine origin, nor the oddness or uneasiness women feel in embracing this figure. Indeed, it was forged by men with men in mind; women, for those men, were indexed on nature. Mainstreaming the figure of the relational self, born in the mind of Arendt, will be much more inspiring and empowering for women than the rational subject ever was. In fact, it enhances their agency and the performativity of their thoughts and theories. So, are we heading towards a world welcoming women?

In conclusion, the advent of Big Data can be looked at in two ways. The first is to see it as the endpoint of the materialisation of all the promises and fears of Modern times. The second is to see it as a wake-up call for a new beginning; by making obvious the absurdity, and the price, of following Modern conceptual frameworks all the way down to their consequences, it calls for thinking on new grounds about how to make sense of the human condition and make it thrive. The former makes humans redundant, is self-fulfilling and does not deserve human attention and energy. Without any hesitation, I opt for the latter, i.e. the wake-up call and the new beginning.

    Let’s engage in this hyperconnected era bearing in mind Virginia Woolf’s “Think we must”[16] and, thereby, shape and honour the human condition in the 21st century.
    _____

Nicole Dewandre has academic degrees in engineering, economics and philosophy. She has been a civil servant in the European Commission since 1983. She was advisor to the President of the Commission, Jacques Delors, between 1986 and 1993. She then worked in EU research policy, promoting gender equality, partnership with civil society, and sustainability. Since 2011, she has worked on the societal issues related to the deployment of ICT technologies. She has published widely on organizational and political issues relating to ICTs.

    The views expressed in this article are the sole responsibility of the author and in no way represent the view of the European Commission and its services.

    Back to the essay
    _____

Acknowledgments: This review has been made possible by the Faculty of Law of the University of Maryland in Baltimore, which hosted me as a visiting fellow for the month of September 2015. I am most grateful to Frank Pasquale, first for having written this book, but also for engaging with me so patiently over the month of September and paying so much attention to my arguments, even suggesting in some instances the best way of making my points when I diverged from his views. I would also like to thank Jerome Kohn, director of the Hannah Arendt Center at the New School for Social Research, for his encouragement in pursuing the mobilisation of Hannah Arendt’s legacy in my professional environment. I am also indebted, notably for the conclusion, to the inspiring conversations I have had with Shauna Dillavou, executive director of CommunityRED, and Soraya Chemaly, Washington-based feminist writer, critic and activist. Last, and surely not least, I would like to thank David Golumbia for welcoming this piece in his journal and for the care he has put into editing this text, written by a non-native English speaker.

[1] This change of perspective has the interesting side effect of pulling the carpet from under the feet of those “addicted to speed”. Pasquale is right when he points to this addiction (195) as one of the reasons “why so little is being done” to address the challenges arising from the hyperconnected era.

    [2] Williams, Truth, Autonomy, and Speech, New York: New York University Press, 2004 (35).

    [3] See, e.g., Nicole Dewandre, ‘Rethinking the Human Condition in a Hyperconnected Era: Why Freedom Is Not About Sovereignty But About Beginnings’, in The Onlife Manifesto, ed. Luciano Floridi, Springer International Publishing, 2015 (195–215).

[4] Williams, Truth, Autonomy, and Speech (32).

[5] Literally: “spoken words fly; written ones remain”.

    [6] Apart from action, Arendt distinguishes two other fundamental human activities that together with action account for the vita activa. These two other activities are labour and work. Labour is the activity that men and women engage in to stay alive, as organic beings: “the human condition of labour is life itself”. Labour is totally pervaded by necessity and processes. Work is the type of activity men and women engage with to produce objects and inhabit the world: “the human condition of work is worldliness”. Work is pervaded by a means-to-end logic or an instrumental rationale.

    [7] Arendt, The Human Condition, 1958; reissued, University of Chicago Press, 1998 (159).

    [8] Arendt, The Human Condition (160).

    [9] Seyla Benhabib, The Reluctant Modernism of Hannah Arendt, Revised edition, Lanham, MD: Rowman & Littlefield Publishers, 2003, (211).

    [10] See notably the work of Lynn Stout and the Frank Bold Foundation’s project on the purpose of corporations.

[11] This expression was introduced in the Onlife Initiative by Charles Ess, but in a different perspective. Ess’s relational self is grounded in pre-Modern and Eastern/oriental societies. He writes: “In “Western” societies, the affordances of what McLuhan and others call “electric media,” including contemporary ICTs, appear to foster a shift from the Modern Western emphases on the self as primarily rational, individual, and thereby an ethically autonomous moral agent towards greater (and classically “Eastern” and pre-Modern) emphases on the self as primarily emotive, and relational—i.e., as constituted exclusively in terms of one’s multiple relationships, beginning with the family and extending through the larger society and (super)natural orders”. Ess, in Floridi, ed., The Onlife Manifesto (98).

    [12] Williams, Truth, Autonomy, and Speech.

    [13] Hannah Arendt and Jerome Kohn, Between Past and Future, Revised edition, New York: Penguin Classics, 2006 (55).

    [14] See Richard Rorty, Contingency, Irony, and Solidarity, New York: Cambridge University Press, 1989.

    [15] I thank Shauna Dillavou for suggesting these alternate meanings for “WWW.”

    [16] Virginia Woolf, Three Guineas, New York: Harvest, 1966.

  • Frank Pasquale — To Replace or Respect: Futurology as if People Mattered


    a review of Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (W.W. Norton, 2014)

    by Frank Pasquale

    ~

    Business futurism is a grim discipline. Workers must either adapt to the new economic realities, or be replaced by software. There is a “race between education and technology,” as two of Harvard’s most liberal economists insist. Managers should replace labor with machines that require neither breaks nor sick leave. Superstar talents can win outsize rewards in the new digital economy, as they now enjoy global reach, but they will replace thousands or millions of also-rans. Whatever can be automated, will be, as competitive pressures make fairly paid labor a luxury.

Thankfully, Erik Brynjolfsson and Andrew McAfee’s The Second Machine Age (2MA) downplays these zero-sum tropes. Brynjolfsson & McAfee (B&M) argue that the question of how the gains from automation are distributed is just as important as the competitions for dominance it accelerates. 2MA invites readers to consider how societies will decide what type of bounty from automation they want, and what is wanted first. The standard, supposedly neutral economic response (“whatever the people demand, via consumer sovereignty”) is unconvincing. As inequality accelerates, the top 5% (of income earners) do 35% of the consumption. The top 1% is responsible for an even more disproportionate share of investment. Its richest members can just as easily decide to accelerate the automation of the wealth defense industry as they can allocate money to robotic construction, transportation, or mining.

    A humane agenda for automation would prioritize innovations that complement (jobs that ought to be) fulfilling vocations, and substitute machines for dangerous or degrading work. Robotic meat-cutters make sense; robot day care is something to be far more cautious about. Most importantly, retarding automation that controls, stigmatizes, and cheats innocent people, or sets up arms races with zero productive gains, should be a much bigger part of public discussions of the role of machines and software in ordering human affairs.

    2MA may set the stage for such a human-centered automation agenda. Its diagnosis of the problem of rapid automation (described in Part I below) is compelling. Its normative principles (II) are eclectic and often humane. But its policy vision (III) is not up to the challenge of channeling and sequencing automation. This review offers an alternative, while acknowledging the prescience and insight of B&M’s work.

    I. Automation’s Discontents

    For B&M, the acceleration of automation ranks with the development of agriculture, or the industrial revolution, as one of the “big stories” of human history (10-12). They offer an account of the “bounty and spread” to come from automation. “Bounty” refers to the increasing “volume, variety, and velocity” of any imaginable service or good, thanks to its digital reproduction or simulation (via, say, 3-D printing or robots). “Spread” is “ever-bigger differences among people in economic success” that they believe to be just as much an “economic consequence” of automation as bounty.[1]

    2MA briskly describes various human workers recently replaced by computers.  The poor souls who once penned corporate earnings reports for newspapers? Some are now replaced by Narrative Science, which seamlessly integrates new data into ready-made templates (35). Concierges should watch out for Siri (65). Forecasters of all kinds (weather, home sales, stock prices) are being shoved aside by the verdicts of “big data” (68). “Quirky,” a startup, raised $90 million by splitting the work of making products between a “crowd” that “votes on submissions, conducts research, suggest improvements, names and brands products, and drives sales” (87), and Quirky itself, which “handles engineering, manufacturing, and distribution.” 3D printing might even disintermediate firms like Quirky (36).

    In short, 2MA presents a kaleidoscope of automation realities and opportunities. B&M skillfully describe the many ways automation both increases the “size of the pie,” economically, and concentrates the resulting bounty among the talented, the lucky, and the ruthless. B&M emphasize that automation is creeping up the value chain, potentially substituting machines for workers paid better than the average.

What’s missing from the book is the new wave of conflicts that would arise if those at the very top of the value chain (or, less charitably, the rent and tribute chain) were to be replaced by robots and algorithms. When BART workers went on strike, Silicon Valley worthies threatened to replace them with robots. But one could just as easily call for the venture capitalists to be replaced with algorithms. Indeed, one venture capital firm added an algorithm to its board in 2013. Travis Kalanick, the CEO of Uber, responded to a question on driver wage demands by bringing up the prospect of robotic drivers. But given Uber’s multiple legal and PR failures in 2014, a robot probably would have done a better job running the company than Kalanick.

That’s not “crazy talk” of communistic visions along the lines of Marx’s “expropriate the expropriators,” or Chile’s failed Cybersyn.[2] Thiel Fellow and computer programming prodigy Vitalik Buterin has stated that automation of the top management functions at firms like Uber and AirBnB would be “trivially easy.”[3] Automating the automators may sound like a fantasy, but it is a natural outgrowth of mantras (e.g., “maximize shareholder value”) that are commonplaces among the corporate elite. To attract and retain the support of investors, a firm must obtain certain results, and the short-run paths to attaining them (such as cutting wages, or financial engineering) are increasingly narrow. And in today’s investment environment of rampant short-termism, the short is often the only term there is.

In the long run, a secure firm can tolerate experiments. Little wonder, then, that the largest firm at the cutting edge of automation—Google—has a secure near-monopoly in search advertising in numerous markets. As Peter Thiel points out in his recent Zero to One, today’s capitalism rewards the best monopolist, not the best competitor. Indeed, even the Department of Justice’s Antitrust Division appeared to agree with Thiel in its 1995 guidelines on antitrust enforcement in innovation markets. It viewed intellectual property as a good monopoly, the rightful reward to innovators for developing a uniquely effective process or product. And its partner in federal antitrust enforcement, the Federal Trade Commission, has been remarkably quiescent in response to emerging data monopolies.

    II. Propertizing Data

For B&M, intellectual property—or, at least, the returns accruing to intellectual insight or labor—plays a critical role in legitimating the inequalities arising out of advanced technologies. They argue that “in the future, ideas will be the real scarce inputs in the world—scarcer than both labor and capital—and the few who provide good ideas will reap huge rewards.”[4] But many of the leading examples of profitable automation are not “ideas” per se, or even particularly ingenious algorithms. They are brute-force feats of pattern recognition: for example, Google studies past patterns of clicks to personalize the search results and ads that delight and persuade each of its hundreds of millions of users. The critical advantage there is the data, not the skill in working with it.[5] Google will demur, but if it were really confident in its algorithmic prowess, it would license the data to other firms, certain that none could best it. It doesn’t, because the data is its critical, self-reinforcing advantage. It is a commonplace in the big data literature that the more data one has, the more valuable any piece of it becomes—something Googlers would agree with, as long as antitrust authorities aren’t within earshot.

    As sensors become more powerful and ubiquitous, feats of automated service provision and manufacture become more easily imaginable.  The Baxter robot, for example, merely needs to have a trainer show it how to move in order to ape the trainer’s own job. (One is reminded of the stories of US workers flying to India to train their replacements how to do their job, back in the day when outsourcing was the threat du jour to U.S. living standards.)

[Image: How to train a Baxter robot. Source: Inc.]

From direct physical interaction with a robot, it is a short step to, say, holographic or data-driven programming. For example, a surveillance camera trained on a worker could, over a period of days, months, or years, record every movement or statement the worker makes, and replicate them in response to whatever stimuli prompted the originals.

B&M appear to assume that such data will be owned by the corporations that monitor their own workers. For example, McDonald’s could train a camera on every cook and cashier, then download the contents into robotic replicas. But it’s just as easy to imagine a legal regime where, say, the data describing workers’ movements would be their own property, and firms would need to negotiate to purchase the rights to it. If dance movements can be copyrighted, so too can the sweeps and wipes of a janitor. Consider, too, that the extraordinary advances in translation accomplished by programs like Google Translate are based in part on human translations of United Nations documents released into the public domain.[6] Had the translators’ work not been covered by “work-made-for-hire” or similar doctrines, they might well have kept their copyrights, and shared in the bounty now enjoyed by Google.[7]

    Of course, the creativity of translation may be greater than that displayed by a janitor or cashier. Copyright purists might thus reason that the merger doctrine denies copyrightability to the one best way (or small suite of ways) of doing something, since the idea of the movement and its expression cannot be separated. Grant that, and one could still imagine privacy laws giving workers the right to negotiate over how, and how pervasively, they are watched. There are myriad legal regimes governing, in minute detail, how information flows and who has control over it.

    I do not mean to appropriate here Jaron Lanier’s ideas about micropayments, promising as they may be in areas like music or journalism. A CEO could find some critical mass of stockers or cooks or cashiers to mimic even if those at 99% of stores demanded royalties for the work (of) being watched. But the flexibility of legal regimes of credit, control, and compensation is under-recognized. Living in a world where employers can simply record everything their employees do, or Google can simply copy every website that fails to adopt “robots.txt” protection, is not inevitable. Indeed, according to renowned intellectual property scholar Oren Bracha, Google had to “stand copyright on its head” to win that default.[8]

    Thus B&M are wise to acknowledge the contestability of value in the contemporary economy.  For example, they build on the work of MIT economists Daron Acemoglu and David Autor to demonstrate that “skill biased technical change” is a misleading moniker for trends in wage levels.  The “tasks that machines can do better than humans” are not always “low-skill” ones (139). There is a fair amount of play in the joints in the sequencing of automation: sometimes highly skilled workers get replaced before those with a less complex and difficult-to-learn repertoire of abilities.  B&M also show that the bounty predictably achieved via automation could compensate the “losers” (of jobs or other functions in society) in the transition to a more fully computerized society. By seriously considering the possibility of a basic income (232), they evince a moral sensibility light years ahead of the “devil-take-the-hindmost” school of cyberlibertarianism.

    III. Proposals for Reform

Unfortunately, some of B&M’s other ideas for addressing the possibility of mass unemployment in the wake of automation are less than convincing. They praise platforms like Lyft for providing new opportunities for work (244), perhaps forgetting that, earlier in the book, they described the imminent arrival of the self-driving car (14-15). Of course, one can imagine decades of tiered driving, where the wealthy get self-driving cars first, and the car-less masses turn to the scrambling drivers of Uber and Lyft to catch rides. But such a future seems more likely to end in a deflationary spiral than in sustainable growth and an equitable distribution of purchasing power. Like the generation traumatized by the Great Depression, millions subjected to reverse auctions for their labor power, forced to price themselves ever lower to beat back the bids of the technologically unemployed, are not going to be in a mood to spend. Learned helplessness, retrenchment, and miserliness are just as likely a consequence as buoyant “re-skilling” and self-reinvention.

    Thus B&M’s optimism about what they call the “peer economy” of platform-arranged production is unconvincing.  A premier platform of digital labor matching—Amazon’s Mechanical Turk—has occasionally driven down the wage for “human intelligence tasks” to a penny each. Scholars like Trebor Scholz and Miriam Cherry have discussed the sociological and legal implications of platforms that try to disclaim all responsibility for labor law or other regulations. Lilly Irani’s important review of 2MA shows just how corrosive platform capitalism has become. “With workers hidden in the technology, programmers can treat [them] like bits of code and continue to think of themselves as builders, not managers,” she observes in a cutting aside on the self-image of many “maker” enthusiasts.

The “sharing economy” is a glidepath to precarity, accelerating the same fate for labor in general as “music sharing services” sealed for most musicians. The lived experience of many “TaskRabbits,” whom B&M boast about using to make charts for their book, cautions against reliance on disintermediation as a key to opportunity in the new digital economy. Sarah Kessler describes making $1.94 an hour labeling images for a researcher who put the task up for bid on Mturk. The median active TaskRabbit in her neighborhood made $120 a week; Kessler cleared $11 an hour on her best day.

    Resistance is building, and may create fairer terms online.  For example, Irani has helped develop a “Turkopticon” to help Turkers rate and rank employers on the site. Both Scholz and Mike Konczal have proposed worker cooperatives as feasible alternatives to Uber, offering drivers both a fairer share of revenues, and more say in their conditions of work. But for now, the peer economy, as organized by Silicon Valley and start-ups, is not an encouraging alternative to traditional employment. It may, in fact, be worse.

    Therefore, I hope B&M are serious when they say “Wild Ideas [are] Welcomed” (245), and mention the following:

    • Provide vouchers for basic necessities. . . .
    • Create a national mutual fund distributing the ownership of capital widely and perhaps inalienably, providing a dividend stream to all citizens and assuring the capital returns do not become too highly concentrated.
    • Depression-era Civilian Conservation Corps to clean up the environment, build infrastructure.

    Speaking of the non-automatable, we could add the Works Progress Administration (WPA) to the CCC suggestion above.  Revalue the arts properly, and the transition may even add to GDP.

[Image: Moses Soyer, “Artists on WPA” (1935). Source: Smithsonian American Art Museum]

    Unfortunately, B&M distance themselves from the ideas, saying, “we include them not necessarily to endorse them, but instead to spur further thinking about what kinds of interventions will be necessary as machines continue to race ahead” (246).  That is problematic, on at least two levels.

    First, a sophisticated discussion of capital should be at the core of an account of automation,  not its periphery. The authors are right to call for greater investment in education, infrastructure, and basic services, but they need a more sophisticated account of how that is to be arranged in an era when capital is extraordinarily concentrated, its owners have power over the political process, and most show little to no interest in long-term investment in the skills and abilities of the 99%. Even the purchasing power of the vast majority of consumers is of little import to those who can live off lightly taxed capital gains.

Second, assuming that “machines continue to race ahead” is a dodge, a refusal to name the responsible parties running the machines. Someone is designing and purchasing algorithms and robots. Illah Reza Nourbakhsh’s Robot Futures suggests another metaphor:

    Today most nonspecialists have little say in charting the role that robots will play in our lives.  We are simply watching a new version of Star Wars scripted by research and business interests in real time, except that this script will become our actual world. . . . Familiar devices will become more aware, more interactive and more proactive; and entirely new robot creatures will share our spaces, public and private, physical and digital. . . .Eventually, we will need to read what they write, we will have to interact with them to conduct our business transactions, and we will often mediate our friendships through them.  We will even compete with them in sports, at jobs, and in business. [9]

Nourbakhsh nudges us closer to the truth, focusing on the competitive angle. But the “we” he describes is also inaccurate. There is a group that will never have to “compete” with robots at jobs or in business—rentiers. Too many of them are narrowly focused on how quickly they can replace needy workers with undemanding machines.

For the rest of us, another question concerning automation is more appropriate: how much can we be stuck with? A black-card-toting bigshot will get the white-glove treatment from AmEx; the rest are shunted into automated phone trees. An algorithm determines the shifts of retail and restaurant workers, oblivious to their needs for rest, a living wage, or time with their families. Automated security guards, police, and prison guards are on the horizon. And for many of the “expelled,” the homines sacri, automation is a matter of life and death: drone technology can keep small planes on their tracks for hours, days, months—as long as it takes to execute orders.

B&M focus on “brilliant technologies,” rather than the brutal or bumbling instances of automation. It is fun to imagine a souped-up Roomba making the drudgery of housecleaning a thing of the past. But domestic robots have been around since 2000, and the median wage-earner in the U.S. does not appear to be on a fast track to a Jetsons-style life of ease.[10] They are just as likely to be targeted by the algorithms of the everyday as they are to be helped by them. Mysterious scoring systems routinely stigmatize persons, without their even knowing. They reflect the dark side of automation—and we are in the dark about them, given the protections that trade secrecy law affords their developers.

    IV. Conclusion

    Debates about robots and the workers “struggling to keep up” with them are becoming stereotyped and stale. There is the standard economic narrative of “skill-biased technical change,” which acts more as a tautological, post hoc, retrodictive, just-so story than a coherent explanation of how wages are actually shifting. There is cyberlibertarian cornucopianism, as Google’s Ray Kurzweil and Eric Schmidt promise there is nothing to fear from an automated future. There is dystopianism, whether intended as a self-preventing prophecy, or entertainment. Each side tends to talk past the other, taking for granted assumptions and values that its putative interlocutors reject out of hand.

    Set amidst this grim field, 2MA is a clear advance. B&M are attuned to possibilities for the near and far future, and write about each in accessible and insightful ways.  The authors of The Second Machine Age claim even more for it, billing it as a guide to epochal change in our economy. But it is better understood as the kind of “big idea” book that can name a social problem, underscore its magnitude, and still dodge the elaboration of solutions controversial enough to scare off celebrity blurbers.

    One of 2MA’s blurbers, Clayton Christensen, offers a backhanded compliment that exposes the core weakness of the book. “[L]earners and teachers alike are in a perpetual mode of catching up with what is possible. [The Second Machine Age] frames a future that is genuinely exciting!” gushes Christensen, eager to fold automation into his grand theory of disruption. Such a future may be exciting for someone like Christensen, a millionaire many times over who won’t lack for food, medical care, or housing if his forays fail. But most people do not want to be in “perpetually catching up” mode. They want secure and stable employment, a roof over their heads, decent health care and schooling, and some other accoutrements of middle class life. Meaning is found outside the economic sphere.

    Automation could help stabilize and cheapen the supply of necessities, giving more persons the time and space to enjoy pursuits of their own choosing. Or it could accelerate arms races of various kinds: for money, political power, armaments, spying, stock trading. As long as purchasing power alone—whether of persons or corporations—drives the scope and pace of automation, there is little hope that the “brilliant technologies” B&M describe will reliably lighten burdens that the average person experiences. They may just as easily entrench already great divides.

    All too often, the automation literature is focused on replacing humans, rather than respecting their hopes, duties, and aspirations. A central task of educators, managers, and business leaders should be finding ways to complement a workforce’s existing skills, rather than sweeping that workforce aside. That does not simply mean creating workers with skill sets that better “plug into” the needs of machines, but also, doing the opposite: creating machines that better enhance and respect the abilities and needs of workers.  That would be a “machine age” welcoming for all, rather than one calibrated to reflect and extend the power of machine owners.

    _____

    Frank Pasquale (@FrankPasquale) is a Professor of Law at the University of Maryland Carey School of Law. His recent book, The Black Box Society: The Secret Algorithms that Control Money and Information (Harvard University Press, 2015), develops a social theory of reputation, search, and finance.  He blogs regularly at Concurring Opinions. He has received a commission from Triple Canopy to write and present on the political economy of automation. He is a member of the Council for Big Data, Ethics, and Society, and an Affiliate Fellow of Yale Law School’s Information Society Project. He is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay
    _____

    [1] One can quibble with the idea of automation as necessarily entailing “bounty”—as Yves Smith has repeatedly demonstrated, computer systems can just as easily “crapify” a process once managed well by humans. Nor is “spread” a necessary consequence of automation; well-distributed tools could well counteract it. It is merely a predictable consequence, given current finance and business norms and laws.

    [2] For a definition of “crazy talk,” see Neil Postman, Stupid Talk, Crazy Talk: How We Defeat Ourselves by the Way We Talk and What to Do About It (Delacorte, 1976). For Postman, “stupid talk” can be corrected via facts, whereas “crazy talk” “establishes different purposes and functions than the ones we normally expect.” If we accept the premise of labor as a cost to be minimized, what better to cut than the compensation of the highest paid persons?

    [3] Conversation with Sam Frank at the Swiss Institute, Dec. 16, 2014, sponsored by Triple Canopy.

    [4] In Brynjolfsson, McAfee, and Michael Spence, “New World Order: Labor, Capital, and Ideas in the Power Law Economy,” an article promoting the book. Unfortunately, as with most statements in this vein, B&M&S give us little idea how to identify a “good idea” other than one that “reap[s] huge rewards”—a tautology all too common in economic and business writing.

    [5] Frank Pasquale, The Black Box Society (Harvard University Press, 2015).

    [6] Programs, both in the sense of particular software regimes, and the program of human and technical efforts to collect and analyze the translations that were the critical data enabling the writing of the software programs behind Google Translate.

[9] Illah Reza Nourbakhsh, Robot Futures (MIT Press, 2013), pp. xix-xx.

    [10] Erwin Prassler and Kazuhiro Kosuge, “Domestic Robotics,” in Bruno Siciliano and Oussama Khatib, eds., Springer Handbook of Robotics (Springer, 2008), p. 1258.

  • Frank Pasquale — Capital’s Offense: Law’s Entrenchment of Inequality (On Piketty, “Capital in the 21st Century”)


    a review of Thomas Piketty, Capital in the Twenty-First Century (Harvard University Press, 2014)

    by Frank Pasquale

    ~

    Thomas Piketty’s Capital in the Twenty-First Century has succeeded both commercially and as a work of scholarship. Capital‘s empirical research is widely praised among economists—even by those who disagree with its policy prescriptions.  It is also the best-selling book in the century-long history of Harvard University Press, and a rare work of scholarship to reach the top spot on Amazon sales rankings.[1]

    Capital‘s main methodological contribution is to bring economic, sociological, and even literary perspectives to bear in a work of economics.[2] The book bridges positive and normative social science, offering strong policy recommendations for increased taxation of the wealthiest. It is also an exploration of historical trends.[3] In Capital, fifteen years of careful archival research culminate in a striking thesis: capitalism exacerbates inequality over time. There is no natural tendency for markets themselves, or even ordinary politics, to slow accumulation by top earners.[4]

    This review explains Piketty’s analysis and its relevance to law and social theory, drawing lessons for the re-emerging field of political economy. Piketty’s focus on long-term trends in inequality suggests that many problems traditionally explained as sector-specific (such as varied educational outcomes) are epiphenomenal with regard to increasingly unequal access to income and capital. Nor will a narrowing of purported “skills gaps” do much to improve economic security, since opportunity to earn money via labor matters far less in a world where capital is the key to enduring purchasing power. Policymakers and attorneys ignore Piketty at their peril, lest isolated projects of reform end up as little more than rearranging deck chairs amidst titanically unequal opportunities.

    Inequality, Opportunity, and the Rigged Game

    Capital weaves together description and prescription, facts and values, economics, politics, and history, with an assured and graceful touch. So clear is Piketty’s reasoning, and so compelling the enormous data apparatus he brings to bear, that few can doubt he has fundamentally altered our appreciation of the scope, duration, and intensity of inequality.[5]

    Piketty’s basic finding is that, absent extraordinary political interventions, the rate of return on capital (r) is greater than the rate of growth of the economy generally (g), which Piketty expresses via the now-famous formula r > g.[6] He finds that this relationship persists over time, and in the many countries with reliable data on wealth and income.[7] This simple inequality relationship has many troubling implications, especially in light of historical conflicts between capital and labor.
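To make the mechanics concrete, here is a minimal numerical sketch, not Piketty’s model: the values of r, g, the reinvestment share s, and the starting capital/income ratio are illustrative assumptions of mine, chosen only to loosely echo the magnitudes he reports.

```python
# A toy illustration of r > g (not Piketty's model); all figures are assumed.

r = 0.05   # annual return on capital (assumed)
g = 0.015  # annual growth of national income (assumed)
s = 0.5    # share of capital returns reinvested rather than consumed (assumed)

wealth = 4.0   # capital stock, as a multiple of national income
income = 1.0   # national income, normalized to 1

for year in range(101):
    if year % 25 == 0:
        print(f"year {year:3d}: capital/income = {wealth / income:5.2f}, "
              f"capital's share of income = {r * wealth / income:6.1%}")
    wealth *= 1 + s * r  # capital compounds at s*r = 2.5% a year, still above g
    income *= 1 + g      # the economy grows at g
```

Under these assumptions, capital’s share of national income climbs from 20% to just over 50% in a century. The particular numbers are artifacts of the chosen parameters, but the upward drift follows directly from the reinvested return outpacing growth.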

    Most persons support themselves primarily by wages—that is, what they earn from their labor. As capital takes more of economic output (an implication of r > g persisting over time), less is left for labor. Thus if we are concerned about unequal incomes and living standards, we cannot simply hope for a rising tide of growth to lift the fortunes of those in the bottom quintiles of the income and wealth distribution.  As capital concentrates, its owners take an ever larger share of income—unless law intervenes and demands some form of redistribution.[8] As the chart below (by Bard economist Pavlina Tcherneva, based on Piketty’s data) shows, we have now reached the point where the US economy is not simply distributing the lion’s share of economic gains to top earners; it is actively redistributing extant income of lower decile earners upwards:

[Chart: Pavlina Tcherneva, distribution of income gains during US economic expansions, based on Piketty’s data.]

    In 2011, 93% of the gains in income during the economic “recovery” went to the top 1%.  From 2009 to 2011, “income gains to the top 1% … were 121% of all income increases,” because “incomes to the bottom 99% fell by 0.4%.”[9] The trend continued through 2012.

Fractal inequality prevails up and down the income scale.[10] The top 15,000 tax returns in the US reported an average taxable income of $26 million in 2005—at least 400 times greater than the median return.[11] Moreover, Larry Bartels’s book, Unequal Democracy, graphs these trends over decades.[12] Bartels shows that, from 1945-2007, the 95th percentile did much better than those at lower percentiles.[13] He then shows how those at the 99.99th percentile did spectacularly better than those at the 99.9th, 99.5th, 99th, and 95th percentiles.[14] There is some evidence that even within that top 99.99th percentile, inequality reigned. In 2005, the “Fortunate 400”—the 400 households with the highest earnings in the U.S.—made on average $213.9 million apiece, and the cutoff for entry into this group was a $100 million income—about four times the average income of $26 million prevailing in the top 15,000 returns.[15] As Danny Dorling observed in a recent presentation at the RSA, for those at the bottom of the 1%, it can feel increasingly difficult to “keep up with the Joneses,” Adelsons, and Waltons. Runaway incomes at the very top leave those slightly below the “ultra-high net worth individual” (UHNWI) cut-off ill-inclined to spread their own wealth to the 99%.

    Thus inequality was well documented in these and many other works by the time Piketty published Capital—indeed, other authors often relied on the interim reports released by Piketty and his team of fellow inequality researchers over the past two decades.[16] The great contribution of Capital is to vastly expand the scope of the inquiry, over space and time. The book examines records in France going back to the 19th century, and decades of data in Germany, Japan, Great Britain, Sweden, India, China, Portugal, Spain, Argentina, Switzerland, and the United States.[17]

    The results are strikingly similar. The concentration of capital (any asset that generates income or gains in monetary value) is a natural concomitant of economic growth under capitalism—and tends to intensify if growth slows or stops.[18] Inherited fortunes become more important than those earned via labor, since the “miracle of compound interest” overwhelms any particularly hard-working person or ingenious idea. Once fortunes grow large enough, their owners can simply live off the interest and dividends they generate, without ever drawing on the principal. At the “escape velocity” enjoyed by some foundations and ultra-rich individuals, annual expenses are far less than annual income, precipitating ever-greater principal. This is Warren Buffett’s classic “snowball” of wealth—and we should not underestimate its ability to purchase the political favors that help constitute Buffettian “moats” around the businesses favored by the likes of Berkshire Hathaway.[19] Dynasties form and entrench their power. If they can make capital pricey enough, even extraordinary innovations may primarily benefit their financiers.
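    A back-of-the-envelope compounding exercise, with assumed (not empirical) numbers, shows how quickly the snowball rolls once spending falls below investment income:

    ```python
    # Sketch of the "snowball" with assumed numbers: once annual spending is
    # less than annual investment income, the principal compounds indefinitely.

    principal = 1_000_000_000   # a $1bn fortune
    r = 0.05                    # assumed annual return: $50m in year one
    spending = 30_000_000       # $30m/year of expenses, well below the returns

    for year in range(30):
        principal += principal * r - spending

    print(f"after 30 years: ${principal / 1e9:.1f}bn")  # after 30 years: $2.3bn
    ```

    The fortune more than doubles while funding lavish consumption the whole way; no further labor, luck, or ingenuity is required after year one.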

    Deepening the Social Science of Political Economy

    Just as John Rawls’s Theory of Justice laid a foundation for decades of writing on social justice, Piketty’s work is so generative that one could envision whole social scientific fields revitalized by it.[20] Political economy is the most promising: a long tradition of studying, as Piketty puts it, the “ideal role of the state in the economic and social organization of a country.”[21] Integrating the long-divided fields of politics and economics, a renewed political economy could unravel “wicked problems” that neither states nor markets alone can address.[22]

    But the emphasis in Piketty’s definition of political economy on “a country,” versus countries, or the world, is in tension with the global solutions he recommends for the regulation of capital. The dream of neoliberal globalization was to unite the world via markets.[23] Anti-globalization activists have often advanced a rival vision of local self-determination, predicated on overlaps between political and economic boundaries. State-bound political economy could theorize those units. But the global economy is, at present, unforgiving of autarky and unlikely to move towards it.

    Capital tends to slip the bonds of states, migrating to tax havens. In the rarefied world of the global super-rich, financial privacy is a purchasable commodity. Certainly there are always risks of discovery, or of being taken advantage of by a disreputable tax shelter broker or shady foreign bank. But for many wealthy individuals, tax havenry has been a rite of passage on the way to membership in a shadowy global elite. Piketty’s proposed global wealth tax would need international enforcement—for even the Foreign Account Tax Compliance Act (FATCA), imposed via America’s fading hegemony (and praised by Piketty), has only begun to address the problem of hidden (or runaway) wealth and income.[24]

    It will be very difficult to track down the world’s hidden fortunes and tax them properly. Had Piketty consulted more legal sources, he might have acknowledged the problem more adequately in Capital. He recommends “automatic information exchange” among tax authorities, which is an excellent principle for improving enforcement. But actually implementing this principle could require fine-grained regulation of IT systems, deployment of whole new types of surveillance, and even uniform coding globally (via, say, standard legal entity identifiers, or LEIs). A franker acknowledgment of the difficulty of shepherding such legislation globally could have led to a more convincing (and comprehensive) examination of the shortcomings of globalized capitalism.

    In several extended interviews on Capital (with CNN Money, Econtalk, The New York Times, Huffington Post, and the New Republic, among others), Piketty pledges fealty to markets, praising their power to promote production and innovation. Never using the term “industrial policy” in his book, Piketty hopes that law may make the bounty of extant economic arrangements accessible to all, rather than changing the nature of those arrangements. But we need to begin to ask whether our very process of creating goods and services itself impedes better distribution of them.

    Unfortunately, mainstream economics itself often occludes this fundamental question. When distributive concerns arise, policymakers can either substantively intervene to reshape the benefits and burdens of commerce (a strategy economists tend to derogate as dirigisme), or, post hoc, use taxes and transfer programs to redistribute income and wealth. For establishment economists, redistribution (happening after initial allocations by “the market”) is almost always considered more efficient than “distortion” of markets by regulation, public provision, or “predistribution.”[25]

    Tax law has historically been our primary way of arranging such redistribution, and Piketty makes it a focus of the concluding part of his book, called “Regulating Capital.” Piketty laments the current state of tax reporting and enforcement. Very wealthy individuals have developed complex webs of shell entities to hide their true wealth and earnings.[26] As one journalist observed, “Behind a New York City deed, there may be a Delaware LLC, which may be managed by a shell company in the British Virgin Islands, which may be owned by a trust in the Isle of Man, which may have a bank account in Liechtenstein managed by the private banker in Geneva. The true owner behind the structure might be known only to the banker.”[27] This is the dark side of globalization: the hidden structures that shield the unscrupulous from accountability.[28]

    The most fundamental tool of tax secrecy is separation: between persons and their money, between corporations and the persons who control them, between beneficial and nominal controllers of wealth. When money can pass between countries as easily as digital files, skilled lawyers and accountants can make it impossible for tax authorities to uncover the beneficial owners of assets (and the income streams generated by those assets).

    Piketty believes that one way to address inequality is strict enforcement of laws like America’s FATCA.[29] But the United States cannot accomplish much without pervasive global cooperation. Thus the international challenge of inequality haunts Capital. As money concentrates in an ever smaller global “superclass” (to use David J. Rothkopf’s term), it becomes easier for that wealth to escape any ruling authority.[30] John Chung has characterized today’s extraordinary concentrations of wealth as a “death of reference” in our monetary system and its replacement with “a total relativity.”[31] He notes that “[i]n 2007, the average amount of annual compensation for the top twenty-five highest paid hedge fund managers was $892 million;” in the past few years, individual annual incomes in the group have reached two, three, or four billion dollars. Today’s greatest hoards of wealth are digitized, as easily moved and hidden as digital files.

    We have no idea what taxes may be due from trillions of dollars in offshore wealth, or to what purposes it is directed.[32] In less-developed countries, dictators and oligarchs smuggle ill-gotten gains abroad.  Groups like Global Financial Integrity and the Tax Justice Network estimate that illicit financial flows out of poor countries (and into richer ones, often via tax havens) are ten times greater than the total sum of all development aid—nearly $1 trillion per year.  Given that the total elimination of extreme global poverty could cost about $175 billion per year for twenty years, this is not a trivial loss of funds—completely apart from what the developing world loses in the way of investment when its wealthiest residents opt to stash cash in secrecy jurisdictions.[33]

    An adviser to the Tax Justice Network once said that assessing money kept offshore is an “exercise in night vision,” like trying to measure “the economic equivalent of an astrophysical black hole.”[34] Shell corporations can hide connections between persons and their money, between corporations and the persons who control them, between beneficial and nominal owners. When enforcers in one country try to connect all these dots, there is usually another secrecy jurisdiction willing to take in the assets of the conniving. As the Tax Justice Network’s “TaxCast” exposes on an almost monthly basis, victories for tax enforcement in one developed country tend to be counterbalanced by a slide away from transparency elsewhere.

    Thus when Piketty recommends that “the only way to obtain tangible results is to impose automatic sanctions not only on banks but also on countries that refuse to require their financial institutions” to report on wealth and income to proper taxing authorities, one has to wonder: what super-institution will impose the penalties? Is this to be an ancillary function of the WTO?[35] Similarly, equating the imposition of a tax on capital with “the stroke of a pen” (568) underestimates the complexity of implementing such a tax, and the predictable forms of resistance that the wealth defense industry will engage in.[36] All manner of societal and cultural, public and private, institutions will need to entrench such a tax if it is to be a stable corrective to the juggernaut of r > g.[37]

    Given how much else the book accomplishes, this demand may strike some as a cavil—something better accomplished by Piketty’s next work, or by an altogether different set of allied social scientists. But if Capital itself is supposed to model (rather than merely call for) a new discipline of political economy, it needs to provide more detail about the path from here to its prescriptions. Philosophers like Thomas Pogge and Leif Wenar, and lawyers like Terry Fisher and Talha Syed, have been quite creative in thinking through the actual institutional arrangements that could lead to better distribution of health care, health research, and revenues from natural resources.[38] They are not cited in Capital, but their work could have enriched its institutional analysis greatly.

    An emerging approach to financial affairs, known as the Legal Theory of Finance (LTF), also offers illumination here, and should guide future policy interventions. Led by Columbia Law Professor Katharina Pistor, an interdisciplinary team of social scientists and attorneys has documented the ways in which law is constitutive of so-called financial markets.[39] Revitalizing the tradition of legal realism, Pistor has demonstrated the critical role of law in generating modern finance. Though law to some extent shapes all markets, in finance its role is most pronounced. The “products” traded are little more than legal recognitions of obligations to buy or sell, own or owe. Their value can change utterly based on tiny changes to the bankruptcy code, SEC regulations, or myriad other laws and regulations.

    The legal theory of finance changes the dialogue about regulation of wealth.  The debate can now move beyond stale dichotomies like “state vs. market,” or even “law vs. technology.” While deregulationists mock the ability of regulators to “keep up with” the computational capacities of global banking networks, it is the regulators who made the rules that made the instantaneous, hidden transfer of financial assets so valuable in the first place. Such rules are not set in stone.

    The legal theory of finance also enables a more substantive dialogue about the central role of law in political economy. Not just tax rules, but also patent, trade, and finance regulation need to be reformed to make the wealthy accountable for productively deploying the wealth they have either earned or taken. Legal scholars have a crucial role to play in this debate—not merely as technocrats adjusting tax rules, but as advisors on a broad range of structural reforms that could ensure the economy’s rewards better reflected the relative contributions of labor, capital, and the environment.[40] Lawyers had a much more prominent role in the Federal Reserve when it was more responsive to workers’ concerns.[41]

    Imagined Critics as Unacknowledged Legislators

    A book is often influenced by its author’s imagined critics. Piketty, decorous in his prose style and public appearances, strains to fit his explosive results into the narrow range of analytical tools and policy proposals that august economists won’t deem “off the wall.”[42] Rather than deeply considering the legal and institutional challenges to global tax coordination, Piketty focuses on explaining in great detail the strengths and limitations of the data he and a team of researchers have been collecting for over a decade. But a renewed social science of political economy depends on economists’ ability to expand their imagined audience of critics, to those employing qualitative methodologies, to attorneys and policy experts working inside and outside the academy, and to activists and journalists with direct knowledge of the phenomena addressed.  Unfortunately, time that could have been valuably directed to that endeavor—either in writing Capital, or constructively shaping the extraordinary publicity the book received—has instead been diverted to shoring up the book’s reputation as rigorous economics, against skeptics who fault its use of data.

    To his credit, Piketty has won these fights on the data mavens’ own terms. The book’s most notable critic, Chris Giles at the Financial Times, tried to undermine Capital‘s conclusions by trumping up purported ambiguities in wealth measurement. His critique was rapidly dispatched by many, including Piketty himself.[43] Indeed, as Neil Irwin observed, “Giles’s results point to a world at odds not just with Mr. Piketty’s data, but also with that by other scholars and with the intuition of anyone who has seen what townhouses in the Mayfair neighborhood of London are selling for these days.”[44]

    One wonders if Giles reads his own paper. On any given day, evidence of extreme inequality leaps from one page to the next. For example, in a special report on “the fragile middle,” Javier Blas noted that no more than 12% of Africans earned over $10 per day in 2010—a figure that has improved little, if at all, since 1980.[45] Meanwhile, in the House & Home section on the same day, Jane Owen lovingly described the grounds of the estate of “His Grace Henry Fitzroy, the 12th Duke of Grafton.” The grounds cost £40,000 to £50,000 a year to maintain, and were never “expected to do anything other than provide pleasure.”[46] England’s revanchist aristocracy makes regular appearances in the Financial Times’ “How to Spend It” section as well, and no wonder: as Oxfam reported in March 2014, Britain’s five richest families have more wealth than its twelve million poorest people.[47]

    Force and Capital

    The persistence of such inequalities is as much a matter of law (and the force behind it to, say, disperse protests and selectively enforce tax regulations) as it is a natural outgrowth of the economic forces driving r and g. To his credit, Piketty does highlight some of the more grotesque deployments of force on behalf of capital. He begins Part I (“Income and Capital”) and ends Part IV (“Regulating Capital”) by evoking the tragic strike at the Lonmin Mine in South Africa in August 2012. In that confrontation, “thirty-four strikers were shot dead” for demanding pay of about $1,400 a month (they were making about $700).[48] Piketty deploys the story to dramatize conflict over the share of income going to capital versus labor. But it also illustrates dynamics of corruption. Margaret Kimberley of Black Agenda Report claims that the union involved was coopted thanks to the wealth of the man who once ran it.[49] The same dynamics shine through documentaries like Big Men (on Ghana) and the many nonfiction works on oil exploitation in Africa.[50]

    Piketty observes that “foreign companies and stockholders are at least as guilty as unscrupulous African elites” in promoting the “pillage” of the continent.[51] Consider the state of Equatorial Guinea, which struck oil in 1995. By 2006, Equatoguineans had the third highest per capita income in the world, higher than many prosperous European countries.[52] Yet the typical citizen remains very poor.[53] In the middle of the oil boom, an international observer noted that “I was unable to see any improvements in the living standards of ordinary people. In 2005, nearly half of all children under five were malnourished,” and “[e]ven major cities lack[ed] clean water and basic sanitation.”[54] The government has not demonstrated that things have improved much since then, despite ample opportunity to do so. Poorly paid soldiers routinely shake people down for bribes, and the country’s president, Teodoro Obiang, has paid Moroccan mercenaries for his own protection. A 2009 book noted that tensions in the country had reached a boiling point, as the “local Bubi people of Malabo” felt “invaded” by oil interests, other regions were “abandoned,” and self-determination movements decried environmental and human rights abuses.[55]

    So who did benefit from Equatorial Guinea’s oil boom?  Multinational oil companies, to be sure, though we may never know exactly how much profit the country generated for them—their accounting was (and remains) opaque.  The Riggs Bank in Washington, D.C. gladly handled accounts of President Obiang, as he became very wealthy.  Though his salary was reported to be $60,000 a year, he had a net worth of roughly $600 million by 2011.[56] (Consider, too, that such a fortune would not even register on recent lists of the world’s 1,500 or so billionaires, and is barely more than 1/80th the wealth of a single Koch brother.) Most of the oil companies’ payments to him remain shrouded in secrecy, but a few came to light in the wake of US investigations.  For example, a US Senate report blasted him for personally taking $96 million of his nation’s $130 million in oil revenue in 1998, when a majority of his subjects were malnourished.[57]

    Obiang’s sordid record has provided a rare glimpse into some of the darkest corners of the global economy.  But his story is only the tip of an iceberg of a much vaster shadow economy of illicit financial flows, secrecy jurisdictions, and tax evasion. Obiang could afford to be sloppy: as the head of a sovereign state whose oil reserves gave it some geopolitical significance, he knew that powerful patrons could shield him from the fate of an ordinary looter.  Other members of the hectomillionaire class (and plenty of billionaires) take greater precautions.  They diversify their holdings into dozens or hundreds of entities, avoiding public scrutiny with shell companies and pliant private bankers.  A hidden hoard of tens of trillions of dollars has accumulated, and likely throws off hundreds of billions of dollars yearly in untaxed interest, dividends, and other returns.[58] This drives a wedge between a closed-circuit economy of extreme wealth and the ordinary patterns of exchange of the world’s less fortunate.[59]

    The Chinese writer and Nobel Peace Prize winner Liu Xiaobo once observed that corruption in Beijing had led to an officialization of the criminal and the criminalization of the official.[60] Persisting even in a world of brutal want and austerity-induced suffering, tax havenry epitomizes that sinister merger, and Piketty might have sharpened his critique further by focusing on this merger of politics and economics, of private gain and public governance. Authorities promote activities that would have once been proscribed; those who stand in the way of such “progress” might be jailed (or worse).  In Obiang’s Equatorial Guinea, we see similar dynamics, as the country’s leader extracts wealth at a volume that could only be dreamed of by a band of thieves.

    Obiang’s curiously double position, as Equatorial Guinea’s chief law maker and law breaker, reflects a deep reality of the global shadow economy.  And just as “shadow banks” are rivalling more regulated banks in terms of size and influence, shadow economy tactics are starting to overtake old standards. Tax avoidance techniques that were once condemned are becoming increasingly acceptable.  Campaigners like UK Uncut and the Tax Justice Network try to shame corporations for opportunistically allocating profits to low-tax jurisdictions.[61] But CEOs still brag about their corporate tax unit as a profit center.

    When some of Republican presidential candidate Mitt Romney’s recherché tax strategies were revealed in 2012, Barack Obama needled him repeatedly.  The charges scarcely stuck, as Romney’s core constituencies aimed to emulate rather than punish their standard-bearer.[62] Obama then appointed a Treasury Secretary (Jack Lew), who had himself utilized a Cayman Islands account.  Lew was the second Obama Treasury secretary to suffer tax troubles: Tim Geithner, his predecessor, was also accused of “forgetting” to pay certain taxes in a self-serving way.  And Obama’s billionaire Commerce Secretary Penny Pritzker was no stranger to complex tax avoidance strategies.[63]

    Tax attorneys may characterize Pritzker, Lew, Geithner, and Romney as different in kind from Obiang.  But any such distinctions they make will likely need to be moral, rather than legal, in nature.  Sure, these American elites operated within American law—but Obiang is the law of Equatorial Guinea, and could easily arrange for an administrative agency to bless his past actions (even developed legal systems permit retroactive rulemaking) or ensure the legality of all future actions (via safe harbors).  The mere fact that a tax avoidance scheme is “legal” should not count for much morally—particularly as those who gain from prior US tax tweaks use their fortunes to support the political candidacies of those who would further push the law in their favor.

    Shadowy financial flows exemplify the porous boundary between state and market.  The book Tax Havens: How Globalization Really Works argues that the line between savvy tax avoidance and illegal tax evasion (or strategic money transfers and forbidden money laundering) is blurring.[64] Between our stereotypical mental images of dishonest tycoons sipping margaritas under the palm trees of a Caribbean tax haven, and a state governor luring a firm by granting it a temporary tax abatement, lie hundreds of subtler scenarios.  Dingy rows of Delaware, Nevada, and Wyoming file cabinets can often accomplish the same purpose as incorporating in Belize or Panama: hiding the real beneficiaries of economic activity.[65] And as one wag put it to journalist Nicholas Shaxson, “the most important tax haven in the world is an island”—”Manhattan.”[66]

    In a world where “tax competition” is a key to neoliberal globalization, it is hard to see how a global wealth tax (even if set at the very low levels Piketty proposes) supports (rather than directly attacks) the existing market order. Political elites are racing to reduce tax liability to curry favor with the wealthy companies and individuals they hope to lure, serve, and bill. The ultimate logic of that competition is a world made over in the image of Obiang’s Equatorial Guinea: crumbling infrastructure and impoverished citizenries coexisting with extreme luxury for a global extractive elite and its local enablers. Books like Third World America, Oligarchy, and Captive Audience have already started chronicling the failure of the US tax system to fund roads, bridges, universal broadband internet connectivity, and disaster preparation.[67] As tax-avoiding elites parlay their gains into lobbying for rules that make tax avoidance even easier, self-reinforcing inequality seems all but inevitable. Wealthy interests can simply fund campaigns to reduce their taxes, or to reduce the risk of enforcement to a nullity. As Ben Kunkel pointedly asks, “How are the executive committees of the ruling class in countries across the world to act in concert to impose Piketty’s tax on just this class?”[68]

    US history is instructive here. Congress passed a tax on the top 0.1% of earners in 1894, only to see the Supreme Court strike the tax down in a five-to-four decision. After the 16th Amendment effectively repealed that Supreme Court decision, Congress steadily increased the tax on high income households. From 1915 to 1918, the highest rate went from 7% to 77%, and fifty-six tax brackets were set. When high taxes were maintained for the wealthy after the war, tax evasion flourished. At this point, as Jeffrey Winters writes, the government had to choose whether to “beef up law enforcement against oligarchs … , or abandon the effort and instead squeeze the same resources from citizens with far less material clout to fight back.”[69] Enforcement ebbed and flowed. But since then, what began by targeting the very wealthy has grown to include “a mass tax that burdens oligarchs at the same effective rate as their office staff and landscapers.”[70]

    The undertaxation of America’s wealthy has helped them capture key political processes, and in turn demand even less taxation.  The dynamic of circularity teaches us that there is no stable, static equilibrium to be achieved between regulators and regulated. The government is either pushing industry to realize some public values in its activities (say, by investing in sustainable growth), or industry is pushing its regulators to promote its own interests.[71] Piketty may worry that, if he too easily accepts this core tenet of politico-economic interdependence, he’ll be dismissed as a statist socialist. But until political economists do so, their work cannot do justice to the voices of those prematurely dead as a result of the relentless pursuit of profit—ranging from the Lonmin miners, to those crushed at Rana Plaza, to the spike of suicides provoked by European austerity and Indian microcredit gone wrong, to the thousands of Americans who will die early because they are stuck in states that refuse to expand Medicaid.[72] Contemporary political economy can only mature if capitalism’s ghosts constrain our theory and practice as pervasively as communism’s specter does.

    Renewing Political Economy

    Piketty has been compared to Alexis de Tocqueville: a French outsider capable of discerning truths about the United States that its own sages were too close to observe. The function social equality played in Tocqueville’s analysis is taken up by economic inequality in Piketty’s: a set of self-reinforcing trends fundamentally reshaping the social order.[73] I’ve written tens of thousands of words on this inequality, but words themselves may be outmatched by the numbers and force behind these trends.[74] As film director Alex Rivera puts it, in an interview with The New Inquiry:

    I don’t think we even have the vocabulary to talk about what we lose as contemporary virtualized capitalism produces these new disembodied labor relations. … The broad, hegemonic clarity is the knowledge that a capitalist enterprise has the right to seek out the cheapest wage and the right to configure itself globally to find it. … The next stage in this process…is for capital to configure itself to enable every single job to be put on the global market through the network.[75]

    Amazon’s “Mechanical Turk” has begun that process, supplying “turkers” to perform tasks at a penny each.[76] Uber, Lyft, TaskRabbit, and various “gig economy” imitators assure that micro-labor is on the rise, leaving micro-wages in its wake.[77] Workers are shifting from paid vacation to stay-cation to “nano-cation” to “paid time off” to hoarding hours to cover the dry spells when work disappears.[78] These developments are all predictable consequences of a globalization premised on maximizing finance rents, top manager compensation, and returns to shareholders.

    Inequality is becoming more outrageous than even caricaturists once dared to depict. The richest woman in the world (Gina Rinehart) has advised fellow Australians to temper their wage demands, given that they are competing against Africans willing to work for two dollars a day.[79] Or consider the construct of Dogland, from Korzeniewicz and Moran’s 2009 book, Unveiling Inequality:

    The magnitude of global disparities can be illustrated by considering the life of dogs in the United States. According to a recent estimate … in 2007-2008 the average yearly expenses associated with owning a dog were $1425 … For sake of argument, let us pretend that these dogs in the US constitute their own nation, Dogland, with their average maintenance costs representing the average income of this nation of dogs.

    By such a standard, their income would place Dogland squarely as a middle-income nation, above countries such as Paraguay and Egypt. In fact, the income of Dogland would place its canine inhabitants above more than 40% of the world population. … And if we were to focus exclusively on health care expenditures, the gap becomes monumental: the average yearly expenditures in Dogland would be higher than health care expenditures in countries that account for over 80% of the world population.[80]

    Given disparities like this, wages cannot possibly reflect just desert: who can really argue that a basset hound, however adorable, has “earned” more than a Bangladeshi laborer? Cambridge economist Ha Joon Chang asks us to compare the job and the pay of transport workers in Stockholm and Calcutta. “Skill” has little to do with it. The former, drivers on clean and well-kept roads, may easily be paid fifty times more than the latter, who may well be engaged in backbreaking, and very skilled, labor to negotiate passengers among teeming pedestrians, motorbikes, trucks, and cars.[81]

    Once “skill-biased technological change” is taken off the table, the classic economic rationale for such differentials focuses on the incentives necessary to induce labor. In Sweden, for example, the government assures that a person is unlikely to starve, no matter how many hours a week he or she works. By contrast, in India, 42% of the children under five years old are malnourished.[82] So while it takes $15 or $20 an hour just to get the Swedish worker to show up, the typical Indian can be motivated to labor for much less. But of course, at this point the market rationale for the wage differential breaks down entirely, because the background set of social expectations about earnings absent work is itself a product of state-guaranteed patterns of social insurance. The critical questions are: how did the Swedes generate adequate goods and services for their population, and the social commitment to redistribution necessary to assure that unemployment is not a death sentence? And how can such social arrangements create basic entitlements to food, housing, health care, and education around the world?

    Piketty’s proposals for regulating capital would be more compelling if they attempted to answer questions like those, rather than focusing on the dry, technocratic aim of tax-driven wealth redistribution. Moreover, even within the realm of tax law and policy, Piketty will need to grapple with several enforcement challenges if a global wealth tax is to succeed. But to its great credit, Capital adopts a methodology capacious enough to welcome the contributions of legal academics and a broad range of social scientists to the study (and remediation) of inequality.[83] It is now up to us to accept the invitation, realizing that if we refuse, accelerating inequality will undermine the relevance—and perhaps even the very existence—of independent legal authority.


    _____

    Frank Pasquale (@FrankPasquale) is a Professor of Law at the University of Maryland Carey School of Law. His forthcoming book, The Black Box Society: The Secret Algorithms that Control Money and Information (Harvard University Press, 2015), develops a social theory of reputation, search, and finance.  He blogs regularly at Concurring Opinions. He has received a commission from Triple Canopy to write and present on the political economy of automation. He is a member of the Council for Big Data, Ethics, and Society, and an Affiliate Fellow of Yale Law School’s Information Society Project.

    _____

    [1] Dennis Abrams, Piketty’s “Capital”: A Monster Hit for Harvard U Press, Publishing Perspectives, at http://publishingperspectives.com/2014/04/pilkettys-capital-a-monster-hit-for-harvard-u-press/ (Apr. 29, 2014).

    [2] Intriguingly, one leading economist who has done serious work on narrative in the field, Deirdre McCloskey, offers a radically different (and far more positive) perspective on the nature of economic growth under capitalism. Evan Thomas, Has Thomas Piketty Met His Match?, http://www.spectator.co.uk/features/9211721/unequal-battle/. But this is to be expected as richer methodologies inform economic analysis. Sometimes the best interpretive social science leads not to consensus, but to ever sharper disagreement about the nature of the phenomena it describes and evaluates. Rather than trying to bury normative differences in jargon or flatten them into commensurable cost-benefit calculations, it surfaces them.

    [3] As Thomas Jessen Adams argues, “to understand how inequality has been overcome in the past, we must understand it historically.” Adams, The Theater of Inequality, at http://nonsite.org/feature/the-theater-of-inequality. Adams critiques Piketty for failing to engage historical evidence properly. In this review, I celebrate the book’s bricolage of methodological approaches as the type of problem-driven research promoted by Ian Shapiro.

    [4] Thomas Piketty, Capital in the Twenty-First Century 17 (Arthur Goldhammer trans., 2014).

    [5] Doug Henwood, The Top of the World, Book Forum, Apr. 2014,  http://www.bookforum.com/inprint/021_01/12987; Suresh Naidu, Capital Eats the World, Jacobin (May 30, 2014), https://www.jacobinmag.com/2014/05/capital-eats-the-world/.

    [6] Thomas Piketty, Capital in the Twenty-First Century 25 (Arthur Goldhammer trans., 2014).

    [7] Id.

    [8] As Piketty observes, war and revolution can also serve this redistributive function. Piketty, supra note 4, at 20. Since I (and the vast majority of attorneys) do not consider violence a legitimate tool of social change, I do not include these options in my discussion of Piketty’s book.

    [9] Frank Pasquale, Access to Medicine in an Era of Fractal Inequality, 19 Annals of Health Law 269 (2010).

    [10] Charles R. Morris, The Two Trillion Dollar Meltdown: Easy Money, High Rollers, and the Great Credit Crash 139-40 (2009); see also Edward N. Wolff, Top Heavy: The Increasing Inequality of Wealth in America and What Can Be Done About It 36 (updated ed. 2002).

    [11] Yves Smith, Yes, Virginia, the Rich Continue to Get Richer: The Top 1% Get 121% of Income Gains Since 2009, Naked Capitalism (Feb. 13, 2013), http://www.nakedcapitalism.com/2013/02/yes-virginia-the-rich-continue-to-get-richer-the-1-got-121-of-income-gains-since-2009.html#XxsV2mERu5CyQaGE.99.

    [12] Larry M. Bartels, Unequal Democracy: The Political Economy of the New Gilded Age 8, 10 (2010).

    [13] Id. at 8.

    [14] Id. at 10.

    [15] Tom Herman, There’s Rich, and There’s the ‘Fortunate 400′, Wall St. J., Mar. 5, 2008, http://online.wsj.com/article/SB120468366051012473.html.

    [16] See Thomas Piketty & Emmanuel Saez, The Evolution of Top Incomes: A Historical and International Perspective, 96 Am. Econ. Rev. 200, 204 (2006). 

    [17] Piketty, supra note 4, at 17. Note that, given variations in the data, Piketty is careful to cabin the “geographical and historical boundaries of this study” (27), and must “focus primarily on the wealthy countries and proceed by extrapolation to poor and emerging countries” (28).

    [18] Id. at 46, 571 (“In this book, capital is defined as the sum total of nonhuman assets that can be owned and exchanged on some market. Capital includes all forms of real property (including residential real estate) as well as financial and professional capital (plants, infrastructure, machinery, patents, and so on) used by firms and government agencies.”).

    [19] Alice Schroeder, The Snowball: Warren Buffett and the Business of Life (Bantam-Dell, 2008); Adam Levine-Weinberg, Warren Buffett Loves a Good Moat, at http://www.fool.com/investing/general/2014/06/30/warren-buffett-loves-a-good-moat.aspx.

    [20] John Rawls, A Theory of Justice (1971).

    [21] Piketty, supra note 4, at 540.

    [22] Atul Gawande, Something Wicked This Way Comes, New Yorker (June 28, 2012), http://www.newyorker.com/news/daily-comment/something-wicked-this-way-comes.

    [23] Philip Mirowski, Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown (2013).

    [24] The Foreign Account Tax Compliance Act (FATCA) was passed in 2010 as part of the Hiring Incentives to Restore Employment Act, Pub. L. No. 111-147, 124 Stat. 71 (2010), codified in sections 1471 to 1474 of the Internal Revenue Code, 26 U.S.C. §§ 1471-1474.  The law is effective as of 2014. It requires foreign financial institutions (FFIs) to report financial information about accounts held by United States persons, or pay a withholding tax. Id.

    [25] Christopher William Sanchirico, Deconstructing the New Efficiency Rationale, 86 Cornell L. Rev. 1003, 1005 (2001).

    [26] Nicholas Shaxson, Treasure Islands: Uncovering the Damage of Offshore Banking and Tax Havens (2012); Jeanna Smialek, The 1% May be Richer than You Think, Bloomberg, Aug. 7, 2014, at http://www.bloomberg.com/news/2014-08-06/the-1-may-be-richer-than-you-think-research-shows.html (collecting economics research).

    [27] Andrew Rice, Stash Pad: The New York real-estate market is now the premier destination for wealthy foreigners with rubles, yuan, and dollars to hide, N.Y. Mag., June 29, 2014, at http://nymag.com/news/features/foreigners-hiding-money-new-york-real-estate-2014-6/#.

    [28] Ronen Palan, Richard Murphy, and Christian Chavagneux, Tax Havens: How Globalization Really Works 272 (2009) (“[m]ore than simple conduits for tax avoidance and evasion, tax havens actually belong to the broad world of finance, to the business of managing the monetary resources of individuals, organizations, and countries.  They have become among the most powerful instruments of globalization, one of the principal causes of global financial instability, and one of the large political issues of our times.”).

    [29] 26 U.S.C. § 1471-1474 (2012); Itai Grinberg, Beyond FATCA: An Evolutionary Moment for the International Tax System (Georgetown Law Faculty, Working Paper No. 160, 2012), available at http://scholarship.law.georgetown.edu/cgi/viewcontent.cgi?article=1162&context=fwps_papers.

    [30] David Rothkopf, Superclass: The Global Power Elite and the World They Are Making (2009).

    [31] John Chung, Money as Simulacrum: The Legal Nature and Reality of Money, 5 Hastings Bus. L.J. 109, 149 (2009).

    [32] James S. Henry, Tax Just. Network, The Price of Offshore Revisited: New Estimates for “Missing” Global Private Wealth, Income, Inequality, and Lost Taxes 3 (2012), available at http://www.taxjustice.net/cms/upload/pdf/Price_of_Offshore_Revisited_120722.pdf; Scott Higham et al., Piercing the Secrecy of Offshore Tax Havens, Wash. Post (Apr. 6, 2013), http://www.washingtonpost.com/investigations/piercing-the-secrecy-of-offshore-tax-havens/2013/04/06/1551806c-7d50-11e2-a044-676856536b40_story.html.

    [33] Dev Kar & Devon Cartwright‐Smith, Center for Int’l Pol’y, Illicit Financial Flows from Developing Countries: 2002-2006 (2012); Jeffrey Sachs, The End of Poverty: Economic Possibilities for Our Time (2006); Ben Harack, How Much Would it Cost to End Extreme Poverty in the World?, Vision Earth, (Aug. 26, 2011), http://www.visionofearth.org/economics/ending-poverty/how-much-would-it-cost-to-end-extreme-poverty-in-the-world/.

    [34] Henry, supra note 32.

    [35] Piketty, supra note 4, at 523.

    [36] Jeffrey Winters coined the term “wealth defense industry” in his book, Oligarchy. See Frank Pasquale, Understanding Wealth Defense: Direct Action from the 0.1%, at http://www.concurringopinions.com/archives/2011/11/understanding-wealth-defense-direct-action-from-the-0-1.html.

    [37] For a similar argument, focusing on the historical specificity of the US parallel to the trente glorieuses, see  Thomas Jessen Adams, The Theater of Inequality, http://nonsite.org/feature/the-theater-of-inequality.

    [38] Thomas Pogge, The Health Impact Fund: Boosting Pharmaceutical Innovation Without Obstructing Free Access, 18 Cambridge Q. Healthcare Ethics 78 (2008) (proposing global R&D fund); William Fisher III, Promises to Keep: Technology, Law, and the Future of Entertainment (2007); William W. Fisher & Talha Syed, Global Justice in Healthcare: Developing Drugs for the Developing World, 40 U.C. Davis L. Rev. 581 (2006).

    [39] Katharina Pistor, A Legal Theory of Finance, 41 J. Comp. Econ. 315 (2013); Law in Finance, 41 J. Comp. Econ. (2013). Several other articles in the same journal issue discuss the implications of LTF for derivatives, foreign currency exchange, and central banking.

    [40] University of Chicago Law Professor Eric A. Posner and economist Glen Weyl recognize this in their review of Piketty, arguing that “the fundamental problem facing American capitalism is not the high rate of return on capital relative to economic growth that Piketty highlights, but the radical deviation from the just rewards of the marketplace that have crept into our society and increasingly drives talented students out of innovation and into finance.”  Posner & Weyl, Thomas Piketty Is Wrong: America Will Never Look Like a Jane Austen Novel, The New Republic, July 31, 2014, at http://www.newrepublic.com/article/118925/pikettys-capital-theory-misunderstands-inherited-wealth-today. See also Timothy A. Canova, The Federal Reserve We Need, 21 American Prospect 9 (October 2010), at http://prospect.org/article/federal-reserve-we-need.

    [41] Timothy Canova, The Federal Reserve We Need: It’s the Fed We Once Had, at http://prospect.org/article/federal-reserve-we-need; Justin Fox, How Economics PhDs Took Over the Federal Reserve, at http://blogs.hbr.org/2014/02/how-economics-phds-took-over-the-federal-reserve/.

    [42] Jack M. Balkin, From Off the Wall to On the Wall: How the Mandate Challenge Went Mainstream, Atlantic (June 4, 2012, 2:55 PM), http://www.theatlantic.com/national/archive/2012/06/from-off-the-wall-to-on-the-wall-how-the-mandate-challenge-went-mainstream/258040/ (Jack Balkin has described how certain arguments go from being ‘off the wall’ to respectable in constitutional thought; economists have yet to take up that deflationary nomenclature for the evolution of ideas in their own field’s intellectual history. That helps explain the rising power of economists vis-à-vis lawyers, since the latter field’s honesty about the vagaries of its development diminishes its authority as a ‘science.’). For more on the political consequences of the philosophy of social science, see Jamie Cohen-Cole, The Open Mind: Cold War Politics and the Sciences of Human Nature (2014), and Joel Isaac, Working Knowledge: Making the Human Sciences from Parsons to Kuhn (2012).

    [43] Chris Giles, Piketty Findings Undercut by Errors, Fin. Times (May 23, 2014, 7:00 PM), http://www.ft.com/intl/cms/s/2/e1f343ca-e281-11e3-89fd-00144feabdc0.html#axzz399nSmEKj; Thomas Piketty, Addendum: Response to FT, Thomas Piketty (May 28, 2014), http://piketty.pse.ens.fr/files/capital21c/en/Piketty2014TechnicalAppendixResponsetoFT.pdf; Felix Salmon, The Piketty Pessimist, Reuters (April 25, 2014), http://blogs.reuters.com/felix-salmon/2014/04/25/the-piketty-pessimist/.

    [44] Neil Irwin, Everything You Need to Know About Thomas Piketty vs. the Financial Times, N.Y. Times (May 30, 2014), http://www.nytimes.com/2014/05/31/upshot/everything-you-need-to-know-about-thomas-piketty-vs-the-financial-times.html.

    [45] Javier Blas, The Fragile Middle: Rising Inequality in Africa Weighs on New Consumers, Fin. Times (Apr. 18, 2014), http://www.ft.com/intl/cms/s/0/49812cde-c566-11e3-89a9-00144feabdc0.html#axzz399nSmEKj.

    [46] Jane Owen, Duke of Grafton Uses R&B to Restore Euston Hall’s Pleasure Grounds, Fin. Times (Apr. 18, 2014, 2:03 PM), http://www.ft.com/intl/cms/s/2/b49f6dd8-c3bc-11e3-870b-00144feabdc0.html#slide0.

    [47] Larry Elliott, Britain’s Five Richest Families Worth More Than Poorest 20%, Guardian, Mar. 16, 2014, http://www.theguardian.com/business/2014/mar/17/oxfam-report-scale-britain-growing-financial-inequality#101.

    [48] Piketty, supra note 4, at 570.

    [49] Margaret Kimberley, Freedom Rider: Miners Shot Down, Black Agenda Report (June 4, 2014), http://www.blackagendareport.com/content/freedom-rider-miners-shot-down.

    [50] Peter Maass, Crude World: The Violent Twilight of Oil (2009); Nicholas Shaxson, Poisoned Wells: The Dirty Politics of African Oil (2008).

    [51] Piketty, supra note 4, at 539.

    [52] Jad Mouawad, Oil Corruption in Equatorial Guinea, N.Y. Times Green Blog (July 9, 2009, 7:01 AM), http://green.blogs.nytimes.com/2009/07/09/oil-corruption-in-equatorial-guinea; Tina Aridas & Valentina Pasquali, Countries with the Highest GDP Average Growth, 2003–2013, Global Fin. (Mar. 7, 2013), http://www.gfmag.com/component/content/article/119-economic-data/12368-countries-highest-gdp-growth.html#axzz2W8zLMznX; CIA, The World Factbook 184 (2007).

    [53] Interview with President Teodoro Obiang of Equatorial Guinea, CNN’s Amanpour (CNN broadcast Oct. 5, 2012), transcript available at http://edition.cnn.com/TRANSCRIPTS/1210/05/ampr.01.html.

    [54] Peter Maass, A Touch of Crude, Mother Jones, Jan. 2005,http://www.motherjones.com/politics/2005/01/obiang-equatorial-guinea-oil-riggs.

    [55] Geraud Magrin & Geert van Vliet, The Use of Oil Revenues in Africa, in Governance of Oil in Africa: Unfinished Business 114 (Jacques Lesourne ed., 2009).

    [56] Interview with President Teodoro Obiang of Equatorial Guinea, supra note 53.

    [57] S. Minority Staff of Permanent Subcomm. on Investigations, Comm. on Gov’t Affairs, 108th Cong., Rep. on Money Laundering and Foreign Corruption: Enforcement and Effectiveness of the Patriot Act 39-40 (Subcomm. Print 2004).

    [58] Henry, supra note 32, at 6, 19-20.

    [59] Frank Pasquale, Closed Circuit Economics, New City Reader, Dec. 3, 2010, at 3, at http://neildonnelly.net/ncr/08_Business/NCR_Business_%5BF%5D_web.pdf.

    [60] Liu Xiaobo, No Enemies, No Hatred 102 (Perry Link, trans., 2012).

    [61] Jesse Drucker, Occupy Wall Street Stylists Pursue U.K. Tax Dodgers, Bloomberg News (June 11, 2013), http://www.businessweek.com/news/2013-06-11/occupy-wall-street-stylists-pursue-u-dot-k-dot-tax-dodgers.

    [62] Daniel J. Mitchell, Tax Havens Should Be Emulated, Not Prosecuted, CATO Inst. (Apr. 13, 2009, 12:36 PM), http://www.cato.org/blog/tax-havens-should-be-emulated-not-prosecuted.

    [63] Janet Novack, Pritzker Family Baggage: Tax Saving Offshore Trusts, Forbes (May 2, 2013, 8:20 PM), http://www.forbes.com/sites/janetnovack/2013/05/02/pritzker-family-baggage-tax-saving-offshore-trusts/.

    [64] Ronen Palan et al., Tax Havens: How Globalization Really Works (2013); see also Carolyn Nordstrom, Global Outlaws: Crime, Money, and Power in the Contemporary World (2007), and Loretta Napoleoni, Rogue Economics (2009).

    [65] Palan et al., supra note 64.

    [66] Shaxson, supra note 26, at 24.

    [67] Arianna Huffington, Third World America: How Our Politicians Are Abandoning the Middle Class and Betraying the American Dream (2011); Jeffrey A. Winters, Oligarchy (2011); Susan P. Crawford, Captive Audience: The Telecom Industry and Monopoly Power in the New Gilded Age (2014).

    [68] Benjamin Kunkel, Paupers and Richlings, 36 London Rev. Books 17 (2014) (reviewing Thomas Piketty, Capital in the Twenty-First Century).

    [69] Jeffrey A. Winters, Oligarchy and Democracy, Am. Interest, Sept. 28, 2011, http://www.the-american-interest.com/articles/2011/9/28/oligarchy-and-democracy/.

    [70] Id.

    [71]  James K. Galbraith, The Predator State: How Conservatives Abandoned the Free Market and Why Liberals Should, Too (2009).

    [72] Alex Duval Smith, South Africa Lonmin Mine Massacre Puts Nationalism Back on Agenda, Guardian (Aug. 29, 2012), http://www.theguardian.com/global-development/poverty-matters/2012/aug/29/south-africa-lonmin-mine-massacre-nationalisation; Charlie Campbell, Dying for Some New Clothes: Bangladesh’s Rana Plaza Tragedy, Time (Apr. 26, 2013), http://world.time.com/2013/04/26/dying-for-some-new-clothes-the-tragedy-of-rana-plaza/; David Stuckler, The Body Economic: Why Austerity Kills xiv (2013); Soutik Biswas, India’s Micro-Finance Suicide Epidemic, BBC (Dec. 16, 2010), http://www.bbc.com/news/world-south-asia-11997571; Michael P. O’Donnell, Further Erosion of Our Moral Compass: Failure to Expand Medicaid to Low-Income People in All States, 28 Am. J. Health Promotion iv (2013); Sam Dickman et al., Opting Out of Medicaid Expansion; The Health and Financial Impacts, Health Affairs Blog (Jan. 30, 2014), http://healthaffairs.org/blog/2014/01/30/opting-out-of-medicaid-expansion-the-health-and-financial-impacts/.

    [73] It would be instructive to compare political theorists’ varying models of Tocqueville’s predictive efforts with Piketty’s sweeping r > g. See, e.g., Roger Boesche, Why Could Tocqueville Predict So Well?, 11 Political Theory 79 (1983) (“Democracy in America endeavors to demonstrate how language, literature, the relations of masters and servants, the status of women, the family, property, politics, and so forth, must change and align themselves in a new, symbiotic configuration as a result of the historical thrust toward equality”); Jon Elster, Alexis de Tocqueville: The First Social Scientist (2012).

    [74] See, e.g., Frank Pasquale, Access to Medicine in an Era of Fractal Inequality, 19 Annals of Health Law 269 (2010); Frank Pasquale, The Cost of Conscience: Quantifying our Charitable Burden in an Era of Globalization, at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=584741 (2004); Frank Pasquale, Diagnosing Finance’s Failures: From Economic Idealism to Lawyerly Realism, 6 India L. J. 2 (2012).

    [75] Malcolm Harris interview of Alex Rivera, Border Control, New Inquiry (July 2, 2012), http://thenewinquiry.com/features/border-control/.

    [76] Trebor Scholz, Digital Labor (Palgrave, forthcoming, 2015); Frank Pasquale, Banana Republic.com, Jotwell (Jan. 14, 2011), http://cyber.jotwell.com/banana-republic-com/.

    [77] The Rise of Micro-Labor, On Point with Tom Ashbrook (NPR Apr. 3, 2012, 10:00 AM), http://onpoint.wbur.org/2012/04/03/micro-labor-websites.

    [78] Vacation Time, On Point with Tom Ashbrook (NPR June 22, 2012, 10:00 AM), http://onpoint.wbur.org/2012/06/22/vacation-time.

    [79] Peter Ryan, Aussies Must Compete with $2 a Day Workers: Rinehart, ABC News (Sept. 25, 2012, 2:56 PM), http://www.abc.net.au/news/2012-09-05/rinehart-says-aussie-workers-overpaid-unproductive/4243866.

    [80] Roberto Patricio Korzeniewicz & Timothy Patrick Moran, Unveiling Inequality, at xv (2012).

    [81] Ha Joon Chang, 23 Things They Don’t Tell You About Capitalism 98 (2012).

    [82] Jason Burke, Over 40% of Indian Children Are Malnourished, Report Finds, Guardian (Jan. 10, 2012), http://www.theguardian.com/world/2012/jan/10/child-malnutrition-india-national-shame.

    [83] Paul Farmer observes that “an understanding of poverty must be linked to efforts to end it.” Farmer, In the Company of the Poor, at http://www.pih.org/blog/in-the-company-of-the-poor.  The same could be said of extreme inequality.