b2o: boundary 2 online

Tag: capitalism

  • How We Think About Technology (Without Thinking About Politics)

    How We Think About Technology (Without Thinking About Politics)

    a review of N. Katherine Hayles, How We Think: Digital Media and Contemporary Technogenesis (Chicago, 2012)
    by R. Joshua Scannell

    ~

    In How We Think, N. Katherine Hayles addresses a number of increasingly urgent problems facing both the humanities in general and scholars of digital culture in particular. In keeping with the research interests she has explored at least since 2002’s Writing Machines (MIT Press), Hayles examines the intersection of digital technologies and humanities practice to argue that contemporary transformations in the orientation of the University (and elsewhere) are attributable to shifts that ubiquitous digital culture has engendered in embodied cognition. She calls this process of mutual evolution between the computer and the human technogenesis (a term that is most widely associated with the work of Bernard Stiegler, although Hayles’s theories often aim in a different direction from Stiegler’s). Hayles argues that technogenesis is the basis for the reorientation of the academy, including students, away from established humanistic practices like close reading. Put another way, not only have we become posthuman (as Hayles discusses in her landmark 1999 University of Chicago Press book, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics), but our brains have begun to evolve to think with computers specifically and digital media generally. Rather than offering a rearguard eulogy for the humanities that was, Hayles advocates for an opening of the humanities to digital dromology; she sees the Digital Humanities as a particularly fertile ground from which to reimagine the humanities generally.

    Hayles is an exceptional scholar, and while her theory of technogenesis is not particularly novel, she articulates it with a clarity and elegance that are welcome and useful in a field that is often cluttered with good ideas, unintelligibly argued. Her close engagement with work across a range of disciplines – from Hegelian philosophy of mind (Catherine Malabou) to theories of semiosis and new media (Lev Manovich) to experimental literary production – grounds an argument about the necessity of transmedial engagement in an effective praxis. Moreover, she ably shifts generic gears over the course of a relatively short manuscript, moving from quasi-ethnographic engagement with University administrators, to media archaeology à la Friedrich Kittler, to contemporary literary theory, with grace. Her critique of the humanities that is, therefore, doubles as a praxis: she is actually producing the discipline-flouting work that she calls on her colleagues to pursue.

    The debate about the death and/or future of the humanities is weather-worn, but Hayles’s theory of technogenesis as a platform for engaging in it is a welcome change. For Hayles, the technogenetic argument centers on temporality, and the multiple temporalities embedded in computer processing and human experience. She envisions this relation as cybernetic, in which computer and human are integrated as a system through the feedback loops of their coemergent temporalities. So, computers speed up human responses, which lag behind innovations, which prompt beta test cycles at quicker rates, which demand that humans behave affectively, nonconsciously. The recursive relationship between human duration and machine temporality effectively mutates both. Humanities professors might complain that their students cannot read “closely” like they used to, but for Hayles this reflects those disciplines’ failure to imagine methods in step with technological change. Instead of digital media making us “dumber” by reducing our attention spans, as Nicholas Carr argues, Hayles claims that the movement towards what she calls “hyper reading” is an ontological and biological fact of embodied cognition in the age of digital media. If “how we think” were posed as a question, the answer would be: bodily, quickly, cursorily, affectively, nonconsciously.

    Hayles argues that this doesn’t imply an eliminative teleology of human capacity, but rather an opportunity to think through novel, expansive interventions into this cyborg loop. We may be thinking (and feeling, and experiencing) differently than we used to, but this remains a fact of human existence. Digital media has shifted the ontics of our technogenetic reality, but it has not fundamentally altered its ontology. Morphological biology, in fact, entails ontological stability. To be human, and to think like one, is to be with machines, and to think with them. The kids, in other words, are all right.

    This sort of quasi-Derridean or Stieglerian Hegelianism is obviously not uncommon in media theory. As Hayles deploys it, this disposition provides a powerful framework for thinking through the relationship of humans and machines without ontological reductivism on either end. Moreover, she engages this theory in a resolutely material fashion, evading the enervating tendency of many theorists in the humanities to reduce actually existing material processes to metaphor and semiosis. Her engagement with Malabou’s work on brain plasticity is particularly useful here. Malabou has argued that the choice facing the intellectual in the age of contemporary capitalism is between plasticity and self-fashioning. Plasticity is a quintessential demand of contemporary capitalism, whereas self-fashioning opens up radical possibilities for intervention. The distinction between these two potentialities, however, is unclear – and therefore demands an ideological commitment to the latter. Hayles is right to point out that this dialectic insufficiently accounts for the myriad ways in which we are engaged with media, and are in fact produced, bodily, by it.

    But while Hayles’s critique is compelling, the responses she posits may be less so. Against what she sees as Malabou’s snide rejection of the potential of media, she argues:

    It is precisely because contemporary technogenesis posits a strong connection between ongoing dynamic adaptation of technics and humans that multiple points of intervention open up. These include making new media…adapting present media to subversive ends…using digital media to reenvision academic practices, environments and strategies…and crafting reflexive representations of media self fashionings…that call attention to their own status as media, in the process raising our awareness of both the possibilities and dangers of such self-fashioning. (83)

    With the exception of the ambiguous labor done by the word “subversive,” this reads like a catalog of demands made by administrators seeking to offload ever-greater numbers of students into MOOCs. This is unfortunately indicative of what is, throughout the book, a basic failure to engage with the political economics of “digital media and contemporary technogenesis.” Not every book must explicitly be political, and there is little more ponderous than the obligatory, token consideration of “the political” that so many media scholars feel compelled to make. And yet, this is a text that claims to explain “how” “we” “think” under post-industrial, cognitive capitalism, and so the lack of this engagement cannot help but show.

    Universities across the country are collapsing due to lack of funding; students are practically reduced to debt bondage to cope with the costs of a desperately near-compulsory higher education that fails to deliver on its economic promises; and “disruptive” deployment of digital media has conjured teratic corporate behemoths that all presume to “make the world a better place” on the backs of extraordinarily exploited workforces. There is no way for an account of the relationship between the human and the digital in this capitalist context not to be political. Given the general failure of the book to take these issues seriously, it is unsurprising that two of Hayles’s central suggestions for addressing the crisis in the humanities are 1) to use voluntary, hobbyist labor to do the intensive research that will serve as the data pool for digital humanities scholars and 2) to increasingly develop University partnerships with major digital conglomerates like Google.

    This reads like a cost-cutting administrator’s fever dream because, in the chapter in which Hayles promulgates novel (one might say “disruptive”) ideas for how best to move the humanities forward, she only speaks to administrators. There is no consideration of labor in this call for the reformation of the humanities. Given the enormous amount of writing that has been done on affective capitalism (Clough 2008), digital labor (Scholz 2012), emotional labor (Van Cleaf 2015), and so many other iterations of exploitation under digital capitalism, it boggles the mind a bit to see an embrace of the Mechanical Turk as a model for the future university.

    While it may be true that humanities education is in crisis – that it lacks funding, that its methods don’t connect with students, that it increasingly must justify its existence on economic grounds – it is unclear that any of these aspects of the crisis are attributable to a lack of engagement with the potentials of digital media, or the recognition that humans are evolving with our computers. All of these crises are just as plausibly attributable to what, among many others, Chandra Mohanty identified ten years ago as the emergence of the corporate university, and the concomitant transformation of the mission of the university from one of fostering democratic discourse to one of maximizing capital (Mohanty 2003). In other words, we might as easily attribute the crisis to the tightening command that contemporary capitalist institutions have over the logic of the university.

    Humanities departments are underfunded precisely because they cannot – almost by definition – justify their existence on monetary grounds. When students are not only acculturated but compelled by financial realities and debt to understand the university as a credentialing institution capable of guaranteeing certain baseline waged occupations, it is no surprise that they are uninterested in “close reading” of texts. Or, rather, it might be true that students’ “hyperreading” is a consequence of their cognitive evolution with machines. But it is also just as plausibly a consequence of the fact that students often are working full time jobs while taking on full time (or more) course loads. They do not have the time or inclination to read long, difficult texts closely. They do not have the time or inclination because of the consolidating paradigm around what labor, and particularly their labor, is worth. Why pay for a researcher when you can get a hobbyist to do it for free? Why pay for a humanities line when Google and Wikipedia can deliver everything an institution might need to know?

    In a political economy in which Amazon’s reduction of human employees to algorithmically-managed meat wagons is increasingly diagrammatic and “innovative” in industries from service to criminal justice to education, the proposals Hayles is making to ensure the future of the university seem more fifth column than emancipatory.

    This stance also evacuates much-needed context from what are otherwise thoroughly interesting, well-crafted arguments. This is particularly true of How We Think’s engagement with Lev Manovich’s claims regarding narrative and database. Speaking reductively, in The Language of New Media (MIT Press, 2001), Manovich argued that there are two major communicative forms: narrative and database. Narrative, in his telling, is more or less linear, and dependent on human agency to be sensible. Novels and films, despite many modernist efforts to subvert this, tend toward narrative. The database, as opposed to the narrative, arranges information according to patterns, and does not depend on a diachronic point-to-point communicative flow to be intelligible. Rather, the database exists in multiple temporalities, with the accumulation of data for rhizomatic recall of seemingly unrelated information producing improbable patterns of knowledge production. Historically, he argues, narrative has dominated. But with the increasing digitization of cultural output, the database will more and more replace narrative.

    Manovich’s dichotomy of media has been both influential and roundly criticized (not least by Manovich himself in Software Takes Command, Bloomsbury 2013). Hayles convincingly takes it to task for being reductive and instituting a teleology of cultural forms that isn’t borne out by cultural practice. Narrative, obviously, hasn’t gone anywhere. Hayles extends this critique by considering the distinctive ways space and time are mobilized by database and narrative formations. Databases, she argues, depend on interoperability between different software platforms that need to access the stored information. In the case of geographic information systems and global positioning systems, this interoperability depends on some sort of universal standard against which all information can be measured. Thus, Cartesian space and time are inevitably inserted into database logics, depriving them of the capacity for liveliness. That is to say that the need to standardize the units that measure space and time in machine-readable databases imposes a conceptual grid on the world that is creatively limiting. Narrative, on the other hand, does not depend on interoperability, and therefore does not have an absolute referent against which it must make itself intelligible. Given this, it is capable of complex and variegated temporalities not available to databases. Databases, she concludes, can only operate within spatial parameters, while narrative can represent time in different, more creative ways.
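
    To make the interoperability claim concrete, here is a minimal, hypothetical sketch (my illustration, not drawn from Hayles’s text or any system she discusses) of what a “universal standard against which all information can be measured” looks like for geographic data: two systems can exchange a location record only because both express position against the same fixed referent, here WGS84 latitude and longitude.

        # Hypothetical sketch of a shared spatial referent enabling interoperability.
        from dataclasses import dataclass

        @dataclass
        class Location:
            lat: float  # degrees north, WGS84
            lon: float  # degrees east, WGS84

        def export_record(name: str, loc: Location) -> dict:
            """System A serializes a point against the shared referent."""
            return {"name": name, "lat": loc.lat, "lon": loc.lon, "crs": "WGS84"}

        def import_record(record: dict) -> Location:
            """System B accepts the record only if the referent is the one it expects."""
            assert record["crs"] == "WGS84", "no shared referent, no interoperability"
            return Location(record["lat"], record["lon"])

        print(import_record(export_record("server farm", Location(40.71, -74.01))))

    The point of the toy example is only that the shared grid is a precondition of exchange, which is what Hayles reads as the insertion of Cartesian space into database logics.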

    As an expansion and corrective to Manovich, this argument is compelling. Displacing his teleology and infusing it with a critique of the spatio-temporal work of database technologies and their organization of cultural knowledge is crucial. Hayles bases her claim on a detailed and fascinating comparison between the coding requirements of relational databanks and object-oriented databanks. But, somewhat surprisingly, she takes these different programming language models and metonymizes them as social realities. Temporality in the construction of objects transmutes into temporality as a philosophical category. It is unclear how this leap holds without an attendant sociopolitical critique, for it is impossible to talk about the cultural logic of computation without talking about the social context in which that computation emerges. In other words, it is absolutely true that the “spatializing” techniques of coders (like clustering) render data points as spatial within the context of the databank. But it is not an immediately logical leap to then claim that databases as a cultural form are therefore spatial and not temporal.
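
    To illustrate what “spatializing” means at the level of code, here is a hedged, self-contained example (my illustration, not Hayles’s): a toy k-means routine treats each database record as a point in a feature space and groups records by distance. The “space” exists inside the computation rather than in the world the records describe, which is precisely why the leap from coding practice to cultural form needs further argument.

        # Hypothetical illustration: clustering treats records as points in a
        # feature space and groups them by Euclidean distance.
        import math
        import random

        def kmeans(points, k, iters=20):
            """A tiny k-means over 2-D feature vectors."""
            centers = random.sample(points, k)
            for _ in range(iters):
                groups = [[] for _ in range(k)]
                for p in points:
                    nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
                    groups[nearest].append(p)
                centers = [
                    tuple(sum(dim) / len(g) for dim in zip(*g)) if g else centers[i]
                    for i, g in enumerate(groups)
                ]
            return centers, groups

        # Each record (say, a library patron) becomes a point: (visits, loans).
        records = [(1, 2), (1.5, 1.8), (5, 8), (8, 8), (1, 0.6), (9, 11)]
        centers, groups = kmeans(records, k=2)
        print(centers)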

    Further, in the context of contemporary data science, Hayles’s claims about interoperability are at least somewhat puzzling. Interoperability and standardized referents might be a theoretical necessity for databases to be useful, but the ever-inflating markets around “big data,” data analytics, insights, overcoming data siloing, edge computing, etc., demonstrate quite categorically that interoperability-in-general is not only non-existent, but is productively non-existent. That is to say, there are enormous industries that have developed precisely around efforts to synthesize information generated and stored across non-interoperable datasets. Moreover, data analytics companies provide insights almost entirely based on their capacity to track improbable data patterns and resonances across unlikely temporalities.

    Far from a Cartesian world of absolute space and time, contemporary data science is a quite posthuman enterprise in committing machine learning to stretch, bend and strobe space and time in order to generate the possibility of bankable information. This is both theoretically true in the sense of setting algorithms to work sorting, sifting and analyzing truly incomprehensible amounts of data and materially true in the sense of the massive amount of capital and labor that is invested in building, powering, cooling, staffing and securing data centers. Moreover, the amount of data “in the cloud” has become so massive that analytics companies have quite literally reterritorialized information – particularly firms specializing in high-frequency trading, which practice “co-location,” locating data centers geographically closer to the sites from which they will be accessed in order to maximize processing speed.
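
    A rough back-of-the-envelope calculation (my figures, not Scannell’s) shows why this reterritorialization pays: signals in optical fiber travel at roughly two-thirds the speed of light, so every kilometer of distance adds measurable latency, and shaving kilometers means shaving the milliseconds that high-frequency strategies trade on.

        # Hedged back-of-the-envelope sketch: distance translated into round-trip latency.
        C_FIBER_KM_PER_S = 200_000  # approx. signal speed in optical fiber

        def round_trip_ms(distance_km: float) -> float:
            """Round-trip signal time over a fiber link, in milliseconds."""
            return 2 * distance_km / C_FIBER_KM_PER_S * 1000

        print(round_trip_ms(1000))  # ~10 ms across 1,000 km
        print(round_trip_ms(1))     # ~0.01 ms when co-located at the trading venue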

    Data science functions much like financial derivatives do (Martin 2015). Value in the present is hedged against the probable future spatiotemporal organization of software and material infrastructures capable of rendering a possibly profitable bundling of information in the immediate future. That may not be narrative, but it is certainly temporal. It is a temporality spurred by the queer fluxes of capital.

    All of which circles back to the title of the book. Hayles sets out to explain How We Think. A scholar with such an impeccable track record for pathbreaking analyses of the relationship of the human to technology is setting a high bar for herself with such a goal. In an era in which (in no small part due to her work) it is increasingly unclear who we are, what thinking is or how it happens, it may be an impossible bar to meet. Hayles does an admirable job of trying to inject new paradigms into a narrow academic debate about the future of the humanities. Ultimately, however, there is more resting on the question than the book can account for, not least the livelihoods and futures of her current and future colleagues.
    _____

    R Joshua Scannell is a PhD candidate in sociology at the CUNY Graduate Center. His current research looks at the political economic relations between predictive policing programs and urban informatics systems in New York City. He is the author of Cities: Unauthorized Resistance and Uncertain Sovereignty in the Urban World (Paradigm/Routledge, 2012).

    _____

    Patricia T. Clough. 2008. “The Affective Turn.” Theory, Culture & Society 25(1): 1-22.

    N. Katherine Hayles. 2002. Writing Machines. Cambridge: MIT Press.

    N. Katherine Hayles. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.

    Catherine Malabou. 2008. What Should We Do with Our Brain? New York: Fordham University Press.

    Lev Manovich. 2001. The Language of New Media. Cambridge: MIT Press.

    Lev Manovich. 2013. Software Takes Command. London: Bloomsbury.

    Randy Martin. 2015. Knowledge LTD: Toward a Social Logic of the Derivative. Philadelphia: Temple University Press.

    Chandra Mohanty. 2003. Feminism Without Borders: Decolonizing Theory, Practicing Solidarity. Durham: Duke University Press.

    Trebor Scholz, ed. 2012. Digital Labor: The Internet as Playground and Factory. New York: Routledge.

    Bernard Stiegler. 1998. Technics and Time, 1: The Fault of Epimetheus. Stanford: Stanford University Press.

    Kara Van Cleaf. 2015. “Of Woman Born to Mommy Blogged: The Journey from the Personal as Political to the Personal as Commodity.” Women’s Studies Quarterly 43(3/4): 247-265.


  • The Ground Beneath the Screens

    The Ground Beneath the Screens

    a review of Jussi Parikka, A Geology of Media (University of Minnesota Press, 2015) and The Anthrobscene (University of Minnesota Press, 2015)
    by Zachary Loeb

    ~

    Despite the aura of ethereality that clings to the Internet, today’s technologies have not shed their material aspects. Digging into the materiality of such devices does much to trouble the adoring declarations of “The Internet Is the Answer.” What is unearthed by digging is the ecological and human destruction involved in the creation of the devices on which the Internet depends—a destruction that Jussi Parikka considers an obscenity at the core of contemporary media.

    Parikka’s tale begins deep below the Earth’s surface, in deposits of a host of different minerals that are integral to the variety of devices without which you could not be reading these words on a screen. This story encompasses the labor conditions in which these minerals are extracted and eventually turned into finished devices; it tells of satellites, undersea cables, and massive server farms; and it includes a dark premonition of the return to the Earth which will occur following the death (possibly a premature death due to planned obsolescence) of the screen at which you are currently looking.

    In a connected duo of new books, The Anthrobscene (referenced below as A) and A Geology of Media (referenced below as GM), media scholar Parikka wrestles with the materiality of the digital. Parikka examines the pathways by which planetary elements become technology, while considering the transformations entailed in the anthropocene, and artistic attempts to render all of this understandable. Drawing upon thinkers ranging from Lewis Mumford to Donna Haraway and from the Situationists to Siegfried Zielinski, Parikka constructs a way of approaching media that emphasizes that it is born of the Earth, borne upon the Earth, and fated eventually to return to its place of origin. Parikka’s work demands that materiality be taken seriously not only by those who study media but also by all of those who interact with media – it is a demand that the anthropocene must be made visible.

    Time is an important character in both The Anthrobscene and A Geology of Media, for it provides the context in which one can understand the long history of the planet as well as the scale of the years required for media to truly decompose. Parikka argues that materiality needs to be considered beyond a simple focus upon machines and infrastructure, and should instead take into account “the idea of the earth, light, air, and time as media” (GM 3). Geology is harnessed as a method of ripping open the black box of technology and analyzing what the components inside are made of – copper, lithium, coltan, and so forth. The engagement with geological materiality is key for understanding the environmental implications of media, both in terms of the technologies currently in circulation and in terms of predicting the devices that will emerge in the coming years. Too often the planet is given short shrift in considerations of the technical, but “it is the earth that provides for media and enables it”; it is “the affordances of its geophysical reality that make technical media happen” (GM 13). Drawing upon Mumford’s writings about “paleotechnics” and “neotechnics” (concepts which Mumford had himself adapted from the work of Patrick Geddes), Parikka emphasizes that both the age of coal (paleotechnics) and the age of electricity (neotechnics) are “grounded in the wider mobilization of the materiality of the earth” (GM 15). Indeed, electric power is often still quite reliant upon the extraction and burning of coal.

    More than just a pithy neologism, “anthrobscene” is the term Parikka introduces to highlight the ecological violence inherent in “the massive changes human practices, technologies, and existence have brought across the ecological board” (GM 16-17), shifts that often go under the more morally vague title of “the anthropocene.” For Parikka, “the addition of the obscene is self-explanatory when one starts to consider the unsustainable, politically dubious, and ethically suspicious practices that maintain technological culture and its corporate networks” (A 6). Like a curse word bleeped out by television censors, much of the obscenity of the anthropocene goes unheard even as governments and corporations compete with ever greater élan for the privilege of pillaging portions of the planet – Parikka seeks to reinscribe the obscenity.

    The world of high tech media still relies upon the extraction of metals from the earth and, as Parikka shows, a significant portion of the minerals mined today are destined to become part of media technologies. Therefore, in contemplating geology and media it can be fruitful to approach media using Zielinski’s notion of “deep time” wherein “durations become a theoretical strategy of resistance against the linear progress myths that impose a limited context for understanding technological change” (GM 37, A 23). Deploying the notion of “deep time” demonstrates the ways in which a “metallic materiality links the earth to the media technological” while also emphasizing the temporality “linked to the nonhuman earth times of decay and renewal” (GM 44, A 30). Thus, the concept of “deep time” can be particularly useful in thinking through the nonhuman scales of time involved in media, such as the centuries required for e-waste to decompose.

    Whereas “deep time” provides insight into media’s temporal quality, “psychogeophysics” presents a method for thinking through the spatial. “Psychogeophysics” is a variation of the Situationist idea of “the psychogeographical,” but where the Situationists focused upon the exploration of the urban environment, “psychogeophysics” (which appeared as a concept in a manifesto in Mute magazine) moves beyond the urban sphere to contemplate the oblate spheroid that is the planet. What the “geophysical twist brings is a stronger nonhuman element that is nonetheless aware of the current forms of exploitation but takes a strategic point of view on the nonorganic too” (GM 64). Whereas an emphasis on the urban winds up privileging the world built by humans, the shift brought by “psychogeophysics” allows people to bear witness to “a cartography of architecture of the technological that is embedded in the geophysical” (GM 79).

    The material aspects of media technology include many sites where visibility has broken down. In many cases this is suggestive of an almost willful disregard (ignoring exploitative mining and labor conditions as well as the harm caused by e-waste), but in still other cases it is reflective of the minute scales that materiality can assume (such as the metallic dust that dangerously fills workers’ lungs as they polish iPad cases). The devices that are surrounded by an optimistic aura in some nations thus obtain this sheen at the literal expense of others: “the residue of the utopian promise is registered in the soft tissue of a globally distributed cheap labor force” (GM 89). Indeed, those who fawn with religious adoration over the newest high-tech gizmo may simply be demonstrating that nobody they know personally will be sickened in assembling it, or be poisoned by it when it becomes e-waste. An emphasis on geology and materiality, as Parikka demonstrates, shows that the era of digital capitalism contains many echoes of the exploitation characteristic of bygone periods – appropriation of resources, despoiling of the environment, mistreatment of workers, exportation of waste: these tragedies have never ceased.

    Digital media is excellent at creating a futuristic veneer of “smart” devices and immaterial sounding aspects such as “the cloud,” and yet a material analysis demonstrates the validity of the old adage “the more things change the more they stay the same.” Despite efforts to “green” digital technology, “computer culture never really left the fossil (fuel) age anyway” (GM 111). But beyond relying on fossil fuels for energy, these devices can themselves be considered as fossils-to-be as they go to rest in dumps wherein they slowly degrade, so that “we can now ask what sort of fossil layer is defined by the technical media condition…our future fossils layers are piling up slowly but steadily as an emblem of an apocalypse in slow motion” (GM 119). We may not be surrounded by dinosaurs and trilobites, but the digital media that we encounter are tomorrow’s fossils – which may be quite mysterious and confounding to those who, thousands of years hence, dig them up. Businesses that make and sell digital media thrive on a sense of time that consists of planned obsolescence, regular updates, and new products, but to take responsibility for the materiality of these devices requires that “notions of temporality must escape any human-obsessed vocabulary and enter into a closer proximity with the fossil” (GM 135). It requires a woebegone recognition that our technological detritus may be present on the planet long after humanity has vanished.

    The living dead that lurch alongside humanity today are not the zombies of popular entertainment, but the undead media devices that provide the screens for consuming such distractions. Already fossils, bound to be disposed of long before they stop working, it is vital “to be able to remember that media never dies, but remains as toxic residue,” and thus “we should be able to repurpose and reuse solutions in new ways, as circuit bending and hardware hacking practices imply” (A 41). We live with these zombies, we live among them, and even when we attempt to pack them off to unseen graveyards they survive under the surface. A Geology of Media is thus “a call for further materialization of media not only as media but as that bit which it consists of: the list of the geophysical elements that give us digital culture” (GM 139).

    It is not simply that “machines themselves contain a planet” (GM 139) but that the very materiality of the planet is becoming riddled with a layer of fossilized machines.

    * * *

    The image of the world conjured up by Parikka in A Geology of Media and The Anthrobscene is far from comforting – after all, Parikka’s preference for talking about “the anthrobscene” does much to set a funereal tone. Nevertheless, these two books by Parikka do much to demonstrate that “obscene” may be a very fair word to use when discussing today’s digital media. By emphasizing the materiality of media, Parikka avoids the thorny discussions of the benefits and shortfalls of various platforms to instead pose a more challenging ethical puzzle: even if a given social media platform can be used for ethical ends, to what extent is this irrevocably tainted by the materiality of the device used to access these platforms? It is a dark assessment which Parikka describes without much in the way of optimistic varnish, as he describes the anthropocene (on the first page of The Anthrobscene) as “a concept that also marks the various violations of environmental and human life in corporate practices and technological culture that are ensuring that there won’t be much of humans in the future scene of life” (A 1).

    And yet both books manage to avoid the pitfall of simply coming across as wallowing in doom. Parikka is not pining for a primal pastoral fantasy, but is instead seeking to provide new theoretical tools with which his readers can attempt to think through the materiality of media. Here, Parikka’s emphasis on the way that digital technology is still heavily reliant upon mining and fossil fuels acts as an important counter to gee-whiz futurism. Similarly, Parikka’s mobilization of the notion of “deep time” and fossils acts as an important contribution to thinking through the lifecycles of digital media. Dwelling on the undeath of a smartphone slowly decaying in an e-waste dump over centuries is less about evoking a fearful horror than it is about making clear the horribleness of technological waste. The discussion of “deep time” seems like it can function as a sort of geological brake on accelerationist thinking, by emphasizing that no matter how fast humans go, the planet has its own sense of temporality. Throughout these two slim books, Parikka draws upon a variety of cultural works to strengthen his argument: ranging from the earth-pillaging mad scientist of Arthur Conan Doyle’s Professor Challenger, to the Coal Fired Computers of Yokokoji-Harwood (YoHa), to Molleindustria’s smartphone game “Phone Story,” which plays out on a smartphone’s screen the tangles of extraction, assembly, and disposal that are as much a part of the smartphone’s story as whatever uses to which the final device is eventually put. Cultural and artistic works, when they intend to, may be able to draw attention to the obscenity of the anthropocene.

    The Anthrobscene and A Geology of Media are complementary texts, but one need not read both in order to understand the other. As part of the University of Minnesota Press’s “Forerunners” series, The Anthrobscene is a small book (in terms of page count and physical size) which moves at a brisk pace; in some ways it functions as a sort of greatest-hits version of A Geology of Media – containing many of the essential high points, but lacking some of the elements that ultimately make A Geology of Media a satisfying and challenging book. Yet the two books work wonderfully together, as The Anthrobscene acts as a sort of primer – that a reader of both books will detect many similarities between the two is not a major detraction, for these books tell a story that often goes unheard today.

    Those looking for neat solutions to the anthropocene’s quagmire will not find them in either of these books – and as these texts are primarily aimed at an academic audience this is not particularly surprising. These books are not caught up in offering hope – be it false or genuine. At the close of A Geology of Media, when Parikka discusses the need “to repurpose and reuse solutions in new ways, as circuit bending and hardware hacking practices imply” (A 41), this does not appear as a perfect panacea but as a way of possibly adjusting. Parikka is correct in emphasizing the ways in which the extractive regimes that characterized the paleotechnic continue on in the neotechnic era, and this is a point which Mumford himself made regarding the way that the various “technic” eras do not represent clean breaks from each other. As Mumford put it, “the new machines followed, not their own pattern, but the pattern laid down by previous economic and technical structures” (Mumford 2010, 236) – in other words, just as Parikka explains, the paleotechnic survives well into the neotechnic. The reason this is worth mentioning is not to challenge Parikka, but to highlight that the “neotechnic” is not meant as a characterization of a utopian technical epoch that has parted ways with the exploitation that had characterized the preceding period. For Mumford the need was to move beyond the anthropocentrism of the neotechnic period and towards what he called (in The Culture of Cities) the “biotechnic,” a period wherein “technology itself will be oriented toward the culture of life” (Mumford 1938, 495). Granted, as Mumford’s later work and these books by Parikka make clear, instead of arriving at the “biotechnic” what we might get is the anthrobscene. And reading these books by Parikka makes it clear that one could not characterize the anthrobscene as being “oriented toward the culture of life” – indeed, it may be exactly the opposite. Or, to stick with Mumford a bit longer, it may be that the anthrobscene is the result of the triumph of “authoritarian technics” over “democratic” ones. Nevertheless, the true dirge-like element of Parikka’s books is that they raise the possibility that it may well be too late to shift paths – that the neotechnic was perhaps just a coat of fresh paint applied to hide the rusting edifice of paleotechnics.

    A Geology of Media and The Anthrobscene are conceptual toolkits: they provide the reader with the drills and shovels they need to dig into the materiality of digital media. But what these books make clear is that, along with the pickaxe and the archeologist’s brush, anyone who is going to dig into the materiality of media also needs a gas mask to endure the noxious fumes. Ultimately, what Parikka shows is that the Situationist-inspired graffiti of May 1968, “beneath the streets – the beach,” needs to be rewritten in the anthrobscene.

    Perhaps a fitting variation for today would read: “beneath the streets – the graveyard.”
    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck. He is a frequent contributor to The b2 Review Digital Studies section.

    _____

    Works Cited

    Mumford, Lewis. 2010. Technics and Civilization. Chicago: University of Chicago Press.

    Mumford, Lewis. 1938. The Culture of Cities. New York: Harcourt, Brace and Company.

  • The Social Construction of Acceleration

    The Social Construction of Acceleration

    a review of Judy Wajcman, Pressed for Time: The Acceleration of Life in Digital Capitalism (Chicago, 2014)
    by Zachary Loeb

    ~

    Patience seems anachronistic in an age of high speed downloads, same day deliveries, and on-demand assistants who can be summoned by tapping a button. Though some waiting may still occur, the amount of time spent in anticipation seems to be constantly diminishing, and every day a new bevy of upgrades and devices promises that tomorrow things will be even faster. Such speed is comforting for those who feel that they do not have a moment to waste. Patience becomes a luxury for which we do not have time, even as the technologies that claimed they would free us wind up weighing us down.

    Yet it is far too simplistic to heap the blame for this situation on technology, as such. True, contemporary technologies may be prominent characters in the drama in which we are embroiled, but as Judy Wajcman argues in her book Pressed for Time, we should not approach technology as though it exists separately from the social, economic, and political factors that shape contemporary society. Indeed, to understand technology today it is necessary to recognize that “temporal demands are not inherent to technology. They are built into our devices by all-too-human schemes and desires” (3). In Wajcman’s view, technology is not the true culprit, nor is it an out-of-control menace. It is instead a convenient distraction from the real forces that make it seem as though there is never enough time.

    Wajcman sets a course that refuses to uncritically celebrate technology, whilst simultaneously disavowing the damning of modern machines. She prefers to draw upon “a social shaping approach to technology” (4) which emphasizes that the shape technology takes in a society is influenced by many factors. If current technologies leave us feeling exhausted, overwhelmed, and unsatisfied it is to our society we must look for causes and solutions – not to the machine.

    The vast array of Internet-connected devices gives rise to a sense that everything is happening faster, that things are accelerating, and that, compared to previous epochs, things are changing faster than ever. This is the kind of seemingly uncontroversial belief that Wajcman seeks to counter. While there is a present predilection for speed, the ideas of speed and acceleration remain murky, which may not be purely accidental when one considers “the extent to which the agenda for discussing the future of technology is set by the promoters of new technological products” (14). Rapid technological and societal shifts may herald the emergence of an “acceleration society” wherein speed increases even as individuals experience a decrease of available time. Though some would describe today’s world (at least in affluent nations) as being a synecdoche of the “acceleration society,” it would be a mistake to believe this to be a wholly new invention.

    Nevertheless the instantaneous potential of information technologies may seem to signal a break with the past – as the sort of “timeless time” which “emerged in financial markets…is spreading to every realm” (19). Some may revel in this speed even as others put out somber calls for a slow-down, but either approach risks being reductionist. Wajcman pushes back against the technological determinism lurking in the thoughts of those who revel and those who rebel, noting “that all technologies are inherently social in that they are designed, produced, used and governed by people” (27).

    Both today and yesterday “we live our lives surrounded by things, but we tend to think about only some of them as being technologies” (29). The impacts of given technologies depend upon the ways in which they are actually used, and Wajcman emphasizes that people often have a great deal of freedom in altering “the meanings and deployment of technologies” (33).

    Over time certain technologies recede into the background, but the history of technology is a litany of devices that made profound impacts in determining experiences of time and speed. After all, the clock is itself a piece of technology, and thus we assess our very lack of time by looking to a device designed to measure its passage. The measurement of time was a technique used to standardize – and often exploit – labor, and the ability to carefully keep track of time gave rise to an ideology in which time came to be interchangeable with money. As a result speed came to be associated with profit even as slowness became associated with sloth. The speed of change became tied up in notions of improvement and progress, and thus “the speed of change becomes a self-evident good” (44). The speed promised by inventions is therefore seen as part of the march of progress, though a certain irony emerges as widespread speed leads to new forms of slowness – the mass diffusion of cars leading to traffic jams – and what was fast yesterday is often deemed slow today. As Wajcman shows, the experience of time compression, tied to “our valorization of a busy lifestyle, as well as our profound ambivalence toward it” (58), has roots that go far back.

    Time takes on an odd quality – to have it is a luxury, even as constant busyness becomes a sign of status. A certain dissonance emerges wherein individuals feel that they have less time even as studies show that people are not necessarily working more hours. For Wajcman much of the explanation is as much about “real increases in the combined work commitments of family members as it is about changes in the working time of individuals,” with such “time poverty” being experienced particularly acutely “among working mothers, who juggle work, family, and leisure” (66). To understand time pressure it is essential to consider the degree to which people are free to use their time as they see fit.

    Societal pressures on the time of men and women differ, and though the hours spent doing paid labor may not have shifted dramatically, the hours parents (particularly mothers) spend performing unpaid labor remains high. Furthermore, “despite dramatic improvements in domestic technology, the amount of time spent on household tasks has not actually shown any corresponding dramatic decline” (68). Though household responsibilities can be shared equitably between partners, much of the onus still falls on women. As a busy event-filled life becomes a marker of status for adults so too may they attempt to bestow such busyness on the whole family, but busy parents needing to chaperone and supervise busy children only creates a further crunch on time. As Wajcman notes “perhaps we should be giving as much attention to the intensification of parenting as to the intensification of work” (82).

    Yet the story of domestic labor – unpaid and unrecognized – is a particularly strong example of a space wherein the promises of time-saving technological fixes have fallen short. Instead, “devices allegedly designed to save labor time fail to do so, and in some cases actually increase the time needed for the task” (111). The variety of technologies marketed for the household are often advertised as time savers, yet altering household work is not the same as eliminating it – even as certain tasks continually demand a significant investment of real time.

    Many of the technologies that have become mainstays of modern households – such as the microwave – were not originally marketed as such, and thus the household represents an important example of the way in which technologies “are both socially constructed and society shaping” (122). Of further significance is the way in which changing labor relations have also led to shifts in the sphere of domestic work, wherein those who can afford it are able to buy themselves time through purchasing food from restaurants or by employing others for tasks such as child care and cleaning. Though the image of “the home of the future,” courtesy of the Internet of Things, may promise an automated abode, Wajcman highlights that those making and selling such technologies replicate society’s dominant blind spot for the true tasks of domestic labor. Indeed, the Internet of Things tends to “celebrate technology and its transformative power at the expense of home as a lived practice” (130). Thus, domestic technologies present an important example of the way in which those designing and marketing technologies instill their own biases into the devices they build.

    Beyond the household, information communications technologies (ICTs) allow people to carry their office in their pocket as e-mails and messages ping them long after the official work day has ended. However, the idea “of the technologically tethered worker with no control over their own time…fails to convey the complex entanglement of contemporary work practices, working time, and the materiality of technical artifacts” (88). Thus, the problem is not that an individual can receive e-mail when they are off the clock, the problem is the employer’s expectation that this worker should be responding to work related e-mails while off the clock – the issue is not technological, it is societal. Furthermore, Wajcman argues, communications technologies permit workers to better judge whether or not something is particularly time sensitive. Though technology has often been used by employers to control employees, approaching communications technologies from an STS position “casts doubt on the determinist view that ICTs, per se, are driving the intensification of work” (107). Indeed some workers may turn to such devices to help manage this intensification.

    Technologies offer many more potentialities than those that are presented in advertisements. Though the ubiquity of communications devices may “mean that more and more of our social relationships are machine-mediated” (138), the focus should be as much on the word “social” as on the word “machine.” Much has been written about the way that individuals use modern technologies and the ways in which they can give rise to families wherein parents and children alike are permanently staring at a screen, but Wajcman argues that these technologies should “be regarded as another node in the flows of affect that create and bind intimacy” (150). It is not that these devices are truly stealing people’s time, but that they are changing the ways in which people spend the time they have – allowing harried individuals to create new forms of being together which “needs to be understood as adding a dimension to temporal experience” (158) which blurs boundaries between work and leisure.

    The notion that the pace of life has been accelerated by technological change is a belief that often goes unchallenged; however, Wajcman emphasizes that “major shifts in the nature of work, the composition of families, ideas about parenting, and patterns of consumption have all contributed to our sense that the world is moving faster than hitherto” (164). The experience of acceleration can be intoxicating, and the belief in a culture of improvement wrought by technological change may be a rare glimmer of positivity amidst gloomy news reports. However, “rapid technological change can actually be conservative, maintaining or solidifying existing social arrangements” (180). At moments when so much emphasis is placed upon the speed of technologically sired change, the first step may not be to slow down but to insist that people consider the ways in which these machines have been socially constructed and how they have shaped society – and if we fear that we are speeding towards a catastrophe, then it becomes necessary to consider how they can be socially constructed to avoid such a collision.

    * * *

    It is common, amongst current books assessing the societal impacts of technology, for authors to present themselves as critical while simultaneously wanting to hold to an unshakable faith in technology. This often leaves such texts in an odd position: they want to advance a radical critique but their argument remains loyal to a conservative ideology. With Pressed for Time, Judy Wajcman has demonstrated how to successfully achieve the balance between technological optimism and pessimism. It is a great feat, and Pressed for Time executes this task skillfully. When Wajcman writes, towards the end of the book, that she wants “to embrace the emancipatory potential of technoscience to create new meanings and new worlds while at the same time being its chief critic” (164), she is not writing of a goal but is affirming what she has achieved with Pressed for Time (a similar success can be attributed to Wajcman’s earlier books TechnoFeminism (Polity, 2004) and the essential Feminism Confronts Technology (Penn State, 1991)).

    By holding to the framework of the social shaping of technology, Pressed for Time provides an investigation of time and speed that is grounded in a nuanced understanding of technology. It would have been easy for Wajcman to focus strictly on contemporary ICTs, but what her argument makes clear is that to do so would have been to ignore the factors that make contemporary technology understandable. A great success of Pressed for Time is the way in which Wajcman shows that the current sensation of being pressed for time is not a modern invention. Instead, the emphasis on speed as a hallmark of progress and improvement is a belief that has been at work for decades. Wajcman avoids the stumbling block of technological determinism and carefully points out that falling for such beliefs leads to critiques being directed incorrectly. Written in a thoroughly engaging style, Pressed for Time is an academic book that can serve as an excellent introduction to the terminology and style of STS scholarship.

    Throughout Pressed for Time, Wajcman repeatedly notes the ways in which the meanings of technologies transcend what a device may have been narrowly intended to do. For Wajcman people’s agency is paramount, as people have the ability to construct meaning for technology even as such devices wind up shaping society. Yet an area in which one could push back against Wajcman’s views would be to ask if communications technologies have shaped society to such an extent that it is becoming increasingly difficult to construct new meanings for them. Perhaps the “slow movement,” which Wajcman describes as unrealistic for “we cannot in fact choose between fast and slow, technology and nature” (176), is best perceived as a manifestation of the sense that much of technology’s “emancipatory potential” has gone awry – that some technologies offer little in the way of liberating potential. After all, the constantly connected individual may always feel rushed – but they may also feel as though they are under constant surveillance, that their every online move is carefully tracked, and that, through the rise of wearable technology and the Internet of Things, all of their actions will soon be easily tracked. Wajcman makes an excellent and important point by noting that humans have always lived surrounded by technologies – but the technologies that surrounded an individual in 1952 were not sending every bit of minutiae to large corporations (and governments). Hanging in the background of the discussion of speed are also the questions of planned obsolescence and the mountains of toxic technological trash that wind up flowing from affluent nations to developing ones. The technological speed experienced in one country is the “slow violence” experienced in another. To make these critiques, though, is in no way to seriously diminish Wajcman’s argument, especially as many of these concerns simply speak to the economic and political forces that have shaped today’s technology.

    Pressed for Time is a Rosetta stone for decoding life in high speed, high tech societies. Wajcman deftly demonstrates that the problems facing technologically-addled individuals today are not as new as they appear, and that the solutions on offer are similarly not as wildly inventive as they may seem. Through analyzing studies and history, Wajcman shows the impacts of technologies, while making clear why it is still imperative to approach technology with a consideration of class and gender in mind. With Pressed for Time, Wajcman champions the position that the social shaping of technology framework still provides a robust way of understanding technology. As Wajcman makes clear, the way technologies “are interpreted and used depends on the tapestry of social relations woven by age, gender, race, class, and other axes of inequality” (183).

    It is an extremely timely argument.
    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck and is a frequent contributor to The b2 Review Digital Studies section.


  • Curatorialism as New Left Politics

    Curatorialism as New Left Politics

    by David Berry

    ~
    It is often argued that the left is increasingly unable to speak a convincing narrative in the digital age. Caught between the neoliberal language of contemporary capitalism, with its political articulations linked to economic freedom and choice, and a welfare statism that appears counter-intuitively unappealing to modern political voters and supporters, there is often claimed to be a lacuna in the political imaginary of the left. Here, I want to explore a possible new articulation for a left politics that moves beyond the seemingly technophilic and technological determinisms of left accelerationisms and the related contradictions of “fully automated luxury communism”. Broadly speaking, these positions tend to argue for a post-work, post-scarcity economy within a post-capitalist society based on automation, technology and cognitive labour. Accepting that these are simplifications of the arguments of the proponents of these two positions, the aim is to move beyond the assertion that the embracing of technology itself solves the problem of a political articulation that has to be accepted and embraced by a broader constituency within the population. Technophilic politics is not, of itself, going to be enough to convince an electorate, nor a population, to move towards leftist conceptualisations of possible restructuring or post-capitalist economics. Moreover, it seems to me that the abolition of work is not a desirable political programme for the majority of the population, nor does a seemingly utopian notion of post-scarcity economics make much sense under conditions of neoliberal economics. Thus these programmes are simultaneously too radical and not radical enough. I also want to move beyond the staid and unproductive arguments often articulated in the UK between a left-Blairism and a more statist orientation associated with a return to traditional left concerns personified in Ed Miliband.

    Instead, I want to consider what a politics of the singularity might be, that is, to follow Fredric Jameson’s conceptualisation of the singularity as “a pure present without a past or a future” such that,

    today we no longer speak of monopolies but of transnational corporations, and our robber barons have mutated into the great financiers and bankers, themselves de-individualized by the massive institutions they manage. This is why, as our system becomes ever more abstract, it is appropriate to substitute a more abstract diagnosis, namely the displacement of time by space as a systemic dominant, and the effacement of traditional temporality by those multiple forms of spatiality we call globalization. This is the framework in which we can now review the fortunes of singularity as a cultural and psychological experience (Jameson 2015: 128).

    That is, the removal of temporality as a specific site of politics as such, or the successful ideological deployment of a new framework for understanding oneself within temporality, whether through the activities of the media industries, or through the mediation of digital technologies and computational media. This has the effect of transforming temporal experience into new spatial experiences, whether through translating media, or through the intensification of a now that constantly presses upon us and pushes away both historical time and the possibility of political articulations of new forms of futurity. Thus the politics of singularity point to spatiality as the key site of political deployment within neoliberalism, and by this process undercut the left’s arguments, which draw simultaneously on a shared historical memory of hard-won rights and benefits and on the notion of political action to fight for a better future. Indeed, one might ask if the green critique of the anthropocene, with its often misanthropic articulations, in some senses draws on some notion of a singularity produced by humanity which has undercut the time of geological or planetary scale change. The only option remaining, then, is to seek to radically circumscribe human activity, if not to outline a radical social imaginary that does not include humans in its conception, and hence to return the planet to the stability of a geological time structure no longer undermined by human activity. Similarly, neoliberal arguments over political imaginaries highlight the intensity and simultaneity of the present mode of capitalist competition and the individualised (often debt-funded) means of engagement with economic life.

    What then might be a politics of the singularity which moved beyond politics that drew on forms of temporality for its legitimation? In other words, how could a politics of spatiality be articulated and deployed which re-enabled the kind of historical project towards a better future for all that was traditionally associated with leftist thought?

    To do this I want to think through the notion of the “curator” that Jameson disparagingly thinks is an outcome of the singularity in terms of artistic practice and experience. He argues that today we are faced with the “emblematic figure of the curator, who now becomes the demiurge of those floating and dissolving constellations of strange objects we still call art.” Further,

    there is a nastier side of the curator yet to be mentioned, which can be easily grasped if we look at installations, and indeed entire exhibits in the newer postmodern museums, as having their distant and more primitive ancestors in the happenings of the 1960s—artistic phenomena equally spatial, equally ephemeral. The difference lies not only in the absence of humans from the installation and, save for the curator, from the newer museums as such. It lies in the very presence of the institution itself: everything is subsumed under it, indeed the curator may be said to be something like its embodiment, its allegorical personification. In postmodernity, we no longer exist in a world of human scale: institutions certainly have in some sense become autonomous, but in another they transcend the dimensions of any individual, whether master or servant; something that can also be grasped by reminding ourselves of the dimension of globalization in which institutions today exist, the museum very much included (Jameson 2015: 110-111).

    However, Jameson himself makes an important link between spatiality as the site of a contestation and the making-possible of new spaces, something that curatorial practice, with its emphasis on the construction, deployment and design of new forms of space, points towards. Indeed, Jameson argues that in relation to theoretical constructions there is “perhaps a kind of curatorial practice, selecting named bits from our various theoretical or philosophical sources and putting them all together in a kind of conceptual installation, in which we marvel at the new intellectual space thereby momentarily produced” (Jameson 2015: 110).

    In contrast, the question for me is the radical possibilities suggested by this event-like construction of new spaces, and how they can be used to reverse or destabilise the time-axis manipulation of the singularity. The question then becomes: could we tentatively think in terms of a curatorial political practice, which we might call curatorialism? Indeed, could we fill out the ways in which this practice could aim to articulate, assemble and more importantly provide a site for a renewal and (re)articulation of left politics? How could this politics be mobilised into the nitty-gritty of actual political practice, policy, and activist politics, and engender the affective relation that inspires passion around a political programme and suggests itself to the kinds of singularities that inhabit contemporary society? To borrow the language of the singularity itself, how could one articulate a new disruptive left politics?

    [Image: Dostoevsky on curation (source: Curate Meme)]

    At this early stage of thinking, it seems to me that in the first case we might think about how curatorialism points towards the need to move away from concern with internal consistency in the development of a political programme. Curatorialism gathers its strength from the way in which it provides a political pluralism, an assembling of multiple moments into a political constellation that takes into account and articulates its constituent moments. This is the first step in the mapping of the space of a disruptive left politics. This is the development of a spatial politics in as much as, crucially, the programme calls for a weaving together of multiplicity into this constellational form. Secondly, we might think about the way in which this spatial diagram can then be translated into a temporal project, that is, the transformation of a mapping programme into a political programme linked to social change. This requires the capture and illumination of the multiple movements of each moment and their re-articulation through a process of reframing the conditions of possibility in each constellational movement in terms of a political economy that draws from the historical possibilities that the left has made possible previously, but also from new concepts and ideas that link the politics of necessity to the huge capacity of a left project towards the mitigation and/or replacement of a neoliberal capitalist economic system. Lastly, it seems to me that to be a truly curatorial politics means to link to the singularity itself as a force of strength for left politics, such that the development of a mode of the articulation of individual political needs is made possible through the curatorial mode, and through the development of disruptive left frameworks that link individual need, social justice, institutional support, and a left politics that reconnects the passions of interests and the passion for justice and equality with the singularity’s concern with intensification.[1] This can, perhaps, be thought of as the replacement of a left project of ideological purity with a return to the Gramscian notions of strategy and tactics through the deployment of what he called a passive revolution, mobilised partially in the new forms of civil society created through collectivities of singularities within social media, computational devices and the new infrastructures of digital capitalism, but also through older forms of social institutions, political contestations and education.[2]
    _____

    David M. Berry is Reader in the School of Media, Film and Music at the University of Sussex. He writes widely on computation and the digital and blogs at Stunlaw. He is the author of Critical Theory and the Digital, The Philosophy of Software: Code and Mediation in the Digital Age, Copy, Rip, Burn: The Politics of Copyleft and Open Source, editor of Understanding Digital Humanities and co-editor of Postdigital Aesthetics: Art, Computation And Design. He is also a Director of the Sussex Humanities Lab.

    Back to the essay
    _____

    Notes

    [1] This remains a tentative articulation that is inspired by the power of knowledge-based economies to create the conditions of singularity through the action of time-axis manipulation (media technologies), but also by their (arguably) countervailing power to provide the tools, spaces and practices for the contestation of the singularity connected only with a neoliberal political moment. That is, how can these new concepts and ideas, together with the frameworks that are suggested in their mobilisation, provide new means of contestation, sociality and broader connections of commonality and political praxis?

    [2] I leave to a later paper the detailed discussion of the possible subjectivities both in and for themselves within a framework of a curatorial politics. But here I am gesturing towards political parties as the curators of programmes of political goals and ends, able then to use the state as a curatorial enabler of such a political programme. This includes the active development of the individuation of political singularities within such a curatorial framework.

    Bibliography

    Jameson, Fredric. 2015. “The Aesthetics of Singularity.” New Left Review, No. 92 (March-April 2015).

    Back to the essay

  • Poetics of Control

    Poetics of Control

    a review of Alexander R. Galloway, The Interface Effect (Polity, 2012)

    by Bradley J. Fest

    ~

    This summer marks the twenty-fifth anniversary of the original French publication of Gilles Deleuze’s seminal essay, “Postscript on the Societies of Control” (1990). A strikingly powerful short piece, “Postscript” remains, even at this late date, one of the most poignant, prescient, and concise diagnoses of life in the overdeveloped digital world of the twenty-first century and the “ultrarapid forms of apparently free-floating control that are taking over from the old disciplines.”[1] A stylistic departure from much of Deleuze’s other writing in its clarity and straightforwardness, the essay describes a general transformation from the modes of disciplinary power that Michel Foucault famously analyzed in Discipline and Punish (1975) to “societies of control.” For Deleuze, the late twentieth century is characterized by “a general breakdown of all sites of confinement—prisons, hospitals, factories, schools, the family.”[2] The institutions that were formerly able to strictly organize time and space through perpetual surveillance—thereby, according to Foucault, fabricating the modern individual subject—have become fluid and modular, “continually changing from one moment to the next.”[3] Individuals have become “dividuals,” “dissolv[ed] . . . into distributed networks of information.”[4]

    Over the past decade, media theorist Alexander R. Galloway has extensively and rigorously elaborated on Deleuze’s suggestive pronouncements, probably devoting more pages in print to thinking about the “Postscript” than has any other single writer.[5] Galloway’s most important work in this regard is his first book, Protocol: How Control Exists after Decentralization (2004). If the figure for the disciplinary society was Jeremy Bentham’s panopticon, a machine designed to induce a sense of permanent visibility in prisoners (and, by extension, the modern subject), Galloway argues that the distributed network, and particularly the distributed network we call the internet, is an apposite figure for control societies. Rhizomatic and flexible, distributed networks historically emerged as an alternative to hierarchical, rigid, centralized (and decentralized) networks. But far from being chaotic and unorganized, the protocols that organize our digital networks have created “the most highly controlled mass media hitherto known. . . . While control used to be a law of society, now it is more like a law of nature. Because of this, resisting control has become very challenging indeed.”[6] To put it another way: if in 1980 Deleuze and Félix Guattari complained that “we’re tired of trees,” Galloway and philosopher Eugene Thacker suggest that today “we’re tired of rhizomes.”[7]

    The imperative to think through the novel challenges presented by control societies and the urgent need to develop new methodologies for engaging the digital realities of the twenty-first century are at the heart of The Interface Effect (2012), the final volume in a trio of works Galloway calls Allegories of Control.[8] Guiding the various inquiries in the book is his provocative claim that “we do not yet have a critical or poetic language in which to represent the control society.”[9] This is because there is an “unrepresentability lurking within information aesthetics” (86). This claim for unrepresentability, that what occurs with digital media is not representation per se, is The Interface Effect’s most significant departure from previous media theory. Rather than rehearse familiar media ecologies, Galloway suggests that “the remediation argument (handed down from McLuhan and his followers including Kittler) is so full of holes that it is probably best to toss it wholesale” (20). The Interface Effect challenges thinking about mimesis that would place computers at the end of a line of increasingly complex modes of representation, a line extending from Plato, through Erich Auerbach, Marshall McLuhan, and Friedrich Kittler, and terminating in Richard Grusin, Jay David Bolter, and many others. Rather than continue to understand digital media in terms of remediation and representation, Galloway emphasizes the processes of computational media, suggesting that the inability to productively represent control societies stems from misunderstandings about how to critically analyze and engage with the basic materiality of computers.

    The book begins with an introduction polemically positioning Galloway’s own media theory directly against Lev Manovich’s field-defining book, The Language of New Media (2001). Contra Manovich, Galloway stresses that digital media are not objects but actions. Unlike cinema, which he calls an ontology because it attempts to bring some aspect of the phenomenal world nearer to the viewer—film, echoing Oedipa Maas’s famous phrase, “projects worlds” (11)—computers involve practices and effects (what Galloway calls an “ethic”) because they are “simply on a world . . . subjecting it to various forms of manipulation, preemption, modeling, and synthetic transformation. . . . The matter at hand is not that of coming to know a world, but rather that of how specific, abstract definitions are executed to form a world” (12, 13, 23). Or to take two other examples Galloway uses to positive effect: the difference can be understood as that between language, which describes and represents, encoding a world, versus calculus, which does or simulates doing something to the world; calculus is a “system of reasoning, an executable machine” (22). Though Galloway does more in Gaming: Essays on Algorithmic Culture (2006) to fully develop a way of analyzing computational media that privileges action over representation, The Interface Effect theoretically grounds this important distinction between mimesis and action, description and process.[10] Further, it constitutes a bold methodological step away from some of the dominant ways of thinking about digital media that simultaneously offers its readers new ways to connect media studies more firmly to politics.

    Further distinguishing himself from writers like Manovich, Galloway says that there has been a basic misunderstanding regarding media and mediation, and that the two systems are “violently unconnected” (13). Galloway demonstrates, in contrast to such thinkers as Kittler, that there is an old line of thinking about mediation that can be traced very far back and that is not dependent on thinking about media as exclusively tied to nineteenth and twentieth century communications technology:

    Doubtless certain Greek philosophers had negative views regarding hypomnesis. Yet Kittler is reckless to suggest that the Greeks had no theory of mediation. The Greeks indubitably had an intimate understanding of the physicality of transmission and message sending (Hermes). They differentiated between mediation as immanence and mediation as expression (Iris versus Hermes). They understood the mediation of poetry via the Muses and their techne. They understood the mediation of bodies through the “middle loving” Aphrodite. They even understood swarming and networked presence (in the incontinent mediating forms of the Eumenides who pursued Orestes in order to “process” him at the procès of Athena). Thus we need only look a little bit further to shed this rather vulgar, consumer-electronics view of media, and instead graduate into the deep history of media as modes of mediation. (15)

    Galloway’s point here is that the larger contemporary discussion of mediation that he is pursuing in The Interface Effect should not be restricted to merely the digital artifacts that have occasioned so much recent theoretical activity, and that there is an urgent need for deeper histories of mediation. Though the book appears to be primarily concerned with the twentieth and twenty-first centuries, this gesture toward the Greeks signals the important work of historicization that often distinguishes much of Galloway’s work. In “Love of the Middle” (2014), for example, which appears in the book Excommunication (2014), co-authored with Thacker and McKenzie Wark, Galloway fully develops a rigorous reading of Greek mediation, suggesting that in the Eumenides, or what the Romans called the Furies, resides a notable historical precursor for understanding the mediation of distributed networks.[11]

    In The Interface Effect these larger efforts at historicization allow Galloway to always understand “media as modes of mediation,” and consequently his big theoretical step involves claiming that “an interface is not a thing, an interface is an effect. It is always a process or a translation” (33). There are a variety of positive implications for the study of media understood as modes of mediation, as a study of interface effects. Principal amongst these are the rigorous methodological possibilities Galloway’s focus emphasizes.

    In this, methodologically and otherwise, Galloway’s work in The Interface Effect resembles and extends that of his teacher Fredric Jameson, particularly the kind of work found in The Political Unconscious (1981). Following Jameson’s emphasis on the “poetics of social forms,” Galloway’s goal is “not to reenact the interface, much less to ‘define’ it, but to identify the interface itself as historical. . . . This produces . . . a perspective on how cultural production and the socio-historical situation take form as they are interfaced together” (30). The Interface Effect firmly ties the cultural to the social, economic, historical, and political, finding in a variety of locations ways that interfaces function as allegories of control. “The social field itself constitutes a grand interface, an interface between subject and world, between surface and source, and between critique and the objects of criticism. Hence the interface is above all an allegorical device that will help us gain some perspective on culture in the age of information” (54). The power of looking at the interface as an allegorical device, as a “control allegory” (30), is demonstrated throughout the book’s relatively wide-ranging analyses of various interface effects.

    Chapter 1, “The Unworkable Interface,” historicizes some twentieth century transformations of the interface, concisely summarizing a history of mediation by moving from Norman Rockwell’s “Triple Self-Portrait” (1960), through Mad Magazine’s satirization of Rockwell, to World of Warcraft (2004-2015). Viewed from the level of the interface, with all of its nondiegetic menus and icons and the ways it erases the line between play and labor, Galloway demonstrates both here and in the last chapter that World of Warcraft is a powerful control allegory: “it is not an avant-garde image, but, nevertheless, it firmly delivers an avant-garde lesson in politics” (44).[12] Further exemplifying the importance of historicizing interfaces, Chapter 2 continues to demonstrate the value of approaching interface effects allegorically. Galloway finds “a formal similarity between the structure of ideology and the structure of software” (55), arguing that software “is an allegorical figure for the way in which . . . political and social realities are ‘resolved’ today: not through oppression or false consciousness . . . but through the ruthless rule of code” (76). Chapter 4 extends such thinking toward a masterful reading of the various mediations at play in a show such as 24 (2001-2010, 2014), arguing that 24 is political not because of its content but “because the show embodies in its formal technique the essential grammar of the control society, dominated as it is by specific network and informatic logics” (119). In short, The Interface Effect continually demonstrates the potent critical tools approaching mediation as allegory can provide, reaffirming the importance of a Jamesonian approach to cultural production in the digital age.

    Whether or not readers are convinced, however, by Galloway’s larger reworking of the field of digital media studies, his emphasis on attending to contemporary cultural artifacts as allegories of control, or his call in the book’s conclusion for a politics of “whatever being” probably depends upon their thoughts about the unrepresentability of today’s global networks in Chapter 3, “Are Some Things Unrepresentable?” His answer to the chapter’s question is, quite simply, “Yes.” Attempts to visualize the World Wide Web only result in incoherent repetition: “every map of the internet looks the same,” and as a result “no poetics is possible in this uniform aesthetic space” (85). He argues that, in the face of such an aesthetic regime (what Jacques Rancière calls a “distribution of the sensible”[13]):

    The point is not so much to call for a return to cognitive mapping, which of course is of the highest importance, but to call for a poetics as such for this mysterious new machinic space. . . . Today’s systemics have no contrary. Algorithms and other logical structures are uniquely, and perhaps not surprisingly, monolithic in their historical development. There is one game in town: a positivistic dominant of reductive, systemic efficiency and expediency. Offering a counter-aesthetic in the face of such systematicity is the first step toward building a poetics for it, a language of representability adequate to it. (99)

    There are, to my mind, two ways of responding to Galloway’s call for a poetics as such in the face of the digital realities of contemporaneity.

    On the one hand, I am tempted to agree with him. Galloway is clearly signaling his debt to some of Jameson’s more important large claims and is reviving the need “to think the impossible totality of the contemporary world system,” what Jameson once called the “technological” or “postmodern sublime.”[14] But Galloway is also signaling the importance of poesis for this activity. Not only is Jamesonian “cognitive mapping” necessary, but the totality of twenty-first century digital networks requires new imaginative activity, a counter-aesthetics commensurate with informatics. This is an immensely attractive position, at least to me, as it preserves a space for poetic, avant-garde activity, and indeed, demands that, all evidence to the contrary, the imagination still has an important role to play in the face of societies of control. (In other words, there may be some “humanities” left in the “digital humanities.”[15]) Rather than suggesting that the imagination has been utterly foreclosed by the cultural logic of late capitalism—that we can no longer imagine any other world, that it is easier to imagine the end of the world than a better one—Galloway says that there must be a reinvestment in the imagination, in poetics as such, that will allow us to better represent, understand, and intervene in societies of control (though not necessarily to imagine a better world; more on this below). Given the present landscape, how could one not be attracted to such a position?

    On the other hand, Galloway’s argument hinges on his claim that such a poetics has not emerged and, as Patrick Jagoda and others have suggested, one might merely point out that such a claim is demonstrably false.[16] Though I hope I hardly need to list some of the significant cultural products across a range of media that have appeared over the last fifteen years that critically and complexly engage with the realities of control (e.g., The Wire [2002-08]), it is not radical to suggest that art engaged with pressing contemporary concerns has appeared and will continue to appear, that there are a variety of significant artists who are attempting to understand, represent, and cope with the distributed networks of contemporaneity. One could obviously suggest Galloway’s argument is largely rhetorical, a device to get his readers to think about the different kinds of poesis control societies, distributed networks, and interfaces call for, but this blanket statement threatens to shut down some of the vibrant activity that is going on all over the world commenting upon the contemporary situation. In other words, yes we need a poetics of control, but why must the need for such a poetics hinge on the claim that there has not yet emerged “a critical or poetic language in which to represent the control society”? Is not Galloway’s own substantial, impressive, and important decade-long intellectual project proof that people have developed a critical language that is capable of representing the control society? I would certainly answer in the affirmative.

    There are some other rhetorical choices in the conclusion of The Interface Effect that, though compelling, deserve to be questioned, or at least highlighted. I am referring to Galloway’s penchant—following another one of his teachers at Duke, Michael Hardt—for invoking a Bartlebian politics, what Galloway calls “whatever being,” as an appropriate response to present problems.[17] In Hardt and Antonio Negri’s Empire (2000), in the face of the new realities of late capitalism—the multitude, the management of hybridities, the non-place of Empire, etc.—they propose that Herman Melville’s “Bartleby in his pure passivity and his refusal of any particulars presents us with a figure of generic being, being as such, being and nothing more. . . . This refusal certainly is the beginning of a liberatory politics, but it is only a beginning.”[18] Bartleby, with his famous response of “‘I would prefer not to,’”[19] has been frequently invoked by such substantial figures as Giorgio Agamben in the 1990s and Slavoj Žižek in the 2000s (following Hardt and Negri). Such thinkers have frequently theorized Bartleby’s passive negativity as a potentially radical political position, and perhaps the only one possible in the face of global economic realities.[20] (And indeed, it is easy enough to read, say, Occupy Wall Street as a Bartlebian political gesture.) Galloway’s response to the affective postfordist labor of digital networks, that “each and every day, anyone plugged into a network is performing hour after hour of unpaid micro labor” (136), is similarly to withdraw, to “demilitarize being. Stand down. Cease participating” (143).

    Like Hardt and Negri and so many others, Galloway’s “whatever being” is a response to the failures of twentieth century emancipatory politics. He writes:

    We must stress that it is not the job of politics to invent a new world. On the contrary it is the job of politics to make all these new worlds irrelevant. . . . It is time now to subtract from this world, not add to it. The challenge today is not one of political or moral imagination, for this problem was solved ages ago—kill the despots, surpass capitalism, inclusion of the excluded, equality for all of humanity, end exploitation. The world does not need new ideas. The challenge is simply to realize what we already know to be true. (138-39)

    And thus the tension of The Interface Effect is between this call for withdrawal, to work with what there is, to exploit protocological possibility, etc., and the call for a poetics of control, a poesis capable of representing control societies, which to my mind implies imagination (and thus, inevitably, something different, if not new). If there is anything wanting about the book it is its lack of clarity about how these two critical projects are connected (or indeed, if they are perhaps the same thing!). Further, it is not always clear what exactly Galloway means by “poetics” nor how a need for a poetics corresponds to the book’s emphasis on understanding mediation as process over representation, action over objects. This lack of clarity may be due in part to the fact that, as Galloway indicates in his most recent work, Laruelle: Against the Digital (2014), there is some necessary theorization that he needs to do before he can adequately address the digital head-on. As he writes in the conclusion to that book: “The goal here has not been to elucidate, promote, or disparage contemporary digital technologies, but rather to draft a simple prolegomenon for future writing on digitality and philosophy.”[21] In other words, it seems like Allegories of Control, The Exploit: A Theory of Networks (2007), and Laruelle may constitute the groundwork for an even more ambitious confrontation with the digital, one where the kinds of tensions just noted might dissolve. As such, perhaps the reinvocation of a Bartlebian politics of withdrawal at the end of The Interface Effect is merely a kind of stop-gap, a place-holder before a more coherent poetics of control can emerge (as seems to be the case for the Hardt and Negri of Empire). Although contemporary theorists frequently invoke Bartleby, he remains a rather uninspiring figure.

    These criticisms aside, however, Galloway’s conclusion of the larger project that is Allegories of Control reveals him to be a consistently accessible and powerful guide to the control society and the digital networks of the twenty-first century. If the new directions in his recent work are any indication, and Laruelle is merely a prolegomenon to future projects, then we should perhaps not despair at all about the present lack of a critical language for representing control societies.

    _____

    Bradley J. Fest teaches literature at the University of Pittsburgh. At present he is working on The Nuclear Archive: American Literature Before and After the Bomb, a book investigating the relationship between nuclear and information technology in twentieth and twenty-first century American literature. He has published articles in boundary 2, Critical Quarterly, and Studies in the Novel; and his essays have appeared in David Foster Wallace and “The Long Thing” (2014) and The Silence of Fallout (2013). The Rocking Chair, his first collection of poems, is forthcoming from Blue Sketch Press. He blogs at The Hyperarchival Parallax.

    Back to the essay
    _____

    [1] Though best-known in the Anglophone world via the translation that appeared in 1992 in October as “Postscript on the Societies of Control,” the piece appears as “Postscript on Control Societies,” in Gilles Deleuze, Negotiations: 1972-1990, trans. Martin Joughin (New York: Columbia University Press, 1995), 178. For the original French see Gilles Deleuze, “Post-scriptum sur des sociétés de contrôle,” in Pourparlers, 1972-1990 (Paris: Les Éditions de Minuit, 1990), 240-47. The essay originally appeared as “Les sociétés de contrôle,” L’Autre Journal, no. 1 (May 1990). Further references are to the Negotiations version.

    [2] Ibid.

    [3] Ibid., 179.

    [4] Alexander R. Galloway, Protocol: How Control Exists after Decentralization (Cambridge, MA: MIT Press, 2004), 12n18.

    [5] In his most recent book, Galloway even goes so far as to ask about the “Postscript”: “Could it be that Deleuze’s most lasting legacy will consist of 2,300 words from 1990?” (Alexander R. Galloway, Laruelle: Against the Digital [Minneapolis: University of Minnesota Press, 2014], 96, emphases in original). For Andrew Culp’s review of Laruelle for The b2 Review, see “From the Decision to the Digital.”

    [6] Galloway, Protocol, 147.

    [7] Gilles Deleuze and Félix Guattari, A Thousand Plateaus: Capitalism and Schizophrenia, trans. Brian Massumi (Minneapolis: University of Minnesota Press, 1987), 15; and Alexander R. Galloway and Eugene Thacker, The Exploit: A Theory of Networks (Minneapolis: University of Minnesota Press, 2007), 153. For further discussions of networks see Alexander R. Galloway, “Networks,” in Critical Terms for Media Studies, ed. W. J. T. Mitchell and Mark B. N. Hansen (Chicago: University of Chicago Press), 280-96.

    [8] The other books in the trilogy include Protocol and Alexander R. Galloway, Gaming: Essays on Algorithmic Culture (Minneapolis: University of Minnesota Press, 2006).

    [9] Alexander R. Galloway, The Interface Effect (Malden, MA: Polity, 2012), 98. Hereafter, this work is cited parenthetically.

    [10] See especially Galloway’s masterful first chapter of Gaming, “Gamic Action, Four Moments,” 1-38. To my mind, this is one of the best primers for critically thinking about videogames, and it does much to fundamentally ground the study of videogames in action (rather than, as had previously been the case, in either ludology or narratology).

    [11] See Alexander R. Galloway, “Love of the Middle,” in Excommunication: Three Inquiries in Media and Mediation, by Alexander R. Galloway, Eugene Thacker, and McKenzie Wark (Chicago: University of Chicago Press, 2014), 25-76.

    [12] This is also something he touched on in his remarkable reading of Donald Rumsfeld’s famous “unknown unknowns.” See Alexander R. Galloway, “Warcraft and Utopia,” Ctheory.net (16 February 2006). For a discussion of labor in World of Warcraft, see David Golumbia, “Games Without Play,” in “Play,” special issue, New Literary History 40, no. 1 (Winter 2009): 179-204.

    [13] See the following by Jacques Rancière: The Politics of Aesthetics: The Distribution of the Sensible, trans. Gabriel Rockhill (New York: Continuum, 2004), and “Are Some Things Unrepresentable?” in The Future of the Image, trans. Gregory Elliott (New York: Verso, 2007), 109-38.

    [14] Fredric Jameson, Postmodernism; or, the Cultural Logic of Late Capitalism (Durham, NC: Duke University Press, 1991), 38.

    [15] For Galloway’s take on the digital humanities more generally, see his “Everything Is Computational,” Los Angeles Review of Books (27 June 2013), and “The Cybernetic Hypothesis,” differences 25, no. 1 (Spring 2014): 107-31.

    [16] See Patrick Jagoda, introduction to Network Aesthetics (Chicago: University of Chicago Press, forthcoming 2015).

    [17] Galloway’s “whatever being” is derived from Giorgio Agamben, The Coming Community, trans. Michael Hardt (Minneapolis: University of Minnesota Press, 1993).

    [18] Michael Hardt and Antonio Negri, Empire (Cambridge, MA: Harvard University Press, 2000), 203, 204.

    [19] Herman Melville, “Bartleby, The Scrivener: A Story of Wall-street,” in Melville’s Short Novels, critical ed., ed. Dan McCall (New York: W. W. Norton, 2002), 10.

    [20] See Giorgio Agamben, “Bartleby, or On Contingency,” in Potentialities: Collected Essays in Philosophy, trans. and ed. Daniel Heller-Roazen (Stanford: Stanford University Press, 1999), 243-71; and see the following by Slavoj Žižek: Iraq: The Borrowed Kettle (New York: Verso, 2004), esp. 71-73, and The Parallax View (New York: Verso, 2006), esp. 381-85.

    [21] Galloway, Laruelle, 220.

  • Good Wives: Algorithmic Architectures as Metabolization

    Good Wives: Algorithmic Architectures as Metabolization

    by Karen Gregory

    ~

    Text of a talk delivered at Digital Labor: Sweatshops, Picket Lines, and Barricades, New York, November 14th-16th, 2014.

    This talk has a few different starting points, which include a forum I held last March on Angela Mitropoulos’ work Contract and Contagion that explored the expansions and reconfigurations of capital, time, and work through the language of Oikonomics or the “properly productive household”, as well as the work that I was doing with Patricia Clough, Josh Scannell, and Benjamin Haber on a paper called “The Datalogical Turn”, which explores how the coupling of large scale databases and adaptive algorithms “are calling forth a new onto-logic of sociality or the social itself” as well as, I confess, no small share of binge-watching the TV show The Good Wife. So, please bear with me as I take you through my thinking here. What I am trying to do in my work of late is a form of feminist thinking that can take quite seriously not only the onto-sociality of data and the ways in which bodily practices are made to extend far and wide beyond the body, but a form of thinking that can also understand the paradox of our times: How and why has digital abundance been ushered in on the heels of massive income inequality and political dispossession? In some ways, the last part of that sentence (why inequality and political dispossession) is actually easier to account for than understanding the role that such “abundance” has played in the reconfiguration or transfers of wealth and power.

    So, let me back up here for a minute… Already in 1992, Deleuze wrote that a disciplinary society had given way to a control society. Writing, “we are in a generalized crisis in relation to all the environments of enclosure—prison, hospital, factory, school, family” and that “everyone knows that these institutions are finished, whatever the length of their expiration periods. It’s only a matter of administering their last rites and of keeping people employed until the installation of the new forces knocking at the door. These are the societies of control, which are in the process of replacing the disciplinary societies.” For Deleuze, whereas the disciplinary man was a “discontinuous producer of energy, the man of control is undulatory, in orbit, in a continuous network.” For such a human, Deleuze wrote, “surfing” has “replaced older sports.”

    We know, despite Marx’s theorization of “dead labor”, that digital, networked infrastructures have been active, even “vital”, agents of this shift from discipline to control or the shift from a capitalism of production and property to a capitalism of dispersion, a capitalism fit for circulation, relay, response, and feedback. As Deleuze writes, this is a capitalism fit for a “higher order” of production. I want to intentionally play on the words “higher order”, with their invocations of religiosity, faith, and hierarchy, because much of our theoretical work of late has been specifically developed to help us understand the ways in which such a “higher order” has been very successful in affectively reconfiguring and reformatting bodies and environments for its own purposes. We talk often of the modulation, pre-emption, extraction, and subsumption of elements once thought to be “immaterial” or spiritual, if you will, the some-“things” that lacked a full instantiation in the material world. I do understand that I am twisting Deleuze’s words here a bit (what he meant in the Postscript was a form of production that we now think of as flexible production, production on demand, or JIT production), but my thinking here is that the very notion of a higher order, a form of production considered progress in itself, has been very good at making us pray toward the light and at replacing the audial sensations of the church bell/factory clock with the blinding temporality of the speed of light itself. This blinding speed of light is related to what Marx called “circulation time,” or the annihilation of space through time, and it is this black hole of capital, this higher order of production and the ways in which we have theorized its metaphysics, which, I want to argue, have become the Via Negativa to a Capital that transcends thought. What I mean here is that this form of theorizing has really left us with a capital beyond reproach, a capital reinstated in and through the effects of what it is not—it is not a wage, it is not found in commodities, it is not ultimately a substance humans have access or rights to…

    In such a rapture of the higher order of the light, there has been a tendency to look away from concepts such as “foundations” or “limits” or quaint theories of units such as the “household”, but in Angela Mitropoulos’ work Contract and Contagion we find those concepts at the heart of her reading of the collapse of the time of work into that of life. For Mitropoulos, it is through the performativity and probabilistic terms of “the contract” (and not simply the contract of liberal sociality, but contract as the terms of agreement to the “right” genealogical transfer of wealth) that we should visualize the flights of capital. This broadened notion of the contract is a necessary term for fully grasping what is being brought into being on the heels of “the datalogical turn.”

    For Mitropoulos, it is the contract, which she links to the oath, the promise, the covenant, the bargain, and even faith in general, that “transforms contingency into necessity.” Contracts’ “ensuing contractualism” has been “amplified as an ontological precept.” Here, contract is fundamentally a precept that transforms life into a game (and I don’t mean simply game-ified, but obviously we could talk about what gameification means for our sense of what is implied in contractual relations. Liberal contracts have tended to evoke their authority from the notion of autonomous and rational subjects—this is not exactly the same subject being invoked when you’re prompted to like every picture of a cat on the internet or have your attention directed to tiny little numbers in the corner of the screen to see who faved your post, although those Facebook numbers are micro-contracts. Ones you haven’t signed up for exactly.) For Mitropoulos, it is not just that contracts transform life into contingency; it is that they transform life into a game that must be played out of necessity. Taking up Pascal’s wager, Mitropoulos writes,

    the materiality of contractualism is that of a performativity installed by its presumption of the inexorable necessity of contingency; a presumption established by what I refer to here as the Pascalian premise that one must ‘play the game’ necessarily, that this is the only game available. This invalidates all idealist explanations of contract, including those which echo contractualism’s voluntarism in their understanding of (revolutionary) subjectivity. Performativity is the temporality of contract, and the temporal continuity of capitalism is uncertain.

    In other words, one has no choice but to gamble. God either exists or God does not exist. Both may be possible/virtual, but only one will be real/actual, and it is via the wager that one must, out of necessity, come to understand God with and through contingency. It is through such wagering that the contract—as a form of measurable risk—comes into being. Measurable risk—measure and risk as entangled in speculation—became, we might say, the Via Affirmativa of early and industrializing capital.

    This transmutation of contingency into measure sits not only at the heart of the contract, but is, as Mitropoulos writes, “crucial to the legitimatized forms of subjectivity and relation that have accompanied the rise and expansion of capitalism across the world.” Yet, in addition to the historical project of situating an authorial, egalitarian, liberal, willful, and autonomous subject as a universal subject, contract is also interested in something that looks much more geometric, matrixial, spatializing, and impersonal. Contract does not solely care about “subject formation”, but also about the development of positions that compose a matrix—so that the matrix is made to be an engine of production and circulation. It is interested in the creation of an infrastructure of contracts, or points of contact that reconfigure a “divine” order in the face of contingency.

    The production of such a divine order is what Mitropoulos will link back to Oikonomia or the economics of the household, whereby bodies are parsed both spatially and socially into those who may enter into contract and those who may not. While contract becomes an increasingly narrow domain of human relations, Oikonomia is the intentional distribution and classification of bodies—human, animal, mineral—to ensure the “proper” (i.e. moral, economic, and political) functioning of the household, which functions like a molar node within the larger matrix. Given that contingency has been installed as the game that must be played, contract then comes to enforce a chain of being predicated on forms of naturalized servitude and obligation to the game. These are forms of naturalized servitude that are simultaneously built into the architecture of the household and made invisible. As Anne Boyer has written in regard to the Greek household, it probably looked like this:

    In the front of the household were the women’s rooms—the gynaikonitis. Behind these were the common areas and the living quarters for the men—the andronitis. It was there one could find the libraries. The men’s area, along with the household, was also wherever was outside of the household—that is, the free man’s area was the oikos and the polis and was the world. The oikos was always at least a double space, and doubly perceived, just as what is outside of it was always a singular territory on which slaves and women trespassed. The singular nature of the outside was enforced by violence or the threat of it. The free men’s home was the women’s factory; also—for women and slaves—their factory was a home on its knees.

    This is not simply a division of labor, but as Boyer writes, “God made of women an indoor body, and made of men an outdoor one. And this scheme—what becomes, in future iterations, public and private, of production and reproduction, of waged work and unpaid servitude—is the order agreed upon to attend to the risk posed by those who make the oikos.”

    This is the order that we believe has given way as Fordism morphed into Post-Fordism and as the walls of these architectures have been smoothed by the flows of endlessly circulated, derivative, financialized capital. Yet, what Mitropoulos’ work points us toward is the persistence of the contract. Walls may crumble, but the foundations of contract re-instantiate, if not proliferate, in the wake of capital’s discovery of new terrains. The gynaikonitis, with its function to parse and delineate the labor of the household into a hierarchy of care work—from the wifely householding of management to the slave-like labor of “being ready to hand”—does not simply evaporate, but rather finds new instantiations among the flights of capital and new instantiations within its very infrastructure. Following Mitropoulos, we can argue that while certain forms of discipline seemingly come to an end, there is no shift to control without a proliferating matrix of contract whose function is to re-impose the very meaning—or rather, the very ontological necessity—of measure. It is through the persistent re-imposition of measure that a logic of the Oikos is never lost, ensuring—despite new configurations of capital—the genealogical transfer of wealth and the fundamentally dispossessing relations of servitude.

    Let me shift gears here ever so slightly and enter Alicia Florrick. Alicia is “The Good Wife”, whom many of you know from the TV show of the same name. She is the white fantasy super-hero and upper middle class working mother and ruthless lawyer who has successfully exploded onto the job market after years of raising her children and who is not only capable of leaning in after all those years, but of taking command of her own law firm and running for political office. Alicia is a “good wife” not solely because she has stood beside her philandering politician husband, but because as a white, upper-class mother and lawyer, she is nonetheless responsible for the utmost of feminized and invisible labor—that of (re)producing the very conditions of sociality. Her “womanly” or “wife-ish” goodness is predicated on her ability to transform what are essentially, in the show, a series of shitty experiences and shitty conditions into conditions of possibility and potential. Alicia works endlessly, tirelessly (Does she ever sleep?) to find new avenues of possibility and configurations of the law in order to create a very specific form of “liberal” order and organization, believing as she does in the “power of rules” (in distinction to her religious daughter, a necessary trope used to highlight the fundamentally “moral” underpinning of secular order).

    While the show is incredibly popular, no doubt because viewers desire to identify with Alicia’s capacity for labor and domination, to me the show is less about a real or even possible human figure than it is about a “good wife” and the social function that such a wife plays. In Oikonomic logic, a good wife is essential to the maintenance of contract because she is what metabolizes the worlds of inner and outer, simultaneously managing the inner domestic world of care while parsing or keeping distinct its contagion from the outer world of contract. That Alicia is white, heteronormative, upper middle class, as well as upwardly mobile and legally powerful, is essential to aligning her with the power of contract, yet her work is fundamentally that of parsing contagions to the system. Prison bodies and prison as a site of the “general population” haunt the show as though we are meant to forget that Alicia’s labor and its value are predicated on the existence of space beyond contract—a space of being removed from visibility. The figure of the good wife therefore not only operates as a shared boundary, but reproduces the distinctions between contractable relations and invisible, obligated labor or what I will call metabolization. Our increasingly digitized, datafied, networked, and surveilled world is fully populated by such good wives. We call them interfaces. But they should also be seen as a proliferation of contracts, which are rewriting the nature of who and what may participate.

    I would like to argue that good wives—or interfaces—and their necessary shadow world of obligated labor are useful frameworks for understanding the paradox I mentioned when I first began: how and why has digital abundance been ushered in on the heels of massive income inequality and political dispossession? In the logic of the Oikos, the good wife of the interface stands in both contradistinction and harmony with the metabolizing labor of the system she manages, which is comprised of those specifically removed from “the labor” relation—domestic workers, care workers, prisoner laborers—those who must be “present” yet without recognition. The interface stands in both contradistinction and harmony with the algorithm that is made to be present and made to adapt. I want to argue that the “marriage” of the proliferation of interfaces with the ubiquitous and adaptive computation of digital algorithms is an Oikonomic infrastructure. It is a proliferation of contracts meant to ensure that the “contagion” of the algorithm, which I will explore in a moment, remains “black boxed” or removed from visibility, while nonetheless ensuring that such contagious invisible work shores up the power of contract and its ability to redirect capital along genealogical lines. While Piketty doesn’t use the language of the Oikos, we might read the arrival of his work as a confirmation that we are in a moment re-establishing such a “household logic”—an expansion of capital that comes with quite a new foundation for the transfer of wealth.

    While the good wife or interface is a boundary which, borrowing from Celia Lury, marks a frame for the simultaneous capture and redeployment of data, it is the digital algorithm that undergirds or makes possible the interfaces’ ontological authority to “measure.” However, algorithms, if we follow Luciana Parisi, are not simply executing a string of code, nor simply providing the interface with a “measure” of an existing world. Rather, algorithms are, as Parisi writes in her work on contagious architecture, performing entities that are “not simply representations of data, but are occasions of experience insofar as they prehend information in their own way.” Here Parisi is ascribing to the algorithm a Whiteheadian ontology of process, which sees the algorithm as its own spatio-temporal entity capable of grasping, including, or excluding data. Prehension implies not so much a choice, but a relation of allure by which all entities (not only algorithms) call one another into being, or come into being as events or what Whitehead calls “occasions of experience.” For Parisi, via Whitehead, the algorithm is no longer simply a tool to accomplish a task, but an “actuality, defined by an automated prehension of data in the computational processing of probability.”

    [Image: Wedding in Ancient Greece]

    Much like the good wife of the Greek household, who must manage and organize—but is nonetheless dependent on—the contagious (and therefore made to be invisible) domestic labor of servants and slaves, the good wife of the interface manages and organizes the prehensive capacities of the algorithm, which are then misrecognized as simply “doing their job” or executing their code in a divine order of being. However, if we follow Parisi, prehension does not simply imply the direct “reproduction of that which is prehended”; rather, prehension should itself be understood as a “contagion.” Writing, “infinite amounts of data irreversibly enter and determine the function of algorithmic procedures. It follows that contagion describes the immanence of randomness in programming.” This contagion, for Parisi, means that “algorithmic prehensions are quantifications of infinite qualities that produce new qualities.” Rather than simply “doing their job”, as it were, algorithms are fundamentally generative. They are, for Parisi, producing not only new digital spaces, but also programmed architectural forms and urban infrastructures that “expose us to new mode of living, but new modes of thinking.” Algorithms are metabolizing a world of infinite and incomputable data that is then mistaken by the interfaces as a “measure” of that world—a measure that can not only stand in for contract, but can give rise to a proliferation of micro-contracts that populate the circulations of sociality.

    Control, then, if we can return to that idea, has not simply come about as an undulation or a demise of discipline, but through an architecture of metabolization and measure that has never disavowed the function of contract. It is, in fact, an architecture quite successful at re-writing the very terms of contractual arrangements. Algorithmic architectures may no longer seek to maintain the walls of the household, but they are nonetheless engaged in the rapid production of an Oikos all the same.


    _____

    Karen Gregory (@claudiakincaid) is the Title V Lecturer in Sociology in the Department of Interdisciplinary Arts and Sciences/Center for Worker Education at the City College of New York, where she is also the faculty head of City Lab. Her work explores the intersection of digital labor, affect, and contemporary spirituality, with an emphasis on the role of the laboring body. Karen is a founding member of CUNY Graduate Center’s Digital Labor Working Group and her writings have appeared in Women’s Studies Quarterly, Women and Performance, Visual Studies, Contexts, The New Inquiry, and Dis Magazine.


  • Dissecting the “Internet Freedom” Agenda

    Dissecting the “Internet Freedom” Agenda

    a review of Shawn M. Powers and Michael Jablonski, The Real Cyber War: The Political Economy of Internet Freedom (University of Illinois Press, 2015)
    by Richard Hill
    ~
    Disclosure: the author of this review is thanked in the Preface of the book under review.

    Both radical civil society organizations and mainstream defenders of the status quo agree that the free and open Internet is threatened: see for example the Delhi Declaration, Bob Hinden’s 2014 Year End Thoughts, and Kathy Brown’s March 2015 statement at a UNESCO conference. The threats include government censorship and mass surveillance, but also the failure of governments to control rampant industry concentration and commercial exploitation of personal data, which increasingly takes the form of providing “free” services in exchange for personal information that is resold at a profit, or used to provide targeted advertising, also at a profit.

    In Digital Disconnect, Robert McChesney has explained how the Internet, which was supposed to be a force for the improvement of human rights and living conditions, has been used to erode privacy and to increase the concentration of economic power, to the point where it is becoming a threat to democracy. In Digital Depression, Dan Schiller has documented how US policies regarding the Internet have favored its geo-economic and geo-political goals, in particular the interests of its large private companies that dominate the information and communications technology (ICT) sector worldwide.

    Shawn M. Powers and Michael Jablonski’s seminal new book The Real Cyber War takes us further down the road of understanding what went wrong, and what might be done to correct the situation. Powers, an assistant professor at Georgia State University, specializes in international political communication, with particular attention to the geopolitics of information and information technologies. Jablonski is an attorney and presidential fellow, also at Georgia State.

    There is a vast literature on internet governance (see for example the bibliography in Radu, Chenou, and Weber, eds., The Evolution of Global Internet Governance), but much of it is ideological and normative: the author espouses a certain point of view, explains why that point of view is good, and proposes actions that would lead to the author’s desired outcome (a good example is Milton Mueller’s well researched but utopian Networks and States). There is nothing wrong with that approach: on the contrary, such advocacy is necessary and welcome.

    But a more detached analytical approach is also needed, and Powers and Jablonski provide exactly that. Their objective is to help us understand (citing from p. 19 of the paperback edition) “why states pursue the policies they do”. The book “focuses centrally on understanding the numerous ways in which power and control are exerted in cyberspace” (p. 19).

    Starting from the rather obvious premise that states compete to shape international policies that favor their interests, and using the framework of political economy, the authors outline the geopolitical stakes and show how questions of power, and not human rights, are the real drivers of much of the debate about Internet governance. They show how the United States has deliberately used a human rights discourse to promote policies that further its geo-economic and geo-political interests. And how it has used subsidies and government contracts to help its private companies to acquire or maintain dominant positions in much of the ICT sector.

    Jacob Silverman has decried "the misguided belief that once power is arrogated away from doddering governmental institutions, it will somehow find itself in the hands of ordinary people". Powers and Jablonski dissect the mechanisms by which vibrant government institutions deliberately transferred power to US corporations in order to further US geo-economic and geo-political goals.

    In particular, they show how a “freedom to connect” narrative is used by the USA to attempt to transform information and personal data into commercial commodities that should be subject to free trade. Yet all states (including the US) regulate, at least to some extent, the flow of information within and across their borders. If information is the “new oil” of our times, then it is not surprising that states wish to shape the production and flow of information in ways that favor their interests. Thus it is not surprising that states such as China, India, and Russia have started to assert sovereign rights to control some aspect of the production and flow of information within their borders, and that European Union courts have made decisions on the basis of European law that affect global information flows and access.

    As the authors put the matter (p. 6): “the [US] doctrine of internet freedom … is the realization of a broader [US] strategy promoting a particular conception of networked communication that depends on American companies …, supports Western norms …, and promotes Western products.” (I would personally say that it actually supports US norms and US products and services.) As the authors point out, one can ask (p. 11): “If states have a right to control the types of people allowed into their territory (immigration), and how its money is exchanged with foreign banks, then why don’t they have a right to control information flows from foreign actors?”

    To be sure, any such controls would have to comply with international human rights law. But the current US policies go much further, implying that those human rights laws must be implemented in accordance with the US interpretation, meaning few restrictions on freedom of speech, weak protection of privacy, and ever stricter protection for intellectual property. As Powers and Jablonski point out (p. 31), the US does not hesitate to promote restrictions on information flows when that promotes its goals.

    Again, the authors do not make value judgments: they explain in Chapter 1 how the US deliberately attempts to shape (to a large extent successfully) international policies, so that both actions and inactions serve its interests and those of the large corporations that increasingly influence US policies.

    The authors then explain how the US military-industrial complex has morphed into an information-industrial complex, with deleterious consequences for both industry and government, consequences such as "weakened oversight, accountability, and industry vitality and competitiveness" (p. 23) that create risks for society and democracy. As the authors say, the shift "from adversarial to cooperative and laissez-faire rule making is a keystone moment in the rise of the information-industrial complex" (p. 61).

    As a specific example, they focus on Google, showing how it (largely successfully) aims to control and dominate all aspects of the data market, from production, through extraction, refinement, infrastructure and demand. A chapter is devoted to the economics of internet connectivity, showing how US internet policy is basically about getting the largest number of people online, so that US companies can extract ever greater profits from the resulting data flows. They show how the network effects, economies of scale, and externalities that are fundamental features of the internet favor first-movers, which are mostly US companies.

    The remedy to such situations is well known: government intervention, widely accepted regarding air transport, road transport, pharmaceuticals, etc., and yet unthinkable for many regarding the internet. But why? As the authors put the matter (p. 24): "While heavy-handed government controls over the internet should be resisted, so should a system whereby internet connectivity requires the systematic transfer of wealth from the developing world to the developed." Yet freedom of information is put forward to justify specific economic practices that would not be easy to justify otherwise, for example "no government taxes companies for data extraction or for data imports/exports, both of which are heavily regulated aspects of markets exchanging other valuable commodities" (p. 97).

    The authors show in detail how the so-called internet multi-stakeholder model of governance is dominated by insiders and used "under the veil of consensus" (p. 136) to further US policies and corporations. A chapter is devoted to explaining how all states control, at least to some extent, information flows within their territories, presenting detailed studies of how four states (China, Egypt, Iran, and the USA) have addressed the challenges of maintaining political control while respecting (or not) freedom of speech. The authors then turn to the very current topic of mass surveillance and its relation to anonymity, showing how, when the US presents the internet and "freedom to connect" as analogous to public speech and town halls, it is deliberately arguing against anonymity and against privacy – and this, of course, in order to avoid restrictions on its mass surveillance activities.

    Thus the authors posit that there are tensions between the US call for “internet freedom” and other states’ calls for “information sovereignty”, and analyze the 2012 World Conference on International Telecommunications from that point of view.

    Not surprisingly, the authors conclude that international cooperation, recognizing the legitimate aspirations of all the world’s peoples, is the only proper way forward. As the authors put the matter (p. 206): “Activists and defenders of the original vision of the Web as a ‘fair and humane’ cyber-civilization need to avoid lofty ‘internet freedom’ declarations and instead champion specific reforms required to protect the values and practices they hold dear.” And it is with that in mind, as a counterweight to US and US-based corporate power, that a group of civil society organizations have launched the Internet Social Forum.

    Anybody who is seriously interested in the evolution of internet governance and its impact on society and democracy will enjoy reading this well researched book and its clear exposition of key facts. One can only hope that the Council of Europe will heed Powers and Jablonski's advice and avoid adopting more resolutions such as its Committee of Ministers' recent recommendation to member states, which merely panders to the US discourse and US power that Powers and Jablonski describe so aptly. And one can fondly hope that this book will help to inspire a change in course that will restore the internet to what it might become (and what many thought it was supposed to be): an engine for democracy and social and economic progress, justice, and equity.
    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2 Review Digital Studies magazine.


  • The Internet vs. Democracy

    The Internet vs. Democracy

    a review of Robert W. McChesney, Digital Disconnect: How Capitalism Is Turning the Internet Against Democracy (The New Press, 2014)
    by Richard Hill
    ~
    Many of us have noticed that much of the news we read is the same, no matter which newspaper or web site we consult: they all seem to be recycling the same agency feeds. To understand why this is happening, there are few better analyses than the one developed by media scholar Robert McChesney in his most recent book, Digital Disconnect. McChesney is a Professor in the Department of Communication at the University of Illinois at Urbana-Champaign, specializing in the history and political economy of communications. He is the author or co-author of more than 20 books, among the best-known of which are The Endless Crisis: How Monopoly-Finance Capital Produces Stagnation and Upheaval from the USA to China (with John Bellamy Foster, 2012), The Political Economy of Media: Enduring Issues, Emerging Dilemmas (2008), Communication Revolution: Critical Junctures and the Future of Media (2007), and Rich Media, Poor Democracy: Communication Politics in Dubious Times (1999), and is co-founder of Free Press.

    Many see the internet as a powerful force for improvement of human rights, living conditions, the economy, rights of minorities, etc. And indeed, like many communications technologies, the internet has the potential to facilitate social improvements. But in reality the internet has recently been used to erode privacy and to increase the concentration of economic power, leading to increasing income inequalities.

    One might have expected that democracies would have harnessed the internet to serve the interests of their citizens, as they largely did with other technologies such as roads, telegraphy, telephony, air transport, pharmaceuticals (even if they used these to serve only the interests of their own citizens and not the general interests of mankind).

    But this does not appear to be the case with respect to the internet: it is used largely to serve the interests of a few very wealthy individuals, or certain geo-economic and geo-political interests. As McChesney puts the matter: “It is supremely ironic that the internet, the much-ballyhooed champion of increased consumer power and cutthroat competition, has become one of the greatest generators of monopoly in economic history” (131 in the print edition). This trend to use technology to favor special interests, not the general interest, is not unique to the internet. As Josep Ramoneda puts the matter: “We expected that governments would submit markets to democracy and it turns out that what they do is adapt democracy to markets, that is, empty it little by little.”

    McChesney’s book explains why this is the case: despite its great promise and potential to increase democracy, various factors have turned the internet into a force that is actually destructive to democracy, and that favors special interests.

    McChesney reminds us what democracy is, citing Aristotle (53): “Democracy [is] when the indigent, and not the men of property are the rulers. If liberty and equality … are chiefly to be found in democracy, they will be best attained when all persons alike share in the government to the utmost.”

    He also cites US President Lincoln’s 1861 warning against despotism (55): “the effort to place capital on an equal footing with, if not above, labor in the structure of government.” According to McChesney, it was imperative for Lincoln that the wealthy not be permitted to have undue influence over the government.

    Yet what we see today in the internet is concentrated wealth in the form of large private companies that exert increasing influence over public policy matters, going so far as to call openly for governance systems in which they have equal decision-making rights with the elected representatives of the people. Current internet governance mechanisms are celebrated as paragons of success, whereas in fact they have not been successful in achieving the social promise of the internet. And it has even been said that such systems need not be democratic.

    What sense does it make for the technology that was supposed to facilitate democracy to be governed in ways that are not democratic? It makes business sense, of course, in the sense of maximizing profits for shareholders.

    McChesney explains how profit-maximization in the excessively laissez-faire regime that is commonly called neoliberalism has resulted in increasing concentration of power and wealth, social inequality and, worse, erosion of the press, leading to erosion of democracy. Nowhere is this more clearly seen than in the US, which is the focus of McChesney’s book. Not only has the internet eroded democracy in the US, it is used by the US to further its geo-political goals; and, adding insult to injury, it is promoted as a means of furthering democracy. Of course it could and should do so, but unfortunately it does not, as McChesney explains.

    The book starts by noting the importance of the digital revolution and by summarizing the views of those who see it as an engine of good (the celebrants) versus those who point out its limitations and some of its negative effects (the skeptics). McChesney correctly notes that a proper analysis of the digital revolution must be grounded in political economy. Since the digital revolution is occurring in a capitalist system, it is necessarily conditioned by that system, and it necessarily influences that system.

    A chapter is devoted to explaining how and why capitalism does not equal democracy: on the contrary, capitalism can well erode democracy, the contemporary United States being a good example. To dig deeper into the issues, McChesney approaches the internet from the perspective of the political economy of communication. He shows how the internet has profoundly disrupted traditional media, and how, contrary to the rhetoric, it has reduced competition and choice – because the economies of scale and network effects of the new technologies inevitably favor concentration, to the point of creating natural monopolies (who is number two after Facebook? Or Twitter?).

    The book then documents how the initially non-commercial, publicly-subsidized internet was transformed into an eminently commercial, privately-owned capitalist institution, in the worst sense of “capitalist”: domination by large corporations, monopolistic markets, endless advertising, intense lobbying, and cronyism bordering on corruption.

    Having explained what happened in general, McChesney focuses on what happened to journalism and the media in particular. As we all know, it has been a disaster: nobody has yet found a viable business model for respectable online journalism. As McChesney correctly notes, vibrant journalism is a pre-condition for democracy: how can people make informed choices if they do not have access to valid information? The internet was supposed to broaden our sources of information. Sadly, it has not, for the reasons explained in detail in the book. Yet there is hope: McChesney provides concrete suggestions for how to deal with the issue, drawing on actual experiences in well functioning democracies in Europe.

    The book goes on to call for specific actions that would create a revolution in the digital revolution, bringing it back to its origins: by the people, for the people. McChesney’s proposed actions are consistent with those of certain civil society organizations, and will no doubt be taken up in the forthcoming Internet Social Forum, an initiative whose intent is precisely to revolutionize the digital revolution along the lines outlined by McChesney.

    Anybody who is aware of the many issues threatening the free and open internet, and democracy itself, will find much to reflect upon in Digital Disconnect, not just because of its well-researched and incisive analysis, but also because it provides concrete suggestions for how to address the issues.

    _____

    Richard Hill, an independent consultant based in Geneva, Switzerland, was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He frequently writes about internet governance issues for The b2 Review Digital Studies magazine.


  • Frank Pasquale — To Replace or Respect: Futurology as if People Mattered

    Frank Pasquale — To Replace or Respect: Futurology as if People Mattered

    a review of Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (W.W. Norton, 2014)

    by Frank Pasquale

    ~

    Business futurism is a grim discipline. Workers must either adapt to the new economic realities, or be replaced by software. There is a “race between education and technology,” as two of Harvard’s most liberal economists insist. Managers should replace labor with machines that require neither breaks nor sick leave. Superstar talents can win outsize rewards in the new digital economy, as they now enjoy global reach, but they will replace thousands or millions of also-rans. Whatever can be automated, will be, as competitive pressures make fairly paid labor a luxury.

    Thankfully, Erik Brynjolfsson and Andrew McAfee's The Second Machine Age (2MA) downplays these zero-sum tropes. Brynjolfsson & McAfee (B&M) argue that the question of the distribution of the gains from automation is just as important as the competitions for dominance it accelerates. 2MA invites readers to consider how societies will decide what type of bounty from automation they want, and what is wanted first.  The standard, supposedly neutral economic response ("whatever the people demand, via consumer sovereignty") is unconvincing. As inequality accelerates, the top 5% (of income earners) do 35% of the consumption. The top 1% is responsible for an even more disproportionate share of investment. Its richest members can just as easily decide to accelerate the automation of the wealth defense industry as they can allocate money to robotic construction, transportation, or mining.

    A humane agenda for automation would prioritize innovations that complement (jobs that ought to be) fulfilling vocations, and substitute machines for dangerous or degrading work. Robotic meat-cutters make sense; robot day care is something to be far more cautious about. Most importantly, retarding automation that controls, stigmatizes, and cheats innocent people, or sets up arms races with zero productive gains, should be a much bigger part of public discussions of the role of machines and software in ordering human affairs.

    2MA may set the stage for such a human-centered automation agenda. Its diagnosis of the problem of rapid automation (described in Part I below) is compelling. Its normative principles (II) are eclectic and often humane. But its policy vision (III) is not up to the challenge of channeling and sequencing automation. This review offers an alternative, while acknowledging the prescience and insight of B&M’s work.

    I. Automation’s Discontents

    For B&M, the acceleration of automation ranks with the development of agriculture, or the industrial revolution, as one of the “big stories” of human history (10-12). They offer an account of the “bounty and spread” to come from automation. “Bounty” refers to the increasing “volume, variety, and velocity” of any imaginable service or good, thanks to its digital reproduction or simulation (via, say, 3-D printing or robots). “Spread” is “ever-bigger differences among people in economic success” that they believe to be just as much an “economic consequence” of automation as bounty.[1]

    2MA briskly describes various human workers recently replaced by computers.  The poor souls who once penned corporate earnings reports for newspapers? Some are now replaced by Narrative Science, which seamlessly integrates new data into ready-made templates (35). Concierges should watch out for Siri (65). Forecasters of all kinds (weather, home sales, stock prices) are being shoved aside by the verdicts of “big data” (68). “Quirky,” a startup, raised $90 million by splitting the work of making products between a “crowd” that “votes on submissions, conducts research, suggest improvements, names and brands products, and drives sales” (87), and Quirky itself, which “handles engineering, manufacturing, and distribution.” 3D printing might even disintermediate firms like Quirky (36).

    In short, 2MA presents a kaleidoscope of automation realities and opportunities. B&M skillfully describe the many ways automation both increases the “size of the pie,” economically, and concentrates the resulting bounty among the talented, the lucky, and the ruthless. B&M emphasize that automation is creeping up the value chain, potentially substituting machines for workers paid better than the average.

    What's missing from the book is the new wave of conflicts that would arise if those at the very top of the value chain (or, less charitably, the rent and tribute chain) were to be replaced by robots and algorithms. When BART workers went on strike, Silicon Valley worthies threatened to replace them with robots. But one could just as easily call for the venture capitalists to be replaced with algorithms. Indeed, one venture capital firm added an algorithm to its board in 2013.  Travis Kalanick, the CEO of Uber, responded to a question on driver wage demands by bringing up the prospect of robotic drivers. But given Uber's multiple legal and PR fails in 2014, a robot would probably have done a better job running the company than Kalanick.

    That's not "crazy talk" of communistic visions along the lines of Marx's "expropriate the expropriators," or Chile's failed Cybersyn.[2]  Thiel Fellow and computer programming prodigy Vitalik Buterin has stated that automation of the top management functions at firms like Uber and AirBnB would be "trivially easy."[3] Automating the automators may sound like a fantasy, but it is a natural outgrowth of mantras (e.g., "maximize shareholder value") that are commonplaces among the corporate elite. To attract and retain the support of investors, a firm must obtain certain results, and the short-run paths to attaining them (such as cutting wages, or financial engineering) are increasingly narrow.  And in today's investment environment of rampant short-termism, the short is often the only term there is.

    In the long run, a secure firm can tolerate experiments. Little wonder, then, that the largest firm at the cutting edge of automation—Google—has a secure near-monopoly in search advertising in numerous markets. As Peter Thiel points out in his recent Zero to One, today's capitalism rewards the best monopolist, not the best competitor. Indeed, even the Department of Justice's Antitrust Division appeared to agree with Thiel in its 1995 guidelines on antitrust enforcement in innovation markets. It viewed intellectual property as a good monopoly, the rightful reward to innovators for developing a uniquely effective process or product. And its partner in federal antitrust enforcement, the Federal Trade Commission, has been remarkably quiescent in response to emerging data monopolies.

    II. Propertizing Data

    For B&M, intellectual property—or, at least, the returns accruing to intellectual insight or labor—plays a critical role in legitimating inequalities arising out of advanced technologies.  They argue that "in the future, ideas will be the real scarce inputs in the world—scarcer than both labor and capital—and the few who provide good ideas will reap huge rewards."[4] But many of the leading examples of profitable automation are not "ideas" per se, or even particularly ingenious algorithms. They are brute force feats of pattern recognition: for example, Google studies past patterns of clicks to determine which search results, and which ads, will delight and persuade each of its hundreds of millions of users. The critical advantage there is the data, not the skill in working with it.[5] Google will demur, but if it were really confident, it would license the data to other firms, certain that others could not best its algorithmic prowess.  It does not, because the data is its critical, self-reinforcing advantage. It is a commonplace in big data literatures to say that the more data one has, the more valuable any piece of it becomes—something Googlers would agree with, as long as antitrust authorities aren't within earshot.

    As sensors become more powerful and ubiquitous, feats of automated service provision and manufacture become more easily imaginable.  The Baxter robot, for example, merely needs to have a trainer show it how to move in order to ape the trainer’s own job. (One is reminded of the stories of US workers flying to India to train their replacements how to do their job, back in the day when outsourcing was the threat du jour to U.S. living standards.)

    How to train a Baxter robot. Image source: Inc.

    From direct physical interaction with a robot, it is a short step to, say, holographic or data-driven programming.  For example, a surveillance camera trained on a worker could, over a period of days, months, or years, record every movement or statement of the worker, and then replicate them in response to whatever stimuli prompted the original movements or statements.

    B&M appear to assume that such data will be owned by the corporations that monitor their own workers.  For example, McDonald's could train a camera on every cook and cashier, then download the contents into robotic replicas. But it's just as easy to imagine a legal regime where, say, workers' rights to the data describing their movements would be their property, and firms would need to negotiate to purchase the rights to it.  If dance movements can be copyrighted, so too can the sweeps and wipes of a janitor. Consider, too, that the extraordinary advances in translation accomplished by programs like Google Translate are in part based on translations by humans of United Nations documents released into the public domain.[6] Had the translators' work not been covered by "work-made-for-hire" or similar doctrines, they might well have kept their copyrights, and shared in the bounty now enjoyed by Google.[7]

    Of course, the creativity of translation may be greater than that displayed by a janitor or cashier. Copyright purists might thus reason that the merger doctrine denies copyrightability to the one best way (or small suite of ways) of doing something, since the idea of the movement and its expression cannot be separated. Grant that, and one could still imagine privacy laws giving workers the right to negotiate over how, and how pervasively, they are watched. There are myriad legal regimes governing, in minute detail, how information flows and who has control over it.

    I do not mean to appropriate here Jaron Lanier’s ideas about micropayments, promising as they may be in areas like music or journalism. A CEO could find some critical mass of stockers or cooks or cashiers to mimic even if those at 99% of stores demanded royalties for the work (of) being watched. But the flexibility of legal regimes of credit, control, and compensation is under-recognized. Living in a world where employers can simply record everything their employees do, or Google can simply copy every website that fails to adopt “robots.txt” protection, is not inevitable. Indeed, according to renowned intellectual property scholar Oren Bracha, Google had to “stand copyright on its head” to win that default.[8]

    Thus B&M are wise to acknowledge the contestability of value in the contemporary economy.  For example, they build on the work of MIT economists Daron Acemoglu and David Autor to demonstrate that “skill biased technical change” is a misleading moniker for trends in wage levels.  The “tasks that machines can do better than humans” are not always “low-skill” ones (139). There is a fair amount of play in the joints in the sequencing of automation: sometimes highly skilled workers get replaced before those with a less complex and difficult-to-learn repertoire of abilities.  B&M also show that the bounty predictably achieved via automation could compensate the “losers” (of jobs or other functions in society) in the transition to a more fully computerized society. By seriously considering the possibility of a basic income (232), they evince a moral sensibility light years ahead of the “devil-take-the-hindmost” school of cyberlibertarianism.

    III. Proposals for Reform

    Unfortunately, some of B&M's other ideas for addressing the possibility of mass unemployment in the wake of automation are less than convincing.  They praise platforms like Lyft for providing new opportunities for work (244), perhaps forgetting that, earlier in the book, they described the imminent arrival of the self-driving car (14-15). Of course, one can imagine decades of tiered driving, where the wealthy get self-driving cars first, and car-less masses turn to the scrambling drivers of Uber and Lyft to catch rides. But such a future seems more likely to end in a deflationary spiral than in sustainable growth and an equitable distribution of purchasing power. Like the generation traumatized by the Great Depression, millions subjected to reverse auctions for their labor power, forced to price themselves ever lower to beat back the bids of the technologically unemployed, are not going to be in a mood to spend. Learned helplessness, retrenchment, and miserliness are just as likely a consequence as buoyant "re-skilling" and self-reinvention.

    Thus B&M’s optimism about what they call the “peer economy” of platform-arranged production is unconvincing.  A premier platform of digital labor matching—Amazon’s Mechanical Turk—has occasionally driven down the wage for “human intelligence tasks” to a penny each. Scholars like Trebor Scholz and Miriam Cherry have discussed the sociological and legal implications of platforms that try to disclaim all responsibility for labor law or other regulations. Lilly Irani’s important review of 2MA shows just how corrosive platform capitalism has become. “With workers hidden in the technology, programmers can treat [them] like bits of code and continue to think of themselves as builders, not managers,” she observes in a cutting aside on the self-image of many “maker” enthusiasts.

    The "sharing economy" is a glidepath to precarity, accelerating the same fate for labor in general as "music sharing services" sealed for most musicians. The lived experience of many "TaskRabbits," which B&M boast about using to make charts for their book, cautions against reliance on disintermediation as a key to opportunity in the new digital economy. Sarah Kessler describes making $1.94 an hour labeling images for a researcher who put the task out for bid on MTurk.  The median active TaskRabbit in her neighborhood made $120 a week; Kessler cleared $11 an hour on her best day.

    Resistance is building, and may create fairer terms online.  For example, Irani has helped develop a “Turkopticon” to help Turkers rate and rank employers on the site. Both Scholz and Mike Konczal have proposed worker cooperatives as feasible alternatives to Uber, offering drivers both a fairer share of revenues, and more say in their conditions of work. But for now, the peer economy, as organized by Silicon Valley and start-ups, is not an encouraging alternative to traditional employment. It may, in fact, be worse.

    Therefore, I hope B&M are serious when they say “Wild Ideas [are] Welcomed” (245), and mention the following:

    • Provide vouchers for basic necessities. . . .
    • Create a national mutual fund distributing the ownership of capital widely and perhaps inalienably, providing a dividend stream to all citizens and assuring the capital returns do not become too highly concentrated.
    • Depression-era Civilian Conservation Corps to clean up the environment, build infrastructure.

    Speaking of the non-automatable, we could add the Works Progress Administration (WPA) to the CCC suggestion above.  Revalue the arts properly, and the transition may even add to GDP.

    Moses Soyer, "Artists on WPA" (1935). Image source: Smithsonian American Art Museum

    Unfortunately, B&M distance themselves from the ideas, saying, “we include them not necessarily to endorse them, but instead to spur further thinking about what kinds of interventions will be necessary as machines continue to race ahead” (246).  That is problematic, on at least two levels.

    First, a sophisticated discussion of capital should be at the core of an account of automation,  not its periphery. The authors are right to call for greater investment in education, infrastructure, and basic services, but they need a more sophisticated account of how that is to be arranged in an era when capital is extraordinarily concentrated, its owners have power over the political process, and most show little to no interest in long-term investment in the skills and abilities of the 99%. Even the purchasing power of the vast majority of consumers is of little import to those who can live off lightly taxed capital gains.

    Second, assuming that "machines continue to race ahead" is a dodge, a refusal to name the responsible parties running the machines.  Someone is designing and purchasing algorithms and robots. Illah Reza Nourbakhsh's Robot Futures suggests another metaphor:

    Today most nonspecialists have little say in charting the role that robots will play in our lives.  We are simply watching a new version of Star Wars scripted by research and business interests in real time, except that this script will become our actual world. . . . Familiar devices will become more aware, more interactive and more proactive; and entirely new robot creatures will share our spaces, public and private, physical and digital. . . . Eventually, we will need to read what they write, we will have to interact with them to conduct our business transactions, and we will often mediate our friendships through them.  We will even compete with them in sports, at jobs, and in business. [9]

    Nourbakhsh nudges us closer to the truth, focusing on the competitive angle. But the "we" he describes is also inaccurate. There is a group that will never have to "compete" with robots at jobs or in business—rentiers. Too many of them are narrowly focused on how quickly they can replace needy workers with undemanding machines.

    For the rest of us, another question concerning automation is more appropriate: how much of it can we be stuck with? A black-card-toting bigshot will get the white glove treatment from AmEx; the rest are shunted into automated phone trees. An algorithm determines the shifts of retail and restaurant workers, oblivious to their needs for rest, a living wage, or time with their families.  Automated security guards, police, and prison guards are on the horizon. And for many of the "expelled," the homines sacri, automation is a matter of life and death: drone technology can keep small planes on their tracks for hours, days, months—as long as it takes to execute orders.

    B&M focus on "brilliant technologies," rather than the brutal or bumbling instances of automation.  It is fun to imagine a souped-up Roomba making the drudgery of housecleaning a thing of the past.  But domestic robots have been around since 2000, and the median wage-earner in the U.S. does not appear to be on a fast track to a Jetsons-style life of ease.[10] Such workers are just as likely to be targeted by the algorithms of the everyday as they are to be helped by them. Mysterious scoring systems routinely stigmatize persons without their even knowing. They reflect the dark side of automation—and we are in the dark about them, given the protections that trade secrecy law affords their developers.

    IV. Conclusion

    Debates about robots and the workers “struggling to keep up” with them are becoming stereotyped and stale. There is the standard economic narrative of “skill-biased technical change,” which acts more as a tautological, post hoc, retrodictive, just-so story than a coherent explanation of how wages are actually shifting. There is cyberlibertarian cornucopianism, as Google’s Ray Kurzweil and Eric Schmidt promise there is nothing to fear from an automated future. There is dystopianism, whether intended as a self-preventing prophecy, or entertainment. Each side tends to talk past the other, taking for granted assumptions and values that its putative interlocutors reject out of hand.

    Set amidst this grim field, 2MA is a clear advance. B&M are attuned to possibilities for the near and far future, and write about each in accessible and insightful ways.  The authors of The Second Machine Age claim even more for it, billing it as a guide to epochal change in our economy. But it is better understood as the kind of “big idea” book that can name a social problem, underscore its magnitude, and still dodge the elaboration of solutions controversial enough to scare off celebrity blurbers.

    One of 2MA’s blurbers, Clayton Christensen, offers a backhanded compliment that exposes the core weakness of the book. “[L]earners and teachers alike are in a perpetual mode of catching up with what is possible. [The Second Machine Age] frames a future that is genuinely exciting!” gushes Christensen, eager to fold automation into his grand theory of disruption. Such a future may be exciting for someone like Christensen, a millionaire many times over who won’t lack for food, medical care, or housing if his forays fail. But most people do not want to be in “perpetually catching up” mode. They want secure and stable employment, a roof over their heads, decent health care and schooling, and some other accoutrements of middle class life. Meaning is found outside the economic sphere.

    Automation could help stabilize and cheapen the supply of necessities, giving more persons the time and space to enjoy pursuits of their own choosing. Or it could accelerate arms races of various kinds: for money, political power, armaments, spying, stock trading. As long as purchasing power alone—whether of persons or corporations—drives the scope and pace of automation, there is little hope that the “brilliant technologies” B&M describe will reliably lighten burdens that the average person experiences. They may just as easily entrench already great divides.

    All too often, the automation literature is focused on replacing humans, rather than respecting their hopes, duties, and aspirations. A central task of educators, managers, and business leaders should be finding ways to complement a workforce’s existing skills, rather than sweeping that workforce aside. That does not simply mean creating workers with skill sets that better “plug into” the needs of machines, but also, doing the opposite: creating machines that better enhance and respect the abilities and needs of workers.  That would be a “machine age” welcoming for all, rather than one calibrated to reflect and extend the power of machine owners.

    _____

    Frank Pasquale (@FrankPasquale) is a Professor of Law at the University of Maryland Carey School of Law. His recent book, The Black Box Society: The Secret Algorithms that Control Money and Information (Harvard University Press, 2015), develops a social theory of reputation, search, and finance.  He blogs regularly at Concurring Opinions. He has received a commission from Triple Canopy to write and present on the political economy of automation. He is a member of the Council for Big Data, Ethics, and Society, and an Affiliate Fellow of Yale Law School’s Information Society Project. He is a frequent contributor to The b2 Review Digital Studies section.

    _____

    [1] One can quibble with the idea of automation as necessarily entailing “bounty”—as Yves Smith has repeatedly demonstrated, computer systems can just as easily “crapify” a process once managed well by humans. Nor is “spread” a necessary consequence of automation; well-distributed tools could well counteract it. It is merely a predictable consequence, given current finance and business norms and laws.

    [2] For a definition of “crazy talk,” see Neil Postman, Stupid Talk, Crazy Talk: How We Defeat Ourselves by the Way We Talk and What to Do About It (Delacorte, 1976). For Postman, “stupid talk” can be corrected via facts, whereas “crazy talk” “establishes different purposes and functions than the ones we normally expect.” If we accept the premise of labor as a cost to be minimized, what better to cut than the compensation of the highest paid persons?

    [3] Conversation with Sam Frank at the Swiss Institute, Dec. 16, 2014, sponsored by Triple Canopy.

    [4] In Brynjolfsson, McAfee, and Michael Spence, “New World Order: Labor, Capital, and Ideas in the Power Law Economy,” an article promoting the book. Unfortunately, as with most statements in this vein, B&M&S give us little idea how to identify a “good idea” other than one that “reap[s] huge rewards”—a tautology all too common in economic and business writing.

    [5] Frank Pasquale, The Black Box Society (Harvard University Press, 2015).

    [6] Programs, both in the sense of particular software regimes, and the program of human and technical efforts to collect and analyze the translations that were the critical data enabling the writing of the software programs behind Google Translate.

    [9] Illah Reza Nourbakhsh, Robot Futures (MIT Press, 2013), pp. xix-xx.

    [10] Erwin Prassler and Kazuhiro Kosuge, “Domestic Robotics,” in Bruno Siciliano and Oussama Khatib, eds., Springer Handbook of Robotics (Springer, 2008), p. 1258.

  • "Internet Freedom": Digital Empire?

    "Internet Freedom": Digital Empire?

    a review of Dan Schiller, Digital Depression: Information Technology and Economic Crisis (University of Illinois Press, 2014)
    by Richard Hill
    ~
    Disclosure: the author of this review is mentioned in the Acknowledgements section of the reviewed book.

    Computers and telecommunications have revolutionized and disrupted all aspects of human activity, and even behavior. The impacts are broad and profound, with important consequences for governments, businesses, non-profit activities, and individuals. Networks of interconnected computer systems are driving many disruptive changes in business practices, information flows, and financial flows. Foremost amongst those networks is the Internet, much of which is global, or at least trans-national.

    According to some, the current governance arrangement for the Internet is nearly ideal. In particular, its global multi-stakeholder model of governance has resulted in a free and open Internet, which has enabled innovation and driven economic growth and well-being around the world. Others are of the view that things have not worked out that well. In particular, the Internet has resulted in mass surveillance by governments and by private companies, in monopolization, commodification and monetization of information and knowledge, in inequitable flows of finances between poor and rich countries, and in erosion of cultural diversity. Further, those with central positions of influence have used it to consolidate power and to establish a new global regime of control and exploitation, under the guise of favoring liberalization, while in reality reinforcing the dominance and profitability of major corporations at the expense of the public interest, and the overarching position of certain national interests at the expense of global interests and well-being.[1]

    Dan Schiller’s book helps us to understand how rational and well-informed people can hold such diametrically opposing views. Schiller dissects the history of the growth of recent telecommunications networks and shows how they have significantly (indeed, dramatically) affected economic and political power relations around the world. And how, at the same time, US policies have consistently favored capital over labor, and have resulted in transfers of vast sums from developing countries to developed countries (in particular through interest on loans).

    Participants wearing Edward Snowden and Chelsea Manning masks at 2013 Berlin protests against NSA PRISM program (image source: Wikipedia)

    Schiller documents in some detail how US policies that ostensibly promote the free flow of information around the world, the right of all people to connect to the Internet, and free speech, are in reality policies that have, by design, furthered the geo-economic and geo-political goals of the US, including its military goals, its imperialist tendencies, and the interests of large private companies based (if not always headquartered, at least for tax purposes) in the US. For example, strict copyright protection is held to be consistent with the free flow of information, as is mass surveillance. Cookies and exploitation of users’ personal data by Internet companies are held to be consistent with privacy rights (indeed, as Schiller shows, the US essentially denies the existence of the right to personal privacy for anything related to the Internet). There should be no requirements that data be stored locally, lest it escape the jurisdiction of the US surveillance apparatus. And very high profits and dominant positions in key Internet markets do not spark anti-trust or competition law investigations, as they might in any other industry.

    As Schiller notes, great powers have historically used communication systems to further their economic and strategic interests, so why should the US not so use the Internet? Thus stated, the matter seems obvious. But the matter is rarely thus stated. On the contrary, the Internet is often touted as a generous gift to the world’s people, able to lift them out of poverty and oppression, and to bring them the benefits of democracy and (or) free markets. Schiller’s carefully researched analysis is thus an important contribution.

    Schiller provides context by tracing the origins of the current financial and economic crises, pointing out that it is paradoxical that growing investments in Information and Communication Technologies (ICTs), and the supposed resultant productivity gains, did not prevent a major global economic crisis. Schiller explains how transnational corporations demanded liberalization of the terms on which they could use their private networks, and received it, resulting in profound changes in commodity chains, that is, in the flow of production of goods and services. In particular, there has been an increase in transnational production, and this has reinforced the importance of transnational corporations. Further, ICTs have changed the nature of labor's contribution to production, enabling many tasks to be shifted to unskilled workers (or even to consumers themselves: automatic teller machines (ATMs), for example, turn each of us into a bank clerk). However, the growth of the Internet did not transcend the regular economy: on the contrary, it was wrapped into the economy's crisis tendencies and even exacerbated them.

    Schiller gives detailed accounts of these transformations in the automotive and financial industries, and in the military. The study of the effect of ICTs on the military is of particular interest considering that the Internet was originally developed as a military project, and that it is currently used by US intelligence agencies as a prime medium for the collection of information.

    Schiller then turns to telecommunications, explaining the very significant changes that took place in the USA starting in the late 1970s. Those changes resulted in a major restructuring of the dominant telecommunications playing field in the US and ultimately led to the growth of the Internet, a development which had world-wide effects. Schiller carefully describes the various US government actions that initiated and nurtured those changes, and that were instrumental in exporting similar changes to the rest of the world.

    Next, he analyzes how those changes affected and enabled the production of the networks themselves, the hardware used to build the networks and to use them (e.g. smartphones), and the software and applications that we all use today.

    Moving further up the value chain, Schiller explains how data-mining, coupled with advertising, fuels the growth of the dominant Internet companies, and how this data-mining is made possible only by denying data privacy, and how states use the very same techniques to implement mass surveillance.

    Having described the situation, Schiller proceeds to analyze it from economic and political perspectives. Given that the US was an early adopter of the Internet, it is not surprising that, because of economies of scale and network effects, US companies dominate the field (except in China, as Schiller explains in detail). Schiller describes how, given the influence of US companies on US politics, US policies, both domestic and foreign, are geared to allowing, or in fact favoring, ever-increasing concentration in key Internet markets, which is to the advantage of the US and its private companies–and despite the easy cant about decentralization and democratization.

    The book describes how the US views the Internet as an extraterritorial domain, subject to no authority except that of the US government and that of the dominant US companies. Each dictates its own law in specific spheres (for example, the US government has supervised, up to now, the management of Internet domain names and addresses; while US companies dictate unilateral terms and conditions to their users, terms and conditions that imply that users give up essentially all rights to their private data).

    Schiller describes how this state of affairs has become a foreign policy objective, with the US being willing to incur significant criticism and to pay a significant political price in order to maintain the status quo. That status quo is referred to as "the multi-stakeholder model", in which private companies are essentially given veto power over government decisions (or at least over the decisions of any government other than the US government), a system that can be described as "corporatism". Not only does the US staunchly defend that model for the Internet, it even tries to export it to other fields of human activity. And this despite, or perhaps because of, the fact that the system allows companies to make profits when possible (in particular by exploiting state-built infrastructure or guarantees), and to transfer losses to states when necessary (as happened, for example, with the banking crisis).

    Schiller carefully documents how code words such as “freedom of access” and “freedom of speech” are used to justify and promote policies that in fact merely serve the interests of major US companies and, at the same time, the interests of the US surveillance apparatus, which morphed from a cottage industry into a major component of the military-industrial complex thanks to the Internet. He shows how the supposed open participation in key bodies (such as the Internet Engineering Task Force) is actually a screen to mask the fact that decisions are heavily influenced by insiders affiliated with US companies and/or the US government, and by agencies bound to the US as a state.

    As Schiller explains, this increasing dominance of US business and US political imperialism have not gone unchallenged, even if the challenges to date have mostly been rhetorical (again, except for China). Conflicts over Internet governance are related to rivalries between competing geo-political and geo-economic blocs, rivalries which will likely increase if economic growth continues to be weak. The rivalries are both between nations and within nations, and some are only emerging right now (for example, how to tax the digital economy, or the apparent emerging divergence of views between key US companies and the US government regarding mass surveillance).

    Indeed, the book explains how the challenges to US dominance have become more serious in the wake of the Snowden revelations, which have resulted in a significant loss of market share for some of the key US players, in particular with respect to cloud computing services. Those losses may have begun to drive the tip of a wedge between the so-far congruent goals of US companies and the US government.

    In a nutshell, one can sum up what Schiller describes by paraphrasing Marx: "Capitalists of the world, unite! You have nothing to lose but the chains of government regulation." But, as Schiller hints in his closing chapter, the story is still unfolding, and just as things did not work out as Marx thought they would, so things may not work out as the forces that currently dominate the Internet wish. So the slogan for the future might well be "Internet users of the world, unite! You have nothing to lose but the chains of exploitation of your personal data."

    This book, and its extensive references, will be a valuable reference work for all future research in this area. And surely there will be much future research, and many more historical analyses of what may well be some of the key turning points in the history of mankind: the transition from the industrial era to the information era and the disruptions induced by that transition.

    _____

    Richard Hill, an independent consultant based in Geneva, Switzerland, was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). An earlier version of this review first appeared on Newsclick.

    _____

    1. From item 11 of document WSIS+10/4/6 of the preparatory process for the WSIS+10 High Level Event, which provided "a special platform for high-ranking officials of WSIS (World Summit on the Information Society) stakeholders, government, private sector, civil society and international organizations to express their views on the achievements, challenges and recommendations on the implementation" of various earlier internet governance initiatives backed by the International Telecommunication Union (ITU), the United Nations specialized agency for information and communications technologies, and other participants in the global internet governance sphere.
