a review of Ulrik Ekman, Jay David Bolter, Lily Díaz, Morten Søndergaard, and Maria Engberg, eds., Ubiquitous Computing, Complexity, and Culture (Routledge 2016)
by Quinn DuPont
~
It is a truism today that digital technologies are ubiquitous in Western society (and increasingly so for the rest of the globe). With this ubiquity, it seems, comes complexity. This is the premise of Ubiquitous Computing, Complexity, and Culture (Routledge 2016), a new volume edited by Ulrik Ekman, Jay David Bolter, Lily Díaz, Morten Søndergaard, and Maria Engberg.
There are, of course, many ways to approach such a large and important topic: through the study of political economy, technology (sometimes leaning towards technological determinism or instrumentalism), discourse and rhetoric, globalization, or art and media. This collection focuses on art and media. In fact, only a small fraction of the chapters do not deal entirely or mostly with art, art practices, and artists. Similarly, the volume includes a significant number of interviews with artists (six of the forty-three chapters and editorial introductions). This focus on art and media is both the volume’s strength and one of its major weaknesses.
By focusing on art, Ubiquitous Computing, Complexity, and Culture pushes the bounds of how we might commonly understand contemporary technology practice and development. For example, in their chapter, Dietmar Offenhuber and Orkan Telhan develop a framework for understanding, and potentially deploying, indexical visualizations for complex interfaces. Offenhuber and Telhan use James Turrell’s art installation Meeting as an example of the conceptual shortening of causal distance between object and representation, a kind of Peircean index, and one way to think about systems of representation. Another of their examples, Natalie Jeremijenko’s One Trees installation of one hundred cloned trees, strengthens and complicates the idea of the causal index: the trees come from identical genetic stock, yet develop in natural and different ways. The uniqueness of the fully grown trees is a literal “visualization” of their different environments, not unlike a seismograph, a characteristically indexical visualization technology. From these examples, Offenhuber and Telhan conclude that indexical visualizations may offer a fruitful “set of constraints” (300) that the information designer might draw on when developing new interfaces that deal with massive complexity. Many other examples and interrogations of art and art practices throughout the chapters offer unexpected and penetrating analysis into facets of ubiquitous and complex technologies.
A persistent challenge with art and media analyses of digital technology and computing, however, is that the familiar and convenient epistemological orientation, and the ready comparisons that result, are often to film, cinema, and theater. Studies reliant on this epistemology tend to make a range of interesting yet ultimately illusory observations, which fail to explain the richness and uniqueness of modern information technologies. In my opinion, there are many important ways in which film, cinema, and theater are simply not like modern digital technologies. This epistemological orientation is, arguably, a consequence of the history of disciplinary allegiances—symptomatic of digital studies and new media studies originating in screen studies—and traceable to Lev Manovich’s agenda-setting The Language of New Media (2001), which relished the mimetic connections resulting from the historical quirk that the most obvious computing technologies tend to have screens.
Because of this orientation, some of the chapters fail to critically engage with the technologies, events, and practices that most affect lived society. A very good artwork may go a long way toward exposing social and political activities that might otherwise be invisible or known only to specialists, but it is the role of the critic and the academic to concretize these activities and draw thick connections between art and “conventional” social issues. Concrete specificity, while avoiding reductionist traps, is the key to avoiding what amounts to belated criticism.
This specificity about social issues might come in the form of engagement with the normative aspects of ubiquitous and complex digital technologies. Instead of explaining why surveillance is a feature of modern life (as several chapters do; this is, by now, well-worn academic ground), it might be more useful to ask why consumers and policy-makers alike have turned so quickly to privacy-enhancing technologies as a solution (to be sold by the high-technology industry). In a similar vein, the unglamorous accessibility aspects of wearable technologies now offer assistance and perceptual, physical, or cognitive enhancement (as described in Ellis and Goggin’s chapter), alongside unprecedented opportunities for surveillance and monetization. Digital infrastructures—both active and failing—now drive a great deal of modern society, but despite their ubiquity they are hard to see, and therefore tend not to get much attention. These kinds of banal and invisible—ubiquitous—cases tend not to be captured in the boundary-pushing work of artists, and are underrepresented (though not entirely absent) in the analyses here.
A number of chapters also trade on old canards, such as worrying about information overload, “junk” data whizzing across the Internet, time “wasted” online, online narcissism, business models based solely on data collection, and “declining” privacy. Whether any of these things is empirically true—when viewed contextually and precisely—is somewhat beside the point if we are not offered new analyses or solutions. Otherwise, these kinds of criticisms run the risk of sounding like old people nostalgically complaining about an imagined world before technological or informational ubiquity and complexity. “Traditional” human values might be an important object of study, but not as the pile-on Left-leaning liberal romanticism prevalent in far too many humanistic inquiries into the digital.
Another issue is that some of the chapters seem to be oddly antiquated for a book published in 2016. As we all know, the publication of edited collections can often take longer than anyone would like, but for several chapters, the examples, terminology, and references feel unusually dated. These dated chapters do not necessarily have the advantage of critical distance (in the way that properly historical study does), and neither do they capture the pulse of the current situation—they just feel old.
Before turning to a sample of the truly excellent chapters in this volume, I must pause to comment on the book’s physical production. On the back cover, Jussi Parikka calls Ubiquitous Computing, Complexity, and Culture a “massively important volume.” This assessment might have been simplified by just calling it “a massive volume.” Indeed, by some back-of-the-napkin calculations, the 406 dense pages amount to about 330,000 words. Like cheesecake, sometimes a little bit of something is better than a lot. And while such a large book might seem like good value, putting an estimated 330,000 words into a single volume demands considerable care in typesetting and layout, care that is unfortunately absent here. At about 90 characters per line and 46 lines per page, all set in a single column, the tiny text on extremely long lines strains even this relatively young reviewer’s eyes and practical comprehension. When trudging through already-dense theory and the obfuscated rhetoric that typically accompanies it (common in this edited collection), the reading experience is often painful. On the positive side, in the middle of the 406 pages of text are an additional 32 pages of full-color plates, a nice addition and an effective way to highlight the volume’s sympathies in art and media. An extensive index is also included.
Despite my criticisms of the approach of many of the chapters, the book’s typesetting and layout, and the editors’ decision to attempt to collocate so much material in a single volume, there are a number of outstanding chapters, which more than redeem any other weaknesses.
Elaborating on a theme from her 2011 book Programmed Visions (MIT), Wendy H.K. Chun describes why memory, and the ability to forget, is an important aspect of Mark Weiser’s original notion of ubiquitous computing (in his 1991 Scientific American article). (Chun also notes that the word “ubiquitous” comes from the “Ubiquitarians,” a Lutheran sect who believed Christ was present “everywhere at once” and therefore invisible.) According to Chun’s reading of Weiser, to reach a state of ubiquitous computing, machines must lose their individualized identity or importance. Unindividuated computers therefore had to remember, by tracking users, so that users could correspondingly forget (about the technology) and “thus think and live” (161). The long history of computer memory, and its rhetorical emergence out of technical “storage,” is an essential part of the origins of our current technological landscape. Chun notes that prior to the EDVAC machine (and its strategic alignment with cognitive models of computation), storage was a well-understood word that etymologically suggested an orientation to the future (“stores look toward a future”). Memory, on the other hand, contained within it the act of recall and repetition (recall Meno’s slave in Plato’s dialogue). So, when EDVAC embedded memory within the machine, it changed “memory by making memory storage” (162). If we wanted to rehabilitate Weiser’s original image, of being able to “think and live,” we would need to refuse the “deadening of the world brought about by memory as storage and realize the fundamentally collective nature of memory and writing” (162).
Sean Cubitt does an excellent job of exposing the political economy of ubiquitous technologies by focusing on the ways that enclosure and externalization occur in information environments, interrogating the term “information economy.” Cubitt traces the history of enclosures from the alienation of fifteenth-century peasants from their land, through the enclosure of skills to produce dead labour in nineteenth-century factories, to the conversion of knowledge into information today, which is subsequently stored in databases and commercialized as intellectual property—alienating individuals from their own knowledge. Accompanying this process are a range of externalizations, predominantly impacting the poor and the indigenous. One of the insightful examples Cubitt offers of this process of externalization is the regulation of radio spectrum in New Zealand, and the subsequent challenge by Maori people who, under the Waitangi Treaty, are entitled to “all forms of commons that pre-existed the European arrival” (218). According to the Maori, radio spectrum is a form of commons, and therefore the New Zealand government is not permitted to claim exclusive authority to manage the spectrum (as practically all Western governments do). Not content simply to offer critique, Cubitt concludes his chapter with a (very) brief discussion of potential solutions, focusing on the reimagining of peer-to-peer technology by Robert Verzola of the Philippines Green Party. Peer-to-peer technology, Cubitt tentatively suggests, may help reassert the commons as commonwealth, which might even salvage traditional knowledge from information capitalism.
Katie Ellis and Gerard Goggin discuss the mechanisms of locative technologies for differently-abled people. Ellis and Goggin conclude that devices like the later-model iPhone (not the first release) and the now-maligned Google Glass offer unique value propositions for people across a spectrum of impairment and “complex disability effects” (274). For people who rely on them for day-to-day assistance and wayfinding, these devices are ubiquitous in the sense Weiser originally imagined—disappearing from view and becoming integrated into individual lifeworlds.
John Johnston ends the volume as strongly as N. Katherine Hayles’s short foreword opened it, describing the dynamics of “information events” in a world of viral media, big data, and, as he elaborates in an extended example, complex and high-speed financial instruments. Johnston describes how events like the 2010 “Flash Crash,” in which the Dow fell nearly a thousand points, lost a trillion dollars in value, and rebounded, all within five minutes, are essentially uncontrollable and unpredictable. This narrative, Johnston points out, has been detailed before, but he twists it to argue that such a financial system, in its totality, may be “fundamentally resistant to stability and controllability” (389). The reason for this fundamental instability and uncontrollability is that the financial market cannot be understood as a systematic, efficient system of exchange events that just happens to be problematically coded by high-frequency, automated, and limit-driven technologies today. Rather, the financial market is a “series of different layers of coded flows that are differentiated according to their relative power” (390). By understanding financialization as coded flows, of both power and information, we gain new insight into a critical technology that is both ubiquitous and complex.
_____
Quinn DuPont studies the roles of cryptography, cybersecurity, and code in society, and is an active researcher in digital studies, digital humanities, and media studies. He also writes on Bitcoin, cryptocurrencies, and blockchain technologies, and is currently involved in Canadian SCC/ISO blockchain standardization efforts. He has nearly a decade of industry experience as a Senior Information Specialist at IBM, IT consultant, and usability and experience designer.