boundary 2


  • Richard Hill — “Free” Isn’t Free (Review of Michael Kende, The Flip Side of Free)

    a review of Michael Kende, The Flip Side of Free: Understanding the Economics of the Internet (MIT Press, 2021)

    by Richard Hill

    ~

    This book is a must-read for anyone who wishes to engage in meaningful discussions of Internet governance, which will increasingly involve economic issues (17-20). It explains clearly why we don’t have to pay in money for services that are obviously expensive to provide. Indeed, as we all know, we get lots of so-called free services on the Internet: search facilities, social networks, e-mail, etc. But, as the old saying goes, “there ain’t no such thing as a free lunch.” It costs money to provide all those Internet services (10), and somebody has to pay for them somehow. In fact, users pay for them, by allowing (often unwittingly: 4, 75, 92, 104, 105) the providers to collect personal data which is then aggregated and used to sell other services (in particular advertising, 69) at a large profit. The book correctly notes that there are both advantages (79) and disadvantages (Chapters 5-8) to the current regime of surveillance capitalism. Had I written a book on the topic, I would have been more critical and would have preferred a subtitle such as “The Triumph of Market Failures in Neo-Liberal Regimes.”

    Michael Kende is a Senior Fellow and Visiting Lecturer at the Graduate Institute of International and Development Studies, Geneva, a Senior Adviser at Analysys Mason, a Digital Development Specialist at the World Bank Group, and former Chief Economist of the Internet Society. He has worked as an academic economist at INSEAD and as a US regulator at the Federal Communications Commission. In this clearly written and well-researched book, he explains, in layman’s terms, the seeming paradox of “free” services that nevertheless yield big profits.

    The secret is to exploit the monetary value of something that had some, but not much, value until a bit over twenty years ago: data (63). The value of data is now so large that the companies that exploit it are the most valuable companies in the world, worth more than old giants such as producers of automobiles or petroleum. In fact data is so central to today’s economy that, as the author puts it (143): “It is possible that a new metric is needed to measure market power, especially when services are offered for free. Where normally a profitable increase in price was a strong metric, the new metric may be the ability to profitably gather data – and monetize it through advertising – without losing market share.” To my knowledge, this is an original idea, and it should be taken seriously by anyone interested in the future evolution of, not just the Internet, but society in general (for the importance of data, see for example the annex of this paper, and also here).

    The core value of this book lies in Chapters 5 through 10, which provide economic explanations – in easy-to-understand lay language – of the current state of affairs. They cover the essential elements: the importance of data, and why a few companies have dominant positions. Readers looking for somewhat more technical economic explanations may consider reading this handbook, and readers looking for the history of the geo-economic policies that resulted in the current state of affairs can read the books reviewed here and here.

    Chapter 5 of the book explains why most of us trade off the privacy of our data in exchange for “free” services: the benefits may outweigh the risks (88), we may underestimate the risks (89), and we may not actually know the risks (91, 92, 105). As the author correctly notes (99-105), there likely are market failures that should be corrected by government action, such as data privacy laws. The author mentions the European Union GDPR (100); I think that it is also worth mentioning the less known, but more widely adopted, Council of Europe Convention (108). And I would have preferred an even more robust criticism of jurisdictions that allow data brokers to operate secretively (104).

    Chapter 6 explains how market failures have resulted in inadequate security in today’s Internet. In particular, users cannot know if a product has an adequate level of security (information asymmetry), and one user’s lack of security may not affect him or her, but may affect others (negative externalities). As the author says, there is a need to develop security standards (e.g. devices should not ship with default administrator passwords) and to impose liability on companies that market insecure products (120, 186).

    Chapter 7 explains well the economic concepts of economies of scale and network effect (see also 23), how they apply to the Internet, and why (122-129) they facilitated the emergence of the current dominant platforms (such as Amazon, Facebook, Google, and their Chinese equivalents). This results in a winner-takes-all situation: the best company becomes the only significant player (133-137). At present, competition policy (140-142) has not dealt with this issue satisfactorily, and innovative approaches that recognize the central role and value of data may be needed. I would have appreciated an economic discussion of how much of the gig economy is based not on actual innovation (122) but on violating labor laws or housing and consumer protection laws. I would also have expected a more extensive discussion of two-sided markets (135): while the topic is technical, I believe that the author has the skills to explain it clearly for laypeople. It is a pity that the author didn’t explore, at least briefly, the economic issues relating to the lack of standardization, and interoperability, of key widely used services, such as teleconferencing: nobody would accept having to learn to use a plethora of systems in order to make telephone calls; why do we accept that for video calls?
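    The winner-takes-all dynamic described above can be illustrated with a toy simulation (all parameters are hypothetical, chosen only to show the mechanism, not taken from the book): if each new user picks a platform with probability proportional to its existing user base, even a small random early lead tends to snowball into dominance.

```python
import random

def simulate(seed, n_users=10_000, n_platforms=3):
    """Toy network-effect model: each new user joins a platform with
    probability proportional to its current size (each platform starts
    with one seed user so that all can attract followers)."""
    random.seed(seed)
    sizes = [1] * n_platforms
    for _ in range(n_users):
        r = random.uniform(0, sum(sizes))
        cum = 0
        for i, s in enumerate(sizes):
            cum += s
            if r <= cum:
                sizes[i] += 1
                break
    return sorted(sizes, reverse=True)

# Run a few trials: the largest platform usually ends up with a
# disproportionate share, even though all platforms started equal.
for seed in range(3):
    sizes = simulate(seed)
    print(f"trial {seed}: sizes={sizes}, leader share={sizes[0] / sum(sizes):.0%}")
```

    This is a sketch of preferential attachment only; it abstracts away quality differences, multi-homing, and switching costs, all of which matter in real platform competition.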

    The chapter correctly notes that data is the key (143-145) and notes that data sharing (145-147, 187, 197) may help to reintroduce competition. While it is true that data is in principle non-rivalrous (194), in practice at present it is hoarded and treated as private property by those who collect it. It would have been nice if the author had explored methods for ensuring the equitable distribution of the value added of data, but that would no doubt have required an extensive discussion of equity. It is a pity that the author didn’t discuss the economic implications, and possible justification, of providing certain base services (e.g. e-mail, search) as public services: after all, if physical mail is a public service, why shouldn’t e-mail also be a public service?

    Chapter 8 documents the digital divide: access to Internet is much less affordable, and widespread, in developing countries than it is in developed countries. As the author points out, this is not a desirable situation, and he outlines solutions (including infrastructure sharing and universal service funds (157)), as have others (for example here, here, here, and here). It would have been nice if the author had explored how peering (48) may disadvantage developing countries (in particular because much of their content is hosted abroad (60, 162)); and evaluated the economics of relying on large (and hence efficient and low-cost) data centers in hubs as opposed to local hosting (which has lower transmission costs but higher operating costs); but perhaps those topics would have strayed from the main theme of the book. The author correctly identifies the lack of payment systems as a significant hindrance to greater adoption of e-commerce in developing countries (164); and, of course, the relative disadvantage with respect to data of companies in developing countries (170, 195).
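    The hub-versus-local-hosting trade-off mentioned above can be framed as a simple break-even calculation (all figures below are hypothetical placeholders, not measured costs): a foreign hub offers lower per-server operating costs but every access incurs international transit, while local hosting reverses the trade-off, so which is cheaper depends on traffic volume.

```python
def annual_cost(hosting_cost, transit_cost_per_gb, traffic_gb):
    """Total yearly cost = fixed hosting cost + per-GB delivery cost."""
    return hosting_cost + transit_cost_per_gb * traffic_gb

# Hypothetical numbers, for illustration only.
hub_hosting, hub_transit = 10_000, 0.08      # cheap hosting, costly int'l transit
local_hosting, local_transit = 25_000, 0.01  # costlier hosting, cheap local delivery

# Break-even traffic solves:
#   hub_hosting + hub_transit * t == local_hosting + local_transit * t
breakeven_gb = (local_hosting - hub_hosting) / (hub_transit - local_transit)
print(f"break-even at about {breakeven_gb:,.0f} GB/year")

for traffic in (100_000, 300_000):
    hub = annual_cost(hub_hosting, hub_transit, traffic)
    local = annual_cost(local_hosting, local_transit, traffic)
    cheaper = "hub" if hub < local else "local"
    print(f"{traffic:,} GB: hub ${hub:,.0f} vs local ${local:,.0f} -> {cheaper}")
```

    Under these made-up parameters, low-traffic content is cheaper to host abroad and high-traffic content cheaper to host locally, which is one way to see why popular local content tends to migrate to local hosting as markets grow.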

    Chapter 9 explains why security and trust on the Internet must be improved, and correctly notes that increasing privacy will not necessarily increase trust (183). The Chapter reiterates some of the points outlined above, and rightly concludes: “There is good reason to raise the issue [of lack of trust] when seeing the market failures taking place today with cybersecurity, sometimes based on the most easily avoidable mistakes, and the lack of efforts to fix them. If we cannot protect ourselves today, what about tomorrow?” (189)

    Chapter 10 correctly argues that change is needed, and outlines the key points: “data is the basis for market power; lack of data is the hidden danger of the digital divide; and data will train the algorithms of the future AI” (192). Even when things go virtual, there is a role for governments: “who but governments could address market power and privacy violations and respond to state-sponsored attacks against their citizens or institutions?” (193) Data governance will be a key topic for the future: “how to leverage the unique features of data and avoid the costs: how to generate positive good while protecting privacy and security for personal data; how to maintain appropriate property rights to reward innovation and investment while checking market power; how to enable machine learning while allowing new companies strong on innovation and short on data to flourish; how to ensure that the digital divide is not replaced by a data divide.” (195)

    Chapters 1 through 4 purport to explain how certain technical features of the Internet condition its economics. The chapters will undoubtedly be useful for people who don’t have much knowledge of telecommunication and computer networks, but they are unfortunately grounded in an Internet-centric view that does not, in my view, accord sufficient weight to the long history of telecommunications, and, consequently, considers as inevitable things that were actually design choices. It is important to recall that the Internet was originally designed as a national (US) non-public military and research network (27-28). As such, it originally provided only for 7-bit ASCII character sets (thus excluding characters with accents), it did not provide for usage-based billing, and it assumed that end-to-end encryption could be used to provide adequate security (108). It was not designed to allow insecure end-user devices (such as personal computers) to interconnect on a global scale.

    The Internet was originally funded by governments, so when it was privatized, some method of funding other than conventional usage charges had to be invented (such as receiver-pays (53) and advertising). It is correct (39, 44) that differences in pricing are due to differences in technology, but only because the Internet technologies were not designed to facilitate consumption/volume-based pricing. I would have expected an economics-based discussion of how this makes it difficult to optimize networks, which always have choke points (54-55). For example, I am connected by DSL, and I pay for a set bandwidth, which is restricted by my ISP. While the fiber can carry higher bandwidth (I just have to pay more for it), at any given time (as the author correctly notes) my actual bandwidth depends on what my neighbours who share the same multiplexor are doing. If one of my neighbours is streaming full-HD movies all day long, my performance will degrade, yet they may or may not be paying the same price as me (55). This is not economically efficient. Thus, contrary to what the author posits (46), best-effort packet switching (the Internet model) is not always more efficient than circuit-switching: if guaranteed quality of service is needed, circuit-switching can be more efficient than paying for more bandwidth, even if, in case of overload, service is denied rather than being “merely” degraded (those of us who have had to abandon an Internet teleconference because of poor quality will appreciate that degradation can equal service denial; and musicians who have tried to perform virtually during the pandemic would have appreciated a guaranteed quality of service that would have ensured synchronization between performers and between video and sound).
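    The contrast drawn above between graceful degradation and admission control can be made concrete with a toy calculation (capacities and rates are hypothetical): under best-effort sharing, n active users on a link each get roughly capacity/n, so everyone's quality erodes together; under circuit switching, each admitted user gets a guaranteed rate, but arrivals beyond the number of available circuits are refused outright.

```python
def shared_rate(capacity_mbps, active_users):
    """Best-effort model: the link is divided evenly among active users,
    so per-user throughput degrades gradually as load rises."""
    return capacity_mbps / max(active_users, 1)

def circuit_admitted(capacity_mbps, rate_mbps, arrivals):
    """Circuit model: each admitted user gets a guaranteed rate_mbps;
    arrivals beyond the circuit count are blocked, not degraded."""
    circuits = int(capacity_mbps // rate_mbps)
    admitted = min(arrivals, circuits)
    return admitted, arrivals - admitted

capacity = 100  # Mbps on the shared segment (hypothetical)
for users in (5, 10, 25, 50):
    per_user = shared_rate(capacity, users)
    admitted, blocked = circuit_admitted(capacity, rate_mbps=4, arrivals=users)
    print(f"{users:2d} users: best-effort {per_user:5.1f} Mbps each | "
          f"circuits: {admitted} served at 4 Mbps, {blocked} blocked")
```

    With 50 users, best-effort gives everyone 2 Mbps, below the (assumed) 4 Mbps a videoconference needs, so all 50 calls suffer; the circuit model serves 25 calls at full quality and refuses the other 25, which is the reviewer's point that denial can be preferable to universal degradation.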

    As the author correctly notes, (59) some form of charging is necessary when resources are scarce; and (42, 46, 61) it is important to allocate scarcity efficiently. It’s a pity that the author didn’t explore the economics of usage-based billing, and dedicated circuits, as methods for the efficient allocation of scarcity (again, in the end there is always a scarce resource somewhere in the system). And it’s a pity that he didn’t dig into the details of the economic factors that result in video traffic being about 70% of all traffic (159): is that due to commercial video-on-demand services (such as Netflix), or to user file sharing (such as YouTube) or to free pornography (such as PornHub)? In addition, I would have appreciated a discussion of the implications of the receiver-pays model, considering that receivers pay not only for the content they requested (e.g. Wikipedia pages), but also for content that they don’t want (e.g. spam) or didn’t explicitly request (e.g. advertising).

    The mention in passing of the effects of Internet on democracy (6) fails to recognize the very deleterious indirect effects resulting from the decline of traditional media. Contrary to what the book implies (7, 132), breaking companies up would not necessarily be deleterious, and making platforms responsible for content would not necessarily stifle innovation, even if such measures could have downsides.

    It is true (8) that anything can be connected to the Internet (albeit with a bit more configuration than the book implies), but it is also true that this facilitates phishing, malware attacks, spoofing, abuse of social networks, and so forth.

    Contrary to what the author implies (22), ICT standards have always been free to use (with some exceptions relating to intellectual property rights; further, the exceptions allowed by IETF are the same as those allowed by ITU and most other standards-making bodies (34)). Core Internet standards have always been free to access online, whereas that was not the case in the past for telecommunications standards; however, that has changed, and ITU telecommunications standards are also freely available online. While it is correct (24) that access to traditional telecommunication networks was tightly controlled, and that early data networks were proprietary, traditional telecommunications networks and later data networks were based on publicly-available standards. While it is correct (31) that anybody can contribute to Internet standards-making, in practice the discussions are dominated by people who are employed by companies that have a vested interest in the standards (see for example pp. 149-152 of the book reviewed here, and Chapters 5 and 6 of the book reviewed here); further, W3C (32) and IEEE (33) are membership organizations, as are the more traditional standardization bodies. While users of standards (in particular manufacturers) have a role in making Internet standards, that is the case for most standard-making; end-users do not have a role in making Internet standards (32). Regarding standards (33), the author fails to mention the key role of ITU-R with respect to the availability of WiFi spectrum and of ITU-T with respect to xDSL (51) and compression.

    The OSI Model (26) was a joint effort of CCITT/ITU, IEC, and ISO. Contrary to what the author implies (29), e-mail existed in some form long before the Internet, albeit as proprietary systems, and there were other efforts to standardize e-mail; it is a pity that the author didn’t provide an economic analysis of why SMTP prevailed over more secure e-mail protocols, and how its lack of billing features facilitates spam (I have been told that the “simple” in SMTP refers to absence of the security and billing features that encumbered other e-mail protocols).

    While much of the Internet is decentralized (30), so is much of the current telephone system. On the other hand, Internet’s naming and addressing is far more centralized than that of telephony.

    However, these criticisms of specific bits of Chapters 1 through 4 do not in any way detract from the value of the rest of the book which, as already mentioned, should be required reading for anyone who wishes to engage in discussions of Internet-related matters.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2o Review Digital Studies magazine.

  • Richard Hill – Review of Bauer and Latzer, Handbook on the Economics of the Internet

    a review of Johannes M. Bauer and Michal Latzer, eds., Handbook on the Economics of the Internet (Edward Elgar, 2016)

    by Richard Hill

    ~

    The editors of this book must be commended for having undertaken the task of producing it: it must surely have taken tremendous persistence and patience to assemble the broad range of chapters.  The result is a valuable book, even if some parts are disappointing.  As is often the case for a compilation of articles written by different authors, the quality of the individual contributions is uneven: some are excellent, others not.  The book is valuable because it identifies many of the key issues regarding the economics of the Internet, but it is somewhat disappointing because some of the topics are not covered in sufficient depth and because some key topics are not covered at all.  For example, the digital divide is mentioned cursorily on pp. 6-7 of the hardback edition and there is no discussion of its historical origins, economic causes, future evolution, etc.

    Yet there is extensive literature on the digital divide, such as easily available overall ITU reports from 2016 and 2017, or more detailed ITU regional studies regarding international Internet interconnectivity for Africa and Latin America.  The historical impact of the abolition of the traditional telephony account settlement scheme is covered summarily in Chapter 2 of my book The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (2013).  One might have expected that a book dedicated to the economics of the Internet would have started from that event, explained its consequences, and analysed proposals regarding how to address the digital divide, for example the proposals made during the World Summit on the Information Society to create some kind of fund to bridge the gap (those proposals were not accepted).  I would have expected such a book to discuss the possibilities and the ramifications of an international version of the universal service funds that are used in many countries to minimize national digital divides between low-density rural areas and high-density cities.  But there is no discussion at all of these topics in the book.

    And there is little discussion of Artificial Intelligence (some of which is enabled by data obtained through the Internet) or of the disruption of labour markets that some believe is or will be caused by the Internet.  For a summary treatment of these topics, with extensive references, see sections 1 and 8 of my submission to the Working Group on Enhanced Cooperation.

    The Introduction of the book correctly notes that “Scale economies, interdependencies, and abundance are pervasive [in the Internet] and call for analytical concepts that augment the traditional approaches” (p. 3).  Yet, the book fails, on the whole, to deliver sufficient detail regarding such analytical concepts, an exception being the excellent discussion on pp. 297-308 of the Internet’s economic environment for innovation, in particular pp. 301-303.

    Of the 569 pages of text (in the hardcover edition), only 22 or so contain quantitative charts or tables (eight are in one chapter), and of those only 12 or so are original research.  Only one page has equations.  Of course the paucity of data in the book is due to the fact that data regarding the Internet is hard to obtain: in today’s privatized environment, companies strive to collect data, but not to publish it.  But economics is supposed to be a quantitative discipline, at least in part, so it would have been valuable if the book had included a chapter on the reasons for the relative paucity of reliable data (both micro and macro) concerning the Internet and the myriad of transactions that take place on the Internet.

    In a nutshell, the book gives good, comprehensive, and legible descriptions of many trees, but in some cases without sufficient quantitative detail, whereas it mostly fails to provide an analysis of the forest that those trees comprise (except for the brilliant chapter by Eli Noam titled “From the Internet of Science to the Internet of Entertainment”).

    The book will be very valuable for people who know little or nothing about the Internet and its economics.  Those who know something will benefit from the extensive references given at the end of each chapter.  Those who know specific topics well will not learn much from this book.  A more appropriate title for the book would have been “A Comprehensive Introduction to the Economics of the Internet”.

    The rest of this review consists of brief reviews of each of the chapters of the book.  We start with the strongest chapter, followed by the weakest chapter, then review the other chapters in the order in which they appear in the book.

    1. From the Internet of Science to the Internet of Entertainment

    This chapter is truly excellent, as one would expect, given that it is written by Eli Noam.  It captures succinctly the key policy questions regarding the economics of the Internet.  We cite p. 564:

    • How to assure the financial viability of infrastructure?
    • Market power in the entertainment Internet?
    • Does vertical integration impede competition?
    • How to protect children, old people, and traditional morality?
    • How to protect privacy and security?
    • What is the impact on trade? What is the impact of globalization?
    • How to assure the interoperability of clouds?

    It is a pity that the book did not use those questions as key themes to be addressed in each chapter.  And it is a pity that the book did not address the industrial economics issues so well put forward.  We cite p. 565:

    Another economic research question is how to assure the financial viability of the infrastructure.  The financial balance between infrastructure, services, and users is a critical issue.  The infrastructure is expensive and wants to be paid.  Some of the media services are young and want to be left to grow.  Users want to be served generously with free content and low-priced, flat-rate data service.  Fundamental economics of competition push towards price deflation, but market power, and maybe regulation, pull in another direction.  Developing countries want to see money from communications as they did in the days of traditional telecom.

    Surely the other chapters of the book could have addressed these issues, which are being discussed publicly, see for example section 4 of the Summary of the 2017 ITU Open Consultation on so-called Over-the-Top (OTT) services.

    Noam’s discussion of the forces that are leading to fragmentation (pp. 558-560) is excellent.  He does not cite Mueller’s recent book on the topic, no doubt because this chapter of the book was written before Mueller’s book was published.  Mueller’s book focuses on state actions, whereas Noam gives a convincing account of the economic drivers of fragmentation, and how such increased diversity may not actually be a negative development.

    Some minor quibbles: Noam does not discuss the economic impact of adult entertainment, yet it is no doubt significant.  The off-hand remark at the bottom of p. 557 to the effect that unleashing demand for entertainment might solve the digital divide is likely not well taken, and in any case would have to be justified by much more data.

    2. The Economics of Internet Standards

    I found this to be the weakest chapter in the book.  To begin with, it is mostly descriptive and contains hardly any real economic analysis.  The account of the Cisco/Huawei battle over MPLS-TP standards (pp. 219-222) is accurate, but it would have been nice to know what the economic drivers were of that battle, e.g. size of the market, respective market shares, values of the respective products based on the respective standards, who stood to gain/lose what (and not just the manufacturers, but also the network operators), etc.

    But the descriptive part is also weak.  For example, the Introduction gives the misleading impression that IETF standards are the dominant element in the growth of the Internet, whereas it was the World Wide Web Consortium’s (W3C) HTML and successor standards that enabled the web and most of what we consider to be the Internet today.  The history on p. 213 omits contributions from other projects such as Open Systems Interconnection (OSI) and CYCLADES.

    Since the book is about economics, surely it should have mentioned on pp. 214 and 217 how the IETF has become increasingly influenced by dominant manufacturers, see pp. 148-152 of Powers, Shawn M., and Jablonski, Michael (2015) The Real Cyberwar: The Political Economy of Internet Freedom; as Noam puts the matter on p. 559 of the book: “The [Internet] technical specifications are set by the Steering Group of the Internet Engineering Task Force (IETF), a small group of 15 engineers, almost all employees of big companies around the world.”

    And surely it should have discussed in section 10.4 (p. 214) the economic reasons that led to greater adoption of TCP/IP over the competing OSI protocol, such as the lower implementation costs due to the lack of security of TCP/IP, the lack of non-ASCII support in the early IETF protocols, and the heavy subsidies provided by the US Defense Advanced Research Projects Agency (DARPA) and by the US National Science Foundation (NSF), which are well-known facts recounted on pp. 533-541 of the book.  In addition to not dealing with economic issues, section 10.4 is an overly simplified account of what really happened.

    Section 10.7 (p. 222) is, again, surprisingly devoid of any semblance of economic analysis.  Further, it perpetuates a self-serving, one-sided account of the 2012 World Conference on International Telecommunications (WCIT), without once citing scholarly writings on the issue, such as my book The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (2013).  The authors go so far as to cite the absurd US House proposition to the effect that the Internet should be “free of government control” without noting that what the US politicians meant is that it should be “free of foreign government control”, because of course the US has never had any intent of not subjecting the Internet to US laws and regulations.

    Indeed, at present, hardly anybody seriously questions the principle that offline law applies equally online.  One would expect a scholarly work to do better than to cite inane political slogans meant for domestic political purposes.  In particular when the citations are not used to underpin any semblance of economic analysis.

    3. The Economics of the Internet: An Overview

    This chapter provides a solid and thorough introduction to the basics of the economics of the Internet.

    4. The Industrial Organization of the Internet

    This chapter gives a good account of the industrial organization of the Internet, that is, how the industry is structured economically, how its components interact economically, and how that differs from other economic sectors.  As the authors correctly state (p. 24): “ … the tight combination of high fixed and low incremental cost, the pervasive presence of increasing returns, the rapidity and frequency of entry and exit, high rates of innovation, and economies of scale in consumption (positive network externalities) have created unique economic conditions …”.  The chapter explains well key features such as multi-sided markets (p. 31).  And it correctly points out (p. 25) that “while there is considerable evidence that technologically dynamic industries flourish in the absence of government intervention, there is also evidence of the complementarity of public policy and the performance of high-tech markets.”  That is explored in pp. 45 ff. and in subsequent chapters, albeit not always in great detail.
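    The logic of the multi-sided markets mentioned above can be sketched with a toy model (all demand and revenue parameters below are made up for illustration and do not come from the book): the platform sells access to users on one side and their attention to advertisers on the other, so the revenue-maximizing user-side price can be zero or even negative, i.e. a subsidy.

```python
def outcome(user_price):
    """Hypothetical linear model of a two-sided platform: users are
    price-sensitive, while advertiser revenue scales with the user base."""
    n_users = max(0.0, 1000 - 200 * user_price)  # demand falls as price rises
    ad_revenue = 6.0 * n_users                   # each user worth $6/yr to advertisers
    return n_users, user_price * n_users + ad_revenue

# Scan candidate user-side prices, including negative ones (a subsidy).
prices = [x / 10 for x in range(-20, 51)]
best_revenue, best_price = max((outcome(p)[1], p) for p in prices)

for p in (-0.5, 0.0, 2.0):
    n, rev = outcome(p)
    print(f"user price ${p:+.1f}: {n:5.0f} users, platform revenue ${rev:,.0f}")
print(f"revenue-maximizing user price: ${best_price:+.1f}")
```

    With these made-up numbers the optimal user-side price is negative: the platform does best by subsidizing users and recouping the money on the advertiser side, which is the economic logic behind “free” services that both books under review describe.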

    5. The Internet as a Complex Layered System

    This is an excellent chapter, one of the best in the book.  It explains how, because of the layered nature of the Internet, simple economic theories fail to capture its complexities.  As the chapter says (p. 68), the Internet is best viewed as a general purpose infrastructure.

    6. A Network Science Approach to the Internet

    This chapter provides a sound and comprehensive description of the Internet as a network, but it does not go beyond the description to provide analyses, for example regarding regulatory issues.  However, the numerous citations in the chapter do provide such analyses.

    7. Peer Production and Cooperation

    This chapter is also one of the best chapters in the book.  It provides an excellent description of how value is produced on the Internet, through decentralization, diverse motivations, and separation of governance and management.  It covers, and explains the differences between, peer production, crowd-sourcing, collaborative innovation, etc.  On p. 87 it provides an excellent quantitative description and analysis of specific key industry segments.  The key governance patterns in peer production are very well summarized on pp. 108-109 and 112-113.

    8. The Internet and Productivity

    This chapter actually contains a significant amount of quantitative data (which is not the case for most of the other chapters) and provides what I would consider to be an economic analysis of the issue, namely whether, and if so how, the Internet has contributed to productivity.  As the chapter points out, we lack sufficient data to analyse fully the impacts of the development of information and communication technologies since 2000, but this chapter does make an excellent contribution to that analysis.

    9. Cultural Economics and the Internet

    This is a good introduction to supply, demand, and markets for creative goods and services produced and/or distributed via the Internet.  The discussion of two-sided markets on p. 155 is excellent.  Unfortunately, however, the chapter is mostly a theoretical description: it does not refer to any actual data or provide any quantitative analysis of what is actually happening.

    10. A Political Economy Approach to the Internet

    This is another excellent chapter, one of the best in the book.  I noted one missing citation to a previous analysis of key issues from the political economics point of view: Powers, Shawn M., and Jablonski, Michael (2015) The Real Cyberwar: The Political Economy of Internet Freedom.  But the key issues are well discussed in the chapter:

    • The general trend towards monopolies and oligopolies of corporate ownership and control affecting the full range of Internet use and development (p. 164).
    • The specific role of Western countries and their militaries in supporting and directing specific trajectories (p. 165).
    • How the general trend towards privatization made it difficult to develop the Internet as a public information utility (p. 169).
    • The impact on labour, in particular shifting work to users (p. 170).
    • The rise and dominance of the surveillance economy (where users become the product because their data is valuable) (p. 175).

    1. Competition and Anti-Trust in Internet Markets

    This chapter provides a very good overview of the competition and anti-trust issues related to the Internet, but it would have been improved by referring to the excellent discussion in Noam’s chapter “From the Internet of Science to the Internet of Entertainment” and to recent academic literature on the topic.  Nevertheless, the description of key online market characteristics, including that they are often two-sided (p. 184), is excellent.  The description of the actual situation (including litigation) regarding search engines on p. 189 ff. is masterful: a superb example of the sort of real economic analysis that I would have liked to see in other chapters.

    The good discussion of network neutrality (p. 201) could have been improved by taking the next step and analysing the economic implications of regulating the Internet infrastructure as a public infrastructure and/or, for example, subjecting it to functional separation.

    1. The Economics of Copyright and the Internet

    This is an excellent introduction to the issues relating to copyright in the digital age.  It provides little data but that is because, as noted on pp. 238-241, there is a paucity of data for copyright, whereas there is more for patents.

    1. The Economics of Privacy, Data Protection and Surveillance

    As one would expect from its author, Ian Brown, this is an excellent discussion of the issues and, again, one of the best chapters in the book.  In particular, the chapter explains well and clearly (pp. 250 ff.) why market failures (e.g. externalities, information asymmetries, and anti-competitive market structures) might justify regulation (such as the European data privacy rules).

    1. Economics of Cybersecurity

    This chapter provides a very good overview of the economic issues related to cybersecurity, but, like most of the other chapters, it provides very little data and thus no detailed economic analysis.  It would have benefited from referring to the Internet Society’s 2016 Global Internet Report, which does provide data, and stresses the key market failures that result in the current lack of security of the Internet: information asymmetries (section 13.7.2 of the book) and externalities (section 13.7.3).

    However, the section on externalities fails to mention certain possible solutions, such as minimum security standards.  Minimum safety standards are imposed on many products, such as electrical appliances, automobiles, airplanes, pharmaceuticals, etc.  Thus it would have been appropriate for the book to discuss the economic implications of minimum security standards, and also the economic implications of Microsoft’s recent call for a so-called Geneva Digital Convention.

    1. Internet Architecture and Innovation in Applications

    This chapter provides a very good description, but it suffers from considering the Internet in isolation, without comparing it to other networks, in particular the fixed and mobile telephone networks.  It would have been good to see a discussion and comparison of the economic drivers of innovation (or lack of innovation) in the two networks, as well as a discussion of the economic role of the telephony signalling network, Signalling System 7 (SS7), which enabled implementation of the widely used, and economically important, Short Message Service (SMS).

    In that context, it is important to note that SS7 is, like the Internet, a connectionless packet-switched system.  So what distinguishes the two networks is more than technology: indeed, economic factors (such as how services are priced for end-users, interconnection regimes, etc.) surely play a role, and it would have been good if those had been explored.  In this context, see my paper “The Internet, its governance, and the multi-Stakeholder model”, Info, vol. 16, no. 2, March 2014.

    1. Organizational Innovations, ICTs and Knowledge Governance: The Case of Platforms

    As this excellent chapter, one of the best in the book, correctly notes, “platforms constitute a major organizational innovation” which has been “made possible by technological innovation”.

    As explained on pp. 338-339, platforms are one of the key components of the Internet economy, and this has recently been recognized by governments.  For example, the Legal Affairs Committee of the European Parliament adopted an Opinion in May 2017 that, among other provisions:

    Calls for an appropriate and proportionate regulatory framework that would guarantee responsibility, fairness, trust and transparency in platforms’ processes in order to avoid discrimination and arbitrariness towards business partners, consumers, users and workers in relation to, inter alia, access to the service, appropriate and fair referencing, search results, or the functioning of relevant application programming interfaces, on the basis of interoperability and compliance principles applicable to platforms.

    The topic is covered to some extent in a European Parliament Committee Report on online platforms and the digital single market (2016/2276(INI)), and by some provisions in French law.  Detailed references to the cited documents, and to other material relevant to platforms, are found in section 9 of my submission to the Working Group on Enhanced Cooperation.

    1. Interconnection in the Internet: Peering, Interoperability and Content Delivery

    This chapter provides a very good description of Internet interconnection, including a good discussion of the basic economic issues.  As do the other chapters, it suffers from a paucity of data, and does not discuss whether the current interconnection regime is working well, or whether it is facing economic issues.  The chapter does point out (p. 357) that “information about actual interconnection agreements … may help to understand how interconnection markets are changing …”, but fails to discuss how the unique barter structure of Internet interconnections, most of which are informal, zero-cost traffic sharing agreements, impedes the collection and publication of such information.

    The discussion on p. 346 would have benefited from an economic analysis of the advantages and disadvantages of treating the underlying Internet infrastructure as a basic public infrastructure (such as roads, water and electrical power distribution systems, etc.), and of the economic tradeoffs of regulating its interconnection.

    Section 16.5.1 would have benefited from a discussion of the economic drivers behind the discussions in the ITU that led to the adoption of ITU-T Recommendation D.50 and its Supplements, and of the economic issues arguing for and against implementation of the provisions of that Recommendation.

    1. Internet Business Strategies

    As this very good chapter explains, the Internet has had a dramatic impact on all types of businesses, and has given rise to “platformization”, that is, the use of platforms (see chapter 15 above) to conduct business.  Platforms benefit from network externalities and enable two-sided markets.  The chapter includes a detailed analysis (pp. 370-372) of the strategic properties of the Internet that can be used to facilitate and transform business, such as scalability, ubiquity, externalities, etc.  It also notes that the Internet has changed the role of customers and has both reduced and increased information asymmetries.  The chapter provides a very good taxonomy of Internet business models (pp. 372 ff.).

    1. The Economics of Internet Search

    The chapter contains a good history of search engines, and an excellent analysis of advertising linked to searches.  It provides theoretical models and explains the importance of two-sided markets in this context.  As the chapter correctly notes, additional research will require access to more data than are currently available.

    1. The Economics of Algorithmic Selection on the Internet

    As this chapter correctly notes (p. 395), “algorithms have come to shape our daily lives and realities.”  They have significant economic implications and raise “significant social risks such as manipulation and data bias, threats to privacy and violations of intellectual property rights”.  A good description of different types of algorithms and how they are used is given on p. 399.  Scale effects and concentration are discussed (p. 408) and the social risks are explained in detail on pp. 411 ff.:

    • Threats to basic rights and liberties.
    • Impacts on the mediation of reality.
    • Challenges to the future development of the human species.

    More specifically:

    • Manipulation
    • Diminishing variety
    • Constraints on freedom of expression
    • Threats to data protection and privacy
    • Social discrimination
    • Violation of intellectual property rights
    • Possible adaptations of the human brain
    • Uncertain effects on humans

    In this context, see also the numerous references in section 1 of my submission to the Working Group on Enhanced Cooperation.

    The chapter includes a good discussion of different governance models and their advantages/disadvantages, namely:

    • Laissez-faire markets
    • Self-organization by business
    • Self-regulation by industry
    • State regulation

    1. Online Advertising Economics

    This chapter provides a good history of what some have referred to as the Internet’s original sin, namely the advent of online advertising as the main revenue source for many Internet businesses.  It explains how the Internet can, and does, improve the efficiency of advertising by targeting (pp. 430 ff.) and it includes a detailed analysis of advertising in relation to search engines (pp. 435 ff.).

    1. Online News

    As the chapter correctly notes, this is an evolving area, so the chapter mostly consists of a narrative history.  The chapter’s conclusion starts by saying that “the Internet has brought growth and dynamism to the news industry”, but goes on to note, correctly, that “the financial outlook for news providers, old or new, is bleak” and that, thus far, nobody has found a viable business model to fund the online news business.  It is a pity that this chapter does not cite McChesney’s detailed analysis of this issue and discuss his suggestions for addressing it.

    1. The Economics of Online Video Entertainment

    This chapter provides the history of that segment of the Internet industry and includes a valuable comparison and analysis of the differences between online and offline entertainment media (pp. 462-464).

    1. Business Strategies and Revenue Models for Converged Video Services

    This chapter provides a clear and comprehensive description of how an effect of convergence “is the blurring of lines between formerly separated media platforms such as over-the-air broadcasting, cable TV, and streamed media.”  The chapter describes ten strategies and six revenue models that have been used to cope with these changes.

    1. The Economics of Virtual Worlds

    This chapter provides a good historical account of the evolution of the internal reward system of games, which went from virtual objects that players could obtain by solving puzzles (or completing other tasks), to virtual money that could be acquired only within the game, to virtual money that could be acquired with real-world money, to large professional factories that produce and sell objects to World of Warcraft players in exchange for real-world money.  The chapter explores the legal and economic issues arising out of these situations (pp. 503-504) and gives a good overview of the research in virtual economies.

    1. Economics of Big Data

    This chapter correctly notes (p. 512) that big data is “a field with more questions than answers”.  Thus, logically, the chapter is mostly descriptive.  It includes a good account of two-sided markets (p. 519), and correctly notes (p. 521) that “data governance should not be construed merely as an economic matter but that it should also encompass a social perspective”, a position with which I wholeheartedly agree.  As the chapter says (p. 522), “there are some areas affected by big data where public policies and regulations do exist”, in particular regarding:

    • Privacy
    • Data ownership
    • Open data

    As the chapter says (p. 522), most evidence available today suggests that markets are not “responding rapidly to concerns of users about the (mis)use of their personal information”.  For additional discussion, with extensive references, see section 1 of my submission to the Working Group on Enhanced Cooperation.

    1. The Evolution of the Internet: A Socioeconomic Account

    This is a very weak chapter.  Its opening paragraph fails to consider the historical context of the development of the Internet, or its consequences.  Its second paragraph fails to consider the overt influence of the US government on the evolution of the Internet.  Section 26.3 fails to cite one of the most comprehensive works on the topic (the relation between AT&T and the development of the internet), namely Schiller, Dan (2014) Digital Depression: Information Technology and Economic Crisis, University of Illinois Press.  The discussion on p. 536 fails to even mention the Open Systems Interconnection (OSI) initiative, yet that initiative undoubtedly affected the development of the Internet, not just by providing a model for how not to do things (too complex, too slow), but also by providing some basic technology that is still used to this day, such as X.509 certificates.

    Section 26.6, on how market forces affect the Internet, seems oblivious to the rising evidence that dominant market power, not competition, is shaping the future of the Internet, which appears surprising in light of the good chapter in the book on that very topic: “Competition and anti-trust in Internet markets.”  Page 547 appears to ignore the rising vertical integration of many Internet services, even though that trend is well discussed in Noam’s excellent chapter “From the Internet of Science to the Internet of Entertainment.”

    The discussion of the role of government on p. 548 is surprisingly lacunary, given the rich literature on the topic in general, and specific government actions or proposed actions regarding topics such as freedom of speech, privacy, data protection, encryption, security, etc. (see for example my submission to the Working Group on Enhanced Cooperation).

    This chapter should have started with the observation that the Internet was not conceived as a public network (p. 558) and built on that observation, explaining the socioeconomic factors that shaped its transformation from a closed military/academic network into a public network and into a basic infrastructure that now underpins most economic activities.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2o Review Digital Studies magazine.

    Back to the essay

  • Richard Hill — The Root Causes of Internet Fragmentation

    Richard Hill — The Root Causes of Internet Fragmentation


    a review of Scott Malcomson, Splinternet: How Geopolitics and Commerce Are Fragmenting the World Wide Web
      (OR Books, 2016)
    by Richard Hill
    ~

    The implicit premise of this valuable book is that “we study the past to understand the present; we understand the present to guide the future.” In that light, the book makes a valuable contribution by offering a sound and detailed historical survey of aspects of the Internet which are neither well-known nor easily accessible outside the realms of dedicated internet research. However, as explained below, the author has not covered some important aspects of the past, and thus the work is incomplete as a guide to the future. This should not be taken as criticism, but as a call for the author, or other scholars, to complete the work.

    The book starts by describing how modern computers and computer networks evolved from the industrialization of war and in particular due to the advantages that could be gained by automating the complex mathematical calculations required for ballistics on the one hand (computers) and by speeding up communications between elements of armed forces on the other hand (networks). Given the effectiveness of ICTs for war, belligerents before, during, and after World War II heavily funded research and development of those technologies in the military context, even if much of the research was outsourced to the private sector.

    Malcomson documents how the early founders of what we now call computer science were based in the USA and were closely associated with US military efforts: “the development of digital computing was principally an unintended byproduct of efforts to improve the accuracy of gunfire against moving targets” (49).

    Chapter 1 ends with an account of how Cold War military concerns (especially so-called mutual assured destruction by nuclear weapons) led to the development of packet switched networks in order to interconnect powerful computers: ARPANET, which evolved to become the Internet.

    Chapter 2 explores a different, but equally important, facet of Internet history: the influence of the anti-authoritarian hacker culture, which started with early computer enthusiasts and fully developed in the 1970s and 1980s, in particular on the West Coast (most famously documented in Steven Levy’s 1984 book Hackers: Heroes of the Computer Revolution). The book explains the origins of the venture capitalism that largely drove the development of ICTs (including the Internet) as private risk capital replaced state funding for research and development in ICTs.

    The book documents the development of the geek culture’s view that computers and networks should be “an instrument of personal liberation and create a frictionless, alternative world free from the oppressing state” (101). Malcomson explains how this led to the belief that the Internet should not be subject to normal laws, culminating in Barlow’s well-known utopian “Declaration of the Independence of Cyberspace,” and explains how such ideas could not, and did not, survive. The chapter concludes: “The subculture had lost the battle. Governments and large corporations would now shape the Internet” (137). But, as the book notes later (171), it was in fact primarily one government, the US government, that shaped the Internet. And, as Shawn Powers and Michael Jablonski explain in The Real Cyberwar, the US used its influence to further its own geopolitical and global economic goals.

    Chapter 3 explores the effects of globalization, the weakening of American power, the rise of competing powers, and the resulting tensions regarding US dominance of ICTs in general and the Internet in particular. It also covers the rise of policing of the Internet induced by fear of “terrorists, pedophiles, drug dealers, and money launderers” (153).

    We have come full circle: a technology initially designed for war is now once again used by the military to achieve its aims, the so-called “war on terror.” So there is a tension between three different forces, all of which were fundamental to the development of ICTs (including the Internet): the government, military, and security apparatus; more-or-less anarchic technologists; and dominant for-profit companies (which may have started small, but can quickly become very large and dominant – at least for a few years until they are displaced by newcomers).

    As the subtitle indicates, the book is mostly about the World Wide Web, so some of the other aspects of the history of the Internet are not covered. For example, there is no mention of the very significant commercial and political battles that took place between proponents of the Internet and proponents of the Open Systems Interconnection (OSI) suite of standards; this is a pity, because the residual effects of those battles are still being felt today. Nor does the book explore the reasons for and effects of the transition of the management of the Internet from the US Department of Defense to the US Department of Commerce (even if it correctly notes that the chief interest of the Clinton administration “was in a thriving Internet that would lead to new industries and economic growth” [133]).

    Malcomson explains well how there were four groups competing for influence in the late 1990s: technologists, the private sector, the US government, and other governments, and notes how the US government was in an impossible situation, since it could not credibly argue simultaneously that other governments (or intergovernmental organizations such as the ITU) should not influence the Internet while it itself formally supervised the management and administration of the domain name system (DNS). However, he does not explain the origins of the DNS or its subsequent development, nor how its management and administration were unilaterally hijacked by the US, leading to much of the international tension that has bedeviled discussions on Internet governance since 1998.

    Regarding the World Wide Web, the book does not discuss how the end-to-end principle and its premise of secure end devices resulted in unforeseen consequences (such as spam, cybercrime, and cyberattacks) when unsecure personal computers became the dominant device connected via the Internet. Nor does it discuss how the lack of billing mechanisms in the Internet protocol suite has led to the rise of advertising as the sole revenue generation mechanism and the consequences of that development.

    The book analyses the splintering (elsewhere called fragmentation) brought about by the widespread adoption of proprietary operating systems and their associated “apps”, and by mass surveillance. As Malcomson puts the matter, mass surveillance “was fatal to the universality of the web, because major web companies were and are global but cannot be both global and subject to the intricate agendas of US intelligence and defense institutions, whose purpose is to defend national interests, not universal interests” (160).

    However, the book does not discuss in any depth other sources of splintering, such as calls by some governments for national control over some portions of the Internet, or violations of network neutrality, or zero rating. Yet the book notes that the topic of network neutrality had been raised by Vice President Gore as early as 1993: “Without provisions for open access, the companies that own the networks could use their control of the networks to ensure that their customers only have access to their programming. We have already seen cases where cable company owners have used their monopoly control of their networks to exclude programming that competes with their own. Our legislation will contain strong safeguards against such behavior” (124). As we know, the laws called for in the last sentence were never implemented, and it was only in 2015 that the Federal Communications Commission imposed network neutrality. Malcomson could have used his deep knowledge of the history of the Internet to explain why Gore’s vision was not realized, no doubt because of the tensions mentioned above between the groups competing for influence.

    The book concludes that the Internet will increasingly cease to be “an entirely cross border enterprise” (190), but that the benefits of interoperability will result in a global infrastructure being preserved, so that “a fragmented Internet will retain aspects of universality” (197).

    As mentioned above, the book provides an excellent account of much of the historical origins of the World Wide Web and the disparate forces involved in its creation. The book would be even more valuable if it built on that account to analyze more deeply and put into context trends (which it does mention) other than splintering, such as the growing conflict between Apple, Google et al. who want no restrictions on data collection and encryption (so that they can continue to collect and monetize data), governments who want no encryption so they can censor and/or surveil, and governments who recognize that privacy is a human right, that privacy rules should be strengthened, and that end-users should have full ownership and control of their data.

    Readers keen to understand the negative economic impacts of the Internet should read Dan Schiller’s Digital Depression, and readers keen to understand the negative impacts of the Internet on democracy should read Robert McChesney’s Digital Disconnect. This might lead some to believe that we have wound up exactly where we didn’t want to be: “government-driven, corporate-interest driven, profit-driven, monopoly-driven.” The citation (from Lyman Chapin, one of the founders of the Internet Society), found on p. 132 of Malcomson’s book, dates back to 1991, and it reflects what the technologists of the time wanted to avoid.

    To conclude, it is worth noting the quotation on page 57 from Norbert Wiener: “Just as the skilled carpenter, the skilled mechanic, the skilled dressmaker have in some degree survived the first industrial revolution, so the skilled scientist and the skilled administrator might survive the second [the cybernetic revolution]. However, taking the second revolution as accomplished, the average human of mediocre attainments has nothing to sell that is worth anyone’s money to buy. The answer, of course, is to have a society based on human values other than buying and selling.”

    Wiener thus foresaw the current fundamental trends and dilemmas that have been well documented and analyzed by Robert McChesney and John Nichols in their new book People Get Ready: The Fight Against a Jobless Economy and a Citizenless Democracy (Nation Books, 2016).

    There can be no doubt that the current trends are largely conditioned by the early history of ICTs (and in particular of the Internet) and its roots in military applications. Thus Splinternet is a valuable source of material that should be carefully considered by all who are involved in Internet policy matters.
    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2o Review Digital Studies magazine.

    Back to the essay

  • The Human Condition and The Black Box Society

    The Human Condition and The Black Box Society

    a review of Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015)
    by Nicole Dewandre
    ~

    1. Introduction

    This review is informed by its author’s specific standpoint: first, a lifelong experience in a policy-making environment, i.e. the European Commission; and, second, a passion for the work of Hannah Arendt and the conviction that she has a great deal to offer to politics and policy-making in this emerging hyperconnected era. As advisor for societal issues at DG Connect, the department of the European Commission in charge of ICT policy at EU level, I have had the privilege of convening the Onlife Initiative, which explored the consequences of the changes brought about by the deployment of ICTs on the public space and on the expectations toward policy-making. This collective thought exercise, which took place in 2012-2013, was strongly inspired by Hannah Arendt’s 1958 book The Human Condition.

    This is the background against which I read The Black Box Society: The Secret Algorithms That Control Money and Information by Frank Pasquale (references to which are indicated here parenthetically by page number). Two of the meanings of “black box”—a device that keeps track of everything during a flight, on the one hand, and the node of a system that prevents an observer from identifying the link(s) between input and output, on the other hand—serve as apt metaphors for today’s emerging Big Data environment.

    Pasquale digs deep into three sectors that are at the root of what he calls the black box society: reputation (how we are rated and ranked), search (how we use ratings and rankings to organize the world), and finance (money and its derivatives, whose flows depend crucially on forms of reputation and search). Algorithms and Big Data have permeated these three activities to a point where disconnection with human judgment or control can transmogrify them into blind zombies, opening new risks, affordances and opportunities. We are far from the ideal representation of algorithms as support for decision-making. In these three areas, decision-making has been taken over by algorithms, and there is no “invisible hand” ensuring that profit-driven corporate strategies will deliver fairness or improve the quality of life.

    The EU and the US contexts are both distinct and similar. In this review, I shall not comment on Pasquale’s specific policy recommendations in detail, even if, as a European, I appreciate the numerous references to European law and policy that Pasquale commends as good practices (ranging from digital competition law, to welfare state provision, to privacy policies). I shall instead comment from a meta-perspective, that of challenging the worldview that implicitly undergirds policy-making on both sides of the Atlantic.

    2. A Meta-perspective on The Black Box Society

    The meta-perspective as I see it is itself twofold: (i) we are stuck with Modern referential frameworks, which hinder our ability to attend to changing human needs, desires and expectations in this emerging hyperconnected era, and (ii) the personification of corporations in policymaking reveals shortcomings in the current representation of agents as interest-led beings.

    a) Game over for Modernity!

    As stated by the Onlife Initiative in its “Onlife Manifesto,” through its expression “Game over for Modernity?”, it is time for politics and policy-making to leave Modernity behind. That does not mean going back to the Middle Ages, as feared by some, but instead stepping firmly into this new era that is coming to us. I believe with Genevieve Bell and Paul Dourish that it is more effective to consider that we are now entering into the ubiquitous computing era instead of looking at it as if it were approaching fast.[1] With the miniaturisation of devices and sensors, with mobile access to broadband internet, and with the generalized connectivity of objects as well as of people, not only do we witness an expansion of the online world but, more fundamentally, a collapse of the distinction between the online and the offline worlds, and therefore a radically new socio-technico-natural compound. We live in an environment which is increasingly reactive and talkative as a result of the intricate mix between offline and online universes. Human interactions are also deeply affected by this new socio-technico-natural compound, as they are or will soon be “sticky”, i.e. they leave a material trace by default, and this for the first time in history. These new affordances and constraints profoundly destabilize our Modern conceptual frameworks, which rely on distinctions that are blurring, such as the one between the real and the virtual, or the ones between humans, artefacts and nature, understood with mental categories dating back to the Enlightenment and before. The very expression “post-Modern” is no longer accurate, or is too shy, as it continues to position Modernity as its reference point. It is time to give a proper name to this new era we are stepping into, and hyperconnectivity may be such a name.

    Policy-making, however, continues to rely heavily on Modern conceptual frameworks, and this not only from the policy-makers’ point of view but more widely among all those engaging in the public debate. There are many structuring features of the Modern conceptual frameworks, and it certainly goes beyond the scope of this review to address them thoroughly. However, when it comes to addressing the challenges described by The Black Box Society, it is important to mention the epistemological stance that has been spelled out brilliantly by Susan H. Williams in her Truth, Autonomy, and Speech: Feminist Theory and the First Amendment: “the connection forged in Cartesianism between knowledge and power”[2]. Before encountering Susan Williams’s work, I came to refer to this stance less elegantly with the expression “omniscience-omnipotence utopia”[3]. Williams writes that “this epistemological stance has come to be so widely accepted and so much a part of many of our social institutions that it is almost invisible to us” and that “as a result, lawyers and judges operate largely unself-consciously with this epistemology”[4]. To Williams’s “lawyers and judges”, we should add policy-makers and stakeholders. This Cartesian epistemological stance grounds the conviction that the world can be elucidated in causal terms, that knowledge is about prediction and control, and that there is no limit to what men can achieve provided they have the will and the knowledge. In this Modern worldview, men are considered rational subjects and their freedom is synonymous with control and autonomy. The fact that we have a limited lifetime and attention span is out of the picture, as is humans’ inherent relationality. Issues are framed as if transparency and control were all that men need to make their own way.

    1) One-Way Mirror or Social Hypergravity?

    Frank Pasquale is well aware of, and has contributed to, the emerging critique of transparency, and he states clearly that “transparency is not just an end in itself” (8). However, there are traces of the Modern reliance on transparency as a regulative ideal in The Black Box Society. One of them appears when he mobilizes the one-way mirror metaphor. He writes:

    We do not live in a peaceable kingdom of private walled gardens; the contemporary world more closely resembles a one-way mirror. Important corporate actors have unprecedented knowledge of the minutiae of our daily lives, while we know little to nothing about how they use this knowledge to influence the important decisions that we—and they—make. (9)

    I refrain from considering the Big Data environment as an environment that “makes sense” on its own, provided someone has access to as much data as possible. In other words, the algorithms crawling the data can hardly be compared to a “super-spy” providing the data controller with absolute knowledge.

    Another shortcoming of the one-way mirror metaphor is that the implicit corrective is a transparent pane of glass, so that the watched can watch the watchers. This reliance on transparency is misleading. I prefer another metaphor that, in my view, better characterises the Big Data environment within a hyperconnected conceptual framework. As alluded to earlier, in contradistinction to the previous centuries and even millennia, human interactions will, by default, be “sticky”, i.e. leave a trace. Evanescence of interactions, which used to be the default for millennia, will instead require active measures to be ensured. So, my metaphor for capturing the radicality and the scope of this change is a change of “social atmosphere” or “social gravity”, as it were. For centuries, we have slowly developed social skills, behaviors and regulations, i.e. a whole ecosystem, to strike a balance between accountability and freedom, in a world where “verba volant, scripta manent”[5], i.e. where human interactions took place in an “atmosphere” with a 1g “social gravity”, where they were evanescent by default and where action had to be taken to register them. Now, with all interactions leaving a trace by default, and each of us going around with his, her or its digital shadow, we are drifting fast towards an era where the “social atmosphere” will be of heavier gravity, say “10g”. The challenge is huge and will require a lot of collective learning and adaptation to develop the literacy and regulatory frameworks that will recreate and sustain the balance between accountability and freedom for all agents, humans and corporations alike.

    The heaviness of this new data density stands in between, or is orthogonal to, the two phantasms of the bright emancipatory promises of Big Data, on the one hand, and the frightening fears of Big Brother, on the other. Because of this social hypergravity, we, individually and collectively, have indeed to be cautious about the use of Big Data, as we have to be cautious when handling dangerous or unknown substances. This heavier atmosphere, as it were, opens up increased possibilities of hurting others, notably through harassment, bullying and false rumors. The advent of Big Data does not, by itself, provide a “license to fool,” nor does it free agents from the need to behave and avoid harming others. Exploiting asymmetries and new affordances to fool or to hurt others is no more acceptable behavior than it was before the advent of Big Data. Hence, although from a different metaphorical standpoint, I support Pasquale’s recommendations to pay increased attention to the new ways in which current and emergent practices relying on algorithms in reputation, search and finance may be harmful, misleading or deceptive.

    2) The Politics of Transparency or the Exhaustive Labor of Watchdogging?

    Another “leftover” of the Modern conceptual framework that surfaces in The Black Box Society is the reliance on watchdogging for ensuring proper behavior by corporate agents. Relying on watchdogging nurtures the idea that it is all right to behave badly, as long as one is not seen doing so. This reinforces the idea that the qualification of an act depends on whether it is unveiled, as if, as long as it goes unnoticed, it were all right. This puts the entire burden on the watchers and no burden whatsoever on the doers. It positions a sort of symbolic face-to-face between supposedly mindless firms, who are enabled to pursue their careless strategies as long as they are not put under the light, and people who are expected to spend all their time, attention and energy raising indignation against wrong behaviors. Far from empowering the watchers, this framing enslaves them to waste time monitoring actors who should be acting in much better ways already. Indeed, if unacceptable behavior is unveiled, it raises outrage, but outrage is far from bringing a solution per se. If, instead, proper behaviors are witnessed, then the watchers are bound to praise the doers. In both cases, watchers are stuck in a passive, reactive and specular posture, while all the glory or the shame is on the side of the doers. I do not deny the need to have watchers, but I warn against the temptation of relying excessively on the divide between doers and watchers to police behaviors, without engaging collectively in the formulation of what proper and inappropriate behaviors are. And there is no ready-made consensus about this, so it requires informed exchange of views and hard collective work. As Pasquale explains in an interview where he defends interpretive approaches to social sciences against quantitative ones:

    Interpretive social scientists try to explain events as a text to be clarified, debated, argued about. They do not aspire to model our understanding of people on our understanding of atoms or molecules. The human sciences are not natural sciences. Critical moral questions can’t be settled via quantification, however refined “cost benefit analysis” and other political calculi become. Sometimes the best interpretive social science leads not to consensus, but to ever sharper disagreement about the nature of the phenomena it describes and evaluates. That’s a feature, not a bug, of the method: rather than trying to bury normative differences in jargon, it surfaces them.

    The excessive reliance on watchdogging enslaves the citizenry to serve as mere “watchdogs” of corporations and government, and prevents any constructive cooperation with corporations and governments. It drains citizens’ energy for pursuing their own goals and making their own positive contributions to the world, notably by engaging in the collective work required to outline, nurture and maintain the shaping of what accounts for appropriate behaviours.

    As a matter of fact, watchdogging would be nothing more than an exhausting laboring activity.

    b) The Personification of Corporations

    One of the red threads unifying The Black Box Society’s treatment of numerous technical subjects is unveiling the oddness of the comparative postures and status of corporations, on the one hand, and people, on the other. As nicely put by Pasquale, “corporate secrecy expands as the privacy of human beings contracts” (26), and, in the meantime, the divide between government and business is narrowing (206). Pasquale also points to the fact that, at least since 2001, people have been routinely scrutinized by public agencies to deter the threatening ones from hurting others, while the threats caused by corporate wrongdoings in 2008 gave rise to much less attention and effort to hold corporations to account. He also notes that “at present, corporations and government have united to focus on the citizenry. But why not set government (and its contractors) to work on corporate wrongdoings?” (183) It is my view that these oddities go along with what I would call a “sensitive inversion”. Corporations, which are functional beings, are granted sensitivity in policy-making imaginaries and narratives as if they were human beings, while men and women, who are sensitive beings, are approached in policy-making as if they were functional beings, i.e. consumers, job-holders, investors, bearers of fundamental rights, but never personae per se. The granting of sensitivity to corporations goes beyond the legal aspect of their personhood. It entails that corporations are the ones whose so-called needs are taken care of by policy-makers, and the ones who are really addressed, qua personae. Policies are designed with business needs in mind, to foster their competitiveness or their “fitness”. People are only indirect or secondary beneficiaries of these policies.

    The inversion of sensitivity might not be a problem per se, if it opened pragmatically to an effective way to design and implement policies which bear indeed positive effects for men and women in the end. But Pasquale provides ample evidence showing that this is not the case, at least in the three sectors he has looked at more closely, and certainly not in finance.

    Pasquale’s critique of the hypostatization of corporations and the reduction of humans has many theoretical antecedents. Looking at it from the perspective of Hannah Arendt’s The Human Condition illuminates the shortcomings and risks associated with considering corporations as agents in the public space, and helps in understanding the consequences of granting them sensitivity or, as it were, human rights. Action is the activity that flows from the fact that men and women are plural and interact with each other: “the human condition of action is plurality”.[6] Plurality is itself a ternary concept made of equality, uniqueness and relationality. First, equality is what we grant to each other when entering into a political relationship. Second, uniqueness refers to the fact that what makes each human a human qua human is precisely that who s/he is is unique. If we treat other humans as interchangeable entities or as characterised by their attributes or qualities, i.e., as a what, we do not treat them as human qua human, but as objects. Last, and by no means least, the third component of plurality is the relational and dynamic nature of identity. For Arendt, the disclosure of the who “can almost never be achieved as a wilful purpose, as though one possessed and could dispose of this ‘who’ in the same manner he has and can dispose of his qualities”[7]. The who appears unmistakably to others, but remains somewhat hidden from the self. It is this relational and revelatory character of identity that confers on speech and action such a critical role and that articulates action with identity and freedom. Indeed, for entities for which the who is partly out of reach and matters, appearing in front of others, notably with speech and action, is a necessary condition of revealing that identity:

    Action and speech are so closely related because the primordial and specifically human act must at the same time contain the answer to the question asked of every newcomer: who are you? In acting and speaking, men show who they are, they appear. Revelatory quality of speech and action comes to the fore where people are with others and neither for, nor against them, that is in sheer togetherness.[8]

    So, in this sense, the public space is the arena where whos appear to other whos, personae to other personae.

    For Arendt, the essence of politics is freedom, and it is grounded in action, not in labour and work. The public space is where agents coexist and experience their plurality, i.e. the fact that they are equal, unique and relational. So, it is much more than the usual American pluralist (i.e., early Dahl-ian) conception of a space where agents worry exclusively about their own needs by bargaining aggressively. In Arendt’s perspective, the public space is where agents, self-aware of their plural character, interact with each other once their basic needs have been taken care of in the private sphere. As highlighted by Seyla Benhabib in The Reluctant Modernism of Hannah Arendt, “we not only owe to Hannah Arendt’s political philosophy the recovery of the public as a central category for all democratic-liberal politics; we are also indebted to her for the insight that the public and the private are interdependent”.[9] One could not appear in public if s/he or it did not also have a private place, notably to attend to his, her or its basic needs for existence. In Arendtian terms, interactions in the public space take place between agents who are beyond their satiety threshold. Acknowledging satiety is a precondition for engaging with others in a way that is not driven by one’s own interest, but rather by the desire to act together with others—”in sheer togetherness”—and be acknowledged as who one is. If an agent perceives him-, her- or itself and behaves only as a profit-maximiser or as an interest-led being, i.e. if s/he or it has no sense of satiety and no self-awareness of the relational and revelatory character of his, her or its identity, then s/he or it cannot be a “who” or an agent in political terms, and therefore cannot answer for him-, her- or itself. It simply does not deserve, and therefore should not be granted, the status of a persona in the public space.

    It is easy to imagine that there can indeed be no freedom below satiety, and that “sheer togetherness” would just be impossible among agents below their satiety level or deprived of having one. This is, however, the situation we are in, symbolically, when we grant corporations the status of persona while considering it efficient and appropriate that they care only for profit-maximisation. For a business, making profit is a condition to stay alive, as, for humans, eating is a condition to stay alive. However, in the name of the need to compete on global markets, to foster growth and to provide jobs, policy-makers embrace and legitimize an approach to businesses as profit-maximisers, despite the fact that this is a reductionist caricature of what is allowed by the legal framework of company law[10]. So, the condition for businesses to deserve the status of persona in the public space is, no less than for men and women, to attend to their whoness and honour their identity, by staying away from behaving according to their narrowly defined interests. It also means caring for the world as much as, if not more than, for themselves.

    This resonates meaningfully with the quotation from Heraclitus that serves as the epigraph for The Black Box Society: “There is one world in common for those who are awake, but when men are asleep each turns away into a world of his own”. Reading Arendt with Heraclitus’s categories of sleep and wakefulness, one might consider that totalitarianism arises—or is not far away—when human beings are awake in private, but asleep in public, in the sense that they silence their humanness or that their humanness is silenced by others when appearing in public. In this perspective, the merging of markets and politics—as highlighted by Pasquale—could be seen as a generalized sleep in the public space of human beings and corporations, qua personae, while all awakened activities are taking place in the private, exclusively driven by their needs and interests.

    In other words, some might find a book like The Black Box Society, which offers a bold reform agenda for numerous agencies, to be too idealistic. But in my view, it falls short of being idealistic enough: there is a missing normative core to the proposals in the book, which can be supplied by democratic, political, and particularly Arendtian theory. If a populace cannot accept any prevailing level of goods and services as satiating its needs, and if it distorts the revelatory character of identity into an endless pursuit of limitless growth, it cannot have the proper lens and approach to formulate what it takes to enable the fairness and fair play described in The Black Box Society.

    3. Stepping into Hyperconnectivity

    1) Agents as Relational Selves

    A central feature of the Modern conceptual framework underlying policymaking is the figure of the rational subject as the political proxy of humanness. I claim that this figure is no longer effective in ensuring a fair and flourishing life for men and women in this emerging hyperconnected era, and that we should adopt instead the figure of a “relational self” as it emerges from the Arendtian concept of plurality.

    The concept of the rational subject was forged to erect Man over nature. Nowadays, the problem is not so much to distinguish men from nature, but rather to distinguish men—and women—from artefacts. Robots come close to humans and even outperform them, if we continue to define humans as rational subjects. The figure of the rational subject is torn apart between “truncated gods”—when Reason is considered as what eventually brings an overall lucidity—on the one hand, and “smart artefacts”—when reason is nothing more than logical steps or algorithms—on the other hand. Men and women are neither “Deep Blue” nor mere automatons. In between these two phantasms, the humanness of men and women is smashed. This is indeed what happens in the Kafkaesque and ridiculous situations where a thoughtless and mindless approach to Big Data is implemented, and this from both stances, as workers and as consumers. As far as the working environment is concerned, “call centers are the ultimate embodiment of the panoptic workspace. There, workers are monitored all the time” (35). Indeed, this type of overtly monitored working environment is nothing else than a materialisation of the panopticon. As consumers, we all see what Pasquale means when he writes that “far more [of us] don’t even try to engage, given the demoralizing experience of interacting with cyborgish amalgams of drop-down menus, phone trees, and call center staff”. In fact, this mindless use of automation is only the latest version of the way we have been thinking for the last decades, i.e. that progress means rationalisation and de-humanisation across the board. The real culprits are not algorithms themselves, but the careless and automaton-like human implementers and managers who act along a conceptual framework according to which rationalisation and control are all that matter. More than the technologies, it is the belief that management is about control and monitoring that makes these environments properly in-human. So, staying stuck with the rational subject as a proxy for humanness either ends up smashing our humanness as workers and consumers or, at best, leads to absurd situations where to be free would mean spending all our time checking that we are not controlled.

    As a result, keeping the rational subject as the central representation of humanness will be increasingly misleading, politically speaking. It fails to provide a compass for treating each other fairly and making appropriate decisions and judgments, in order to impact positively and meaningfully on human lives.

    With her concept of plurality, Arendt offers an alternative to the rational subject for defining humanness: that of the relational self. The relational self, as it emerges from the Arendtian concept of plurality[11], is the man, woman or agent self-aware of his, her or its plurality, i.e. of the facts that (i) he, she or it is equal to his, her or its fellows; (ii) she, he or it is unique, as all other fellows are unique; and (iii) his, her or its identity has a revelatory character, requiring him, her or it to appear among others in order to reveal itself through speech and action. This figure of the relational self accounts for what is essential to protect politically in our humanness in a hyperconnected era, i.e. that we are truly interdependent through the mutual recognition that we grant to each other, and that our humanity is precisely grounded in that mutual recognition, much more than in any “objective” difference or criterion that would allow an expert system to sort human from non-human entities.

    The relational self, as arising from Arendt’s plurality, combines relationality and freedom. It resonates deeply with the vision proposed by Susan H. Williams, i.e. the relational model of truth and the narrative model of autonomy, which overcome the shortcomings of the Cartesian and liberal approaches to truth and autonomy without throwing the baby, i.e. the notion of agency and responsibility, out with the bathwater, as the social constructionist and feminist critiques of the conceptions of truth and autonomy may be understood as doing.[12]

    Adopting the relational self as the canonical figure of humanness, instead of the rational subject, brings to light the direct relationship between the quality of interactions, on the one hand, and the quality of life, on the other. In contradistinction to transparency and control, which are meant to empower non-relational individuals, relational selves are self-aware that they are in need of respect and fair treatment from others. This also makes room for vulnerability, notably the vulnerability of our attentional spheres, and for saturation, i.e. the fact that we have a limited attention span and are far from making a “free choice” when clicking on “I have read and accept the Terms & Conditions”. Instead of transparency and control as policy ends in themselves, the quality of life of relational selves, and the robustness of the world they construct together and that lies between them, depend critically on being treated fairly and not being fooled.

    It is interesting to note that the word “trust” blooms in policy documents, showing that consciousness of the fact that we rely on each other is building up. Referring to trust as if it needed to be built is, however, a signature of the fact that we are in transition from Modernity to hyperconnectivity, and not yet fully arrived. By approaching trust as something that can be materialized, we look at it with Modern eyes. As “consent is the universal solvent” (35) of control, transparency-and-control is the universal solvent of trust. Indeed, we know that transparency and control nurture suspicion and distrust. And that is precisely why they have been adopted as Modern regulatory ideals. Arendt writes: “After this deception [that we were fooled by our senses], suspicions began to haunt Modern man from all sides”[13]. So, indeed, Modern conceptual frameworks rely heavily on suspicion, as a sort of transposition into the realm of human affairs of the systematic-doubt approach to scientific enquiry. Frank Pasquale quotes the moral philosopher Iris Murdoch as having said: “Man is a creature who makes pictures of himself and then comes to resemble the picture” (89). If she is right—and I am afraid she is—it is of utmost importance to shift away from picturing ourselves as rational subjects and to embrace instead the figure of relational selves, if only to preserve trust as a general baseline in human affairs. Indeed, if it came true that trust could only be the outcome of a generalized suspicion, then indeed we would be lost.

    Besides grounding the notion of the relational self, the Arendtian concept of plurality allows accounting for interactions among humans and among other plural agents that go beyond fulfilling basic needs (necessity) or achieving goals (instrumentality), and that lead to the revelation of their identities while giving rise to unpredictable outcomes. As such, plurality enriches the basket of representations for interactions in policy-making. It brings, as it were, a post-Modern (or dare I say hyperconnected) view of interactions. The Modern conceptual basket for representations of interactions includes, as its central piece, causality. In Modern terms, the notion of equilibrium is approached through a mutual neutralization of forces, either with the invisible hand metaphor, or with Montesquieu’s division of powers. The Modern approach to interactions is either anchored in the representation of one pole being active or dominating (the subject) and the other pole being inert or dominated (nature, object, servant), or else anchored in the notion of conflicting interests or dilemmas. In this framework, the notion of equality is straitjacketed and cannot be embodied. As we have seen, this Modern straitjacket leads to approaching freedom through control and autonomy, constrained by the fact that Man is, unfortunately, not alone. Hence, in the Modern approach to humanness and freedom, plurality is a constraint, not a condition, while for relational selves, freedom is grounded in plurality.

    2) From Watchdogging to Accountability and Intelligibility

    If the quest for transparency and control is as illusory and worthless for relational selves as it was instrumental for rational subjects, this does not mean that anything goes. Interactions among plural agents can only take place satisfactorily if basic and important conditions are met. Relational selves are in high need of fairness towards themselves and accountability from others. Avoiding deception and humiliation[14] is certainly a basic condition for enabling decency in the public space.

    Once equipped with this concept of the relational self as the canonical figure of political agents, be they men, women, corporations or even States, one can indeed see clearly why, in a hyperconnected era, the recommendations Pasquale offers in his final two chapters, “Watching (and Improving) the Watchers” and “Towards an Intelligible Society,” are so important. Indeed, if watchdogging the watchers has been criticized earlier in this review as an exhausting laboring activity that does not deliver on accountability, improving the watchers goes beyond watchdogging and strives for greater accountability. With regard to intelligibility, I think that it is indeed much more meaningful and relevant than transparency.

    Pasquale invites us to think carefully about regimes of disclosure, along three dimensions: depth, scope and timing. He calls for fair data practices that could be enhanced by establishing forms of supervision, of the kind that have been established for checking on research practices involving human subjects. Pasquale suggests that each person is entitled to an explanation of the rationale for decisions concerning them, and should have the ability to challenge those decisions. He recommends immutable audit logs for holding spying activities to account. He also calls for regulatory measures compensating for the market failures arising from the fact that dominant platforms are natural monopolies. Given the importance of reputation and ranking and the dominance of Google, he argues that the First Amendment cannot be mobilized as a wild card absolving internet giants of accountability. He calls for a “CIA for finance” and a “Corporate NSA,” believing governments should devote more effort to pursuing wrongdoing by corporate actors. He argues that the approach taken in the area of Health Fraud Enforcement could bear fruit in finance, search and reputation.

    What I appreciate in Pasquale’s call for intelligibility is that it is indeed calibrated to the needs of relational selves: to interact with each other, to make sound decisions and to orient themselves in the world. Intelligibility is different from omniscience-omnipotence. It is about making sense of the world, while keeping in mind that there are different ways to do so. Intelligibility connects relational selves to the world surrounding them and allows them to act with others and move around. In the last chapter, Pasquale mentions the importance of restoring trust and the need to nurture a public space in the hyperconnected era. He calls for an end game to the Black Box. I agree with him that conscious deception inherently dissolves plurality and the common world, and needs to be strongly combatted, but I think that much of what takes place today goes beyond that and constitutes genuinely new and uncharted territories and horizons for humankind. With plurality, we can also embrace contingency in a less dramatic way than we used to in the Modern era. Contingency is a positive approach to un-certainty. It accounts for the openness of the future. The very word un-certainty is built in such a manner that certainty is considered the ideal outcome.

    4. WWW, or Welcome to the World of Women or a World Welcoming Women[15]

    To some extent, the fears of men in a hyperconnected era reflect all-too-familiar experiences of women. Being objects of surveillance and control, laboring exhaustingly without rewards and being lost through the holes of the meritocracy net, being constrained in a specular posture towards others’ deeds: all these stances have been the fate of women’s lives for centuries, if not millennia. What men fear from the State or from “Big (br)Other”, women have experienced with men. So, welcome to the world of women….

    But this situation may be looked at more optimistically, as an opportunity for women’s voices and thoughts to go mainstream and be listened to. Now that equality between women and men is enshrined in the political and legal systems of the EU and the US, women have, concretely, been admitted to the status of “rational subject”, but that does not dissolve the masculine origin of this figure, nor the oddness or uneasiness for women in embracing it. Indeed, it was forged by men with men in mind, women, for those men, being indexed on nature. Mainstreaming the figure of the relational self, born in the mind of Arendt, will be much more inspiring and empowering for women than the rational subject was. In fact, it enhances their agency and the performativity of their thoughts and theories. So, are we heading towards a world welcoming women?

    In conclusion, the advent of Big Data can be looked at in two ways. The first is to look at it as the endpoint of the materialisation of all the promises and fears of Modern times. The second is to look at it as a wake-up call for a new beginning; indeed, by making obvious the absurdity, or the price, of following the Modern conceptual frameworks all the way down to their consequences, it calls for thinking on new grounds about how to make sense of the human condition and make it thrive. The former makes humans redundant, is self-fulfilling and does not deserve human attention and energy. Without any hesitation, I opt for the latter, i.e. the wake-up call and the new beginning.

    Let’s engage in this hyperconnected era bearing in mind Virginia Woolf’s “Think we must”[16] and, thereby, shape and honour the human condition in the 21st century.
    _____

    Nicole Dewandre has academic degrees in engineering, economics and philosophy. She has been a civil servant at the European Commission since 1983. She was an advisor to the President of the Commission, Jacques Delors, between 1986 and 1993. She then worked in EU research policy, promoting gender equality, partnership with civil society, and sustainability. Since 2011, she has worked on the societal issues raised by the deployment of ICT technologies. She has published widely on organizational and political issues relating to ICTs.

    The views expressed in this article are the sole responsibility of the author and in no way represent the view of the European Commission and its services.

    Back to the essay
    _____

    Acknowledgments: This review has been made possible by the Faculty of Law of the University of Maryland in Baltimore, which hosted me as a visiting fellow for the month of September 2015. I am most grateful to Frank Pasquale, first for having written this book, but also for engaging with me so patiently over the month of September and paying so much attention to my arguments, even suggesting in some instances the best way to make my points when I was diverging from his views. I would also like to thank Jérôme Kohn, director of the Hannah Arendt Center at the New School for Social Research, for his encouragement in pursuing the mobilisation of Hannah Arendt’s legacy in my professional environment. I am also indebted, notably for the conclusion, to the inspiring conversations I have had with Shauna Dillavou, executive director of CommunityRED, and Soraya Chemaly, Washington-based feminist writer, critic and activist. Last, and surely not least, I would like to thank David Golumbia for welcoming this piece in his journal and for the care he has put into editing this text written by a non-native English speaker.

    [1] This change of perspective has, in itself, the interesting side effect of pulling the carpet out from under the feet of those “addicted to speed”; Pasquale is right to point to this addiction (195) as one of the reasons “why so little is being done” to address the challenges arising from the hyperconnected era.

    [2] Williams, Truth, Autonomy, and Speech, New York: New York University Press, 2004 (35).

    [3] See, e.g., Nicole Dewandre, ‘Rethinking the Human Condition in a Hyperconnected Era: Why Freedom Is Not About Sovereignty But About Beginnings’, in The Onlife Manifesto, ed. Luciano Floridi, Springer International Publishing, 2015 (195–215).

    [4] Williams, Truth, Autonomy, and Speech (32).

    [5] Literally: “spoken words fly; written ones remain.”

    [6] Apart from action, Arendt distinguishes two other fundamental human activities that, together with action, account for the vita activa: labour and work. Labour is the activity that men and women engage in to stay alive, as organic beings: “the human condition of labour is life itself”. Labour is totally pervaded by necessity and processes. Work is the activity that men and women engage in to produce objects and inhabit the world: “the human condition of work is worldliness”. Work is pervaded by a means-to-end logic, an instrumental rationale.

    [7] Arendt, The Human Condition, 1958; reissued, University of Chicago Press, 1998 (159).

    [8] Arendt, The Human Condition (160).

    [9] Seyla Benhabib, The Reluctant Modernism of Hannah Arendt, Revised edition, Lanham, MD: Rowman & Littlefield Publishers, 2003, (211).

    [10] See notably the work of Lynn Stout and the Frank Bold Foundation’s project on the purpose of corporations.

    [11] This expression was introduced in the Onlife Initiative by Charles Ess, but in a different perspective. Ess’s relational self is grounded in pre-Modern and Eastern/oriental societies. He writes: “In “Western” societies, the affordances of what McLuhan and others call “electric media,” including contemporary ICTs, appear to foster a shift from the Modern Western emphases on the self as primarily rational, individual, and thereby an ethically autonomous moral agent towards greater (and classically “Eastern” and pre-Modern) emphases on the self as primarily emotive, and relational—i.e., as constituted exclusively in terms of one’s multiple relationships, beginning with the family and extending through the larger society and (super)natural orders”. Ess, in Floridi, ed., The Onlife Manifesto (98).

    [12] Williams, Truth, Autonomy, and Speech.

    [13] Hannah Arendt and Jerome Kohn, Between Past and Future, Revised edition, New York: Penguin Classics, 2006 (55).

    [14] See Richard Rorty, Contingency, Irony, and Solidarity, New York: Cambridge University Press, 1989.

    [15] I thank Shauna Dillavou for suggesting these alternate meanings for “WWW.”

    [16] Virginia Woolf, Three Guineas, New York: Harvest, 1966.

  • "Still Ahead Somehow:" Paul Amar’s The Security Archipelago

    "Still Ahead Somehow:" Paul Amar’s The Security Archipelago

    A Review of Paul Amar’s The Security Archipelago: Human-Security States, Sexuality Politics, and the End of Neoliberalism (Durham and London: Duke University Press, 2013).

    By Neel Ahuja

    One of the most widely reported news stories of the 2011 revolution in Egypt involved sexual assaults and other physical attacks on women in Cairo’s Tahrir Square, where mass protests led to the ouster of former President Hosni Mubarak. Paul Amar’s singular book The Security Archipelago explores, among other topics, the Egyptian military council’s attempt to burnish its own authority to “rescue the nation” and its “dignity” by constructing the Arab Spring uprising as a destructive site of violence and moral degradation (3). Mirroring the racialized discourse of international news media who invoked animal metaphors to represent dissent at Tahrir as an articulation of pathological urban violence and frenzy (203), the counter-revolutionary campaign allowed the military to arrest and incarcerate protesters by associating them with demeaned markers of class status and sexuality.

    For Amar, this conjunction of moralizing statism and the militarization of social life is indicative of a particular governmental form he calls “human security,” a set of transnational juridical, political, economic, and police practices and discourses that become especially legible in sites of urban crisis and struggle. Amar names four interlocking logics that constitute human security: evangelical humanitarianism, police paramilitarism, juridical personalism, and workerist empowerment (7). He unveils these logics by constructing a dense analysis of security politics linking the megacities of Cairo and Rio de Janeiro.

    The chapters explore crisis moments that reveal connections between the militarization of police, the development of urban planning and development policy, tourism, the management of labor processes, and racialized and gendered struggles over rights and citizenship. Such connections arise in crises around public protest, attempts by municipal and national authorities to market heritage (in the form of Islamic heritage architecture or samba music) to tourists, coalitions between labor and evangelical Christian groups to combat trafficking and corruption, the attempts of 9/11 plotter Muhammad Atta to develop a theory of Islamic urban planning, and the policing of city space during major international development meetings. These wide-ranging case studies ground the book’s critical security analysis in sites of struggle, making important contributions to the understanding of the spread of urban violence and progressive social policy in Brazil and the rise of left-right coalitions in Islamic urban planning and revolutionary uprisings in Egypt.

    Throughout the book, public contestation over the permissible limits of urban sexuality emerges as a key factor inciting securitization: it serves as a marker of cultural tradition, a policed indicator of urban space and capital networking, and a sign of political dissent. For Amar, the new subjects of security “are portrayed as victimized by trafficking, prostituted by ‘cultures of globalization,’ sexually harassed by ‘street’ forms of predatory masculinity, or ‘debauched’ by liberal values” (15). In this way, the “human” at the heart of “human security” is a figure rendered precarious by the public articulation of sexuality with processes of economic and social change.

    If this method of transnational scholarship showcases the unique strengths of Amar’s interdisciplinary training, Portuguese and Arabic language skills, and past work as a development specialist, it brilliantly articulates a set of connections between the cities of Rio and Cairo evident in their parallel experiences of neoliberal economic policies, redevelopment, militarization of policing, NGO intervention, and rise as significant “semiperipheral” or “first-third-world” metropoles. In contrast to racialized international relations and conflict studies scholarship that fails continually to break from the mythologies of the clash of civilizations, Amar’s book offers a fascinating analysis of how religious politics, policing, and workerist humanisms interface in the urban crises of two megacities whose representation is often overwritten by stereotyped descriptions of either oriental despotism (Cairo) or tropicalist transgression (Rio).

    These cities, in fact, share geographic, economic, and political connections that justify what Amar describes as an archipelagic method: “The practices, norms, and institutional products of [human security] struggles have… traveled across an archipelago, a metaphorical island chain, of what the private security industry calls ‘hotspots’–enclaves of panic and laboratories of control–the most hypervisible of which have emerged in Global South megacities” (15-16). The security archipelago is also a formation that includes but transcends the state; it is “parastatal” and reflects the ways in which states in the Global South, NGO activists, and state attempts to humanize security interventions have produced a set of governmentalities that attempt to incorporate and govern public challenges to austerity politics and militarism.

    As such, Amar’s book offers a two-pronged challenge to dominant theories of neoliberalism. First, it clarifies that although many of the wealthy countries still battle over a politics of austerity, the so-called Washington Consensus combining financial deregulation, privatization, and reduction of trade barriers no longer holds sway internationally or even in its spaces of origin. Indeed, Amar claims that even the Beijing Consensus — the turn since the 1990s to a strong state hand in development investment combined with the controlled growth of highly regulated markets — is being supplanted by the parastatal form of the human security regime. Second, this line of thought requires for Amar a methodological shift. Amar claims, “we can envision an end to the term neoliberalism as an overburdened and overextended interpretive lens for scholars” given “the demise, in certain locations and circuits, of a hegemonic set of market-identified subjects, locations, and ideologies of politics” (236). The Security Archipelago offers an alternative to theories of globalization that privilege imperial states as the primary forces governing the production of transnational power dynamics. Without making the common move of romanticizing a static vision of either locality or indigeneity in the conceptualization of resistance to globalization, Amar locates in the semiperiphery a crossroads between the forces of national development and transnational capital. It is at this crossroads that resistances to the violence of austerity are parlayed into new security regimes in the name of the very human endangered by capitalism’s market authoritarianism.

    It is notable that the analysis of sexuality, with its attendant moral incitements to security, largely drops out of Amar’s concluding analysis of the debates on the end of neoliberalism. He does mention sexuality when proclaiming a shift from a consuming subject to a worker in the postneoliberal transition: “postneoliberal work centers more on the fashioning of moralization, care, humanization, viable sexualities, and territories that can be occupied. And the worker can see production as the collective work of vigilance and purification, which all too often is embedded through paramilitarization and enforcement practices” (243). While the book expertly reveals the emphasis on emergent forms of moral labor and securitizing care in the public regulation of sexuality, it also documents that moral crises and policing around the sexuality of samba, for example, are layered by the nexus of gentrification, private redevelopment, and transnational tourism that commonly attract the label neoliberalism. This point does not directly undermine Amar’s argument but suggests that further discussion of sexuality’s relation to human security regimes might engender an analytic revision of the notion of postneoliberal transition. The public articulation of sexuality as the site of urban securitization might rather reveal the regeneration of intersecting consumption forms and affective labors of logics of marketization and securitization that are divided geographically but dynamically interrelated.

    The fact that Amar’s book raises this problem reveals the significance of the study for moving forward scholarship on sexuality, security, and globality — as individual objects of study and as intertwined ones. As scholars focusing, for example, on homonationalist marriage practices in the global north continue to use the analytic frame of neoliberalism, Amar’s study might press them to ask how the moral articulation of the marriage imperative exerts a securitizing force that transcends market logics. Similarly, Amar’s focus on both sexuality and the semiperiphery offers significant geographic and methodological disruptions to the literatures on neoliberalism, the rise of East Asian financial capital, and crisis theory. His unique method challenges interdisciplinary social theorizing to grapple with the archipelagic nature of contemporary forces of social precarity and securitization.

    Neel Ahuja is associate professor of postcolonial studies in the Department of English and Comparative Literature at UNC. He is the author of the forthcoming Bioinsecurities: Disease Interventions, Empire, and the Government of Species (Duke UP).

  • The Ground Beneath the Screens

    The Ground Beneath the Screens

    a review of Jussi Parikka, A Geology of Media (University of Minnesota Press, 2015) and The Anthrobscene (University of Minnesota Press, 2015)
    by Zachary Loeb

    ~

    Despite the aura of ethereality that clings to the Internet, today’s technologies have not shed their material aspects. Digging into the materiality of such devices does much to trouble the adoring declarations of “The Internet Is the Answer.” What is unearthed by digging is the ecological and human destruction involved in the creation of the devices on which the Internet depends—a destruction that Jussi Parikka considers an obscenity at the core of contemporary media.

    Parikka’s tale begins deep below the Earth’s surface, in deposits of a host of different minerals that are integral to the variety of devices without which you could not be reading these words on a screen. This story encompasses the labor conditions in which these minerals are extracted and eventually turned into finished devices; it tells of satellites, undersea cables, and massive server farms; and it includes a dark premonition of the return to the Earth that will occur following the death (possibly premature, owing to planned obsolescence) of the screen at which you are currently looking.

    In a connected duo of new books, The Anthrobscene (referenced below as A) and A Geology of Media (referenced below as GM), media scholar Parikka wrestles with the materiality of the digital. Parikka examines the pathways by which planetary elements become technology, while considering the transformations entailed in the anthropocene and artistic attempts to render all of this understandable. Drawing upon thinkers ranging from Lewis Mumford to Donna Haraway, and from the Situationists to Siegfried Zielinski, Parikka constructs a way of approaching media that emphasizes that it is born of the Earth, borne upon the Earth, and fated eventually to return to its place of origin. Parikka’s work demands that materiality be taken seriously not only by those who study media but also by all of those who interact with media – it is a demand that the anthropocene must be made visible.

    Time is an important character in both The Anthrobscene and A Geology of Media, for it provides the context in which one can understand the long history of the planet as well as the scale of the years required for media to truly decompose. Parikka argues that materiality needs to be considered beyond a simple focus upon machines and infrastructure, and should instead take into account “the idea of the earth, light, air, and time as media” (GM 3). Geology is harnessed as a method of ripping open the black box of technology and analyzing what the components inside are made of – copper, lithium, coltan, and so forth. The engagement with geological materiality is key for understanding the environmental implications of media, both in terms of the technologies currently in circulation and in terms of predicting the devices that will emerge in the coming years. Too often the planet is given short shrift in considerations of the technical, but “it is the earth that provides for media and enables it”; it is “the affordances of its geophysical reality that make technical media happen” (GM 13). Drawing upon Mumford’s writings about “paleotechnics” and “neotechnics” (concepts which Mumford had himself adapted from the work of Patrick Geddes), Parikka emphasizes that both the age of coal (paleotechnics) and the age of electricity (neotechnics) are “grounded in the wider mobilization of the materiality of the earth” (GM 15). Indeed, electric power is often still quite reliant upon the extraction and burning of coal.

    More than a pithy neologism, the term “anthrobscene” is introduced by Parikka to highlight the ecological violence inherent in “the massive changes human practices, technologies, and existence have brought across the ecological board” (GM 16-17), shifts that often go under the more morally vague title of “the anthropocene.” For Parikka, “the addition of the obscene is self-explanatory when one starts to consider the unsustainable, politically dubious, and ethically suspicious practices that maintain technological culture and its corporate networks” (A 6). Like a curse word bleeped out by television censors, much of the obscenity of the anthropocene goes unheard even as governments and corporations compete with ever greater élan for the privilege of pillaging portions of the planet – Parikka seeks to reinscribe the obscenity.

    The world of high tech media still relies upon the extraction of metals from the earth and, as Parikka shows, a significant portion of the minerals mined today are destined to become part of media technologies. Therefore, in contemplating geology and media it can be fruitful to approach media using Zielinski’s notion of “deep time” wherein “durations become a theoretical strategy of resistance against the linear progress myths that impose a limited context for understanding technological change” (GM 37, A 23). Deploying the notion of “deep time” demonstrates the ways in which a “metallic materiality links the earth to the media technological” while also emphasizing the temporality “linked to the nonhuman earth times of decay and renewal” (GM 44, A 30). Thus, the concept of “deep time” can be particularly useful in thinking through the nonhuman scales of time involved in media, such as the centuries required for e-waste to decompose.

    Whereas “deep time” provides insight into media’s temporal quality, “psychogeophysics” presents a method for thinking through the spatial. “Psychogeophysics” is a variation of the Situationist idea of “the psychogeographical,” but where the Situationists focused upon the exploration of the urban environment, “psychogeophysics” (which appeared as a concept in a manifesto in Mute magazine) moves beyond the urban sphere to contemplate the oblate spheroid that is the planet. What the “geophysical twist brings is a stronger nonhuman element that is nonetheless aware of the current forms of exploitation but takes a strategic point of view on the nonorganic too” (GM 64). Whereas an emphasis on the urban winds up privileging the world built by humans, the shift brought by “psychogeophysics” allows people to bear witness to “a cartography of architecture of the technological that is embedded in the geophysical” (GM 79).

    The material aspects of media technology consist of many areas where visibility has broken down. In many cases this is suggestive of an almost willful disregard (ignoring exploitative mining and labor conditions as well as the harm caused by e-waste), but in still other cases it is reflective of the minute scales that materiality can assume (such as metallic dust that dangerously fills workers’ lungs after they shine iPad cases). The devices that are surrounded by an optimistic aura in some nations thus obtain this sheen at the literal expense of others: “the residue of the utopian promise is registered in the soft tissue of a globally distributed cheap labor force” (GM 89). Indeed, those who fawn with religious adoration over the newest high-tech gizmo may simply be demonstrating that nobody they know personally will be sickened in assembling it, or be poisoned by it when it becomes e-waste. An emphasis on geology and materiality, as Parikka demonstrates, shows that the era of digital capitalism contains many echoes of the exploitation characteristic of bygone periods – appropriation of resources, despoiling of the environment, mistreatment of workers, exportation of waste: these tragedies have never ceased.

    Digital media is excellent at creating a futuristic veneer of “smart” devices and immaterial-sounding aspects such as “the cloud,” and yet a material analysis demonstrates the validity of the old adage “the more things change the more they stay the same.” Despite efforts to “green” digital technology, “computer culture never really left the fossil (fuel) age anyway” (GM 111). But beyond relying on fossil fuels for energy, these devices can themselves be considered fossils-to-be as they go to rest in dumps wherein they slowly degrade, so that “we can now ask what sort of fossil layer is defined by the technical media condition…our future fossils layers are piling up slowly but steadily as an emblem of an apocalypse in slow motion” (GM 119). We may not be surrounded by dinosaurs and trilobites, but the digital media that we encounter are tomorrow’s fossils – which may be quite mysterious and confounding to those who, thousands of years hence, dig them up. Businesses that make and sell digital media thrive on a sense of time that consists of planned obsolescence, regular updates, and new products, but to take responsibility for the materiality of these devices requires that “notions of temporality must escape any human-obsessed vocabulary and enter into a closer proximity with the fossil” (GM 135). It requires a woebegone recognition that our technological detritus may be present on the planet long after humanity has vanished.

    The living dead that lurch alongside humanity today are not the zombies of popular entertainment but the undead media devices that provide the screens for consuming such distractions. Already fossils, bound to be disposed of long before they stop working, these devices make it vital “to be able to remember that media never dies, but remains as toxic residue,” and thus “we should be able to repurpose and reuse solutions in new ways, as circuit bending and hardware hacking practices imply” (A 41). We live with these zombies, we live among them, and even when we attempt to pack them off to unseen graveyards they survive under the surface. A Geology of Media is thus “a call for further materialization of media not only as media but as that bit which it consists of: the list of the geophysical elements that give us digital culture” (GM 139).

    It is not simply that “machines themselves contain a planet” (GM 139) but that the very materiality of the planet is becoming riddled with a layer of fossilized machines.

    * * *

    The image of the world conjured up by Parikka in A Geology of Media and The Anthrobscene is far from comforting – after all, Parikka’s preference for talking about “the anthrobscene” does much to set a funereal tone. Nevertheless, these two books by Parikka do much to demonstrate that “obscene” may be a very fair word to use when discussing today’s digital media. By emphasizing the materiality of media, Parikka avoids the thorny discussions of the benefits and shortfalls of various platforms to instead pose a more challenging ethical puzzle: even if a given social media platform can be used for ethical ends, to what extent is this irrevocably tainted by the materiality of the device used to access these platforms? It is a dark assessment which Parikka describes without much in the way of optimistic varnish, as he describes the anthropocene (on the first page of The Anthrobscene) as “a concept that also marks the various violations of environmental and human life in corporate practices and technological culture that are ensuring that there won’t be much of humans in the future scene of life” (A 1).

    And yet both books manage to avoid the pitfall of simply coming across as wallowing in doom. Parikka is not pining for a primal pastoral fantasy, but is instead seeking to provide new theoretical tools with which his readers can attempt to think through the materiality of media. Here, Parikka’s emphasis on the way that digital technology is still heavily reliant upon mining and fossil fuels acts as an important counter to gee-whiz futurism. Similarly, Parikka’s mobilization of the notion of “deep time” and fossils acts as an important contribution to thinking through the lifecycles of digital media. Dwelling on the undeath of a smartphone slowly decaying in an e-waste dump over centuries is less about evoking a fearful horror than it is about making clear the horribleness of technological waste. The discussion of “deep time” seems like it can function as a sort of geological brake on accelerationist thinking, by emphasizing that no matter how fast humans go, the planet has its own sense of temporality. Throughout these two slim books, Parikka draws upon a variety of cultural works to strengthen his argument: ranging from the earth-pillaging mad scientist of Arthur Conan Doyle’s Professor Challenger, to the Coal Fired Computers of Yokokoji-Harwood (YoHa), to Molleindustria’s smartphone game “Phone Story,” which plays out on a smartphone’s screen the tangles of extraction, assembly, and disposal that are as much a part of the smartphone’s story as whatever uses the final device is eventually put to. Cultural and artistic works, when they intend to, may be able to draw attention to the obscenity of the anthropocene.

    The Anthrobscene and A Geology of Media are complementary texts, but one need not read both in order to understand the other. As part of the University of Minnesota Press’s “Forerunners” series, The Anthrobscene is a small book (in terms of page count and physical size) that moves at a brisk pace; in some ways it functions as a sort of greatest-hits version of A Geology of Media, containing many of the essential high points but lacking some of the elements that ultimately make A Geology of Media a satisfying and challenging book. Yet the two books work wonderfully together, with The Anthrobscene acting as a sort of primer; that a reader of both will detect many similarities between them is not a major drawback, for these books tell a story that often goes unheard today.

    Those looking for neat solutions to the anthropocene’s quagmire will not find them in either of these books – and as these texts are primarily aimed at an academic audience this is not particularly surprising. These books are not caught up in offering hope – be it false or genuine. At the close of A Geology of Media, when Parikka discusses the need “to repurpose and reuse solutions in new ways, as circuit bending and hardware hacking practices imply” (A 41), this does not appear as a perfect panacea but as a way of possibly adjusting. Parikka is correct in emphasizing the ways in which the extractive regimes that characterized the paleotechnic continue on in the neotechnic era, and this is a point which Mumford himself made regarding the way that the various “technic” eras do not represent clean breaks from each other. As Mumford put it, “the new machines followed, not their own pattern, but the pattern laid down by previous economic and technical structures” (Mumford 2010, 236) – in other words, just as Parikka explains, the paleotechnic survives well into the neotechnic. The reason this is worth mentioning is not to challenge Parikka, but to highlight that the “neotechnic” is not meant as a characterization of a utopian technical epoch that has parted ways with the exploitation that had characterized the preceding period. For Mumford the need was to move beyond the anthropocentrism of the neotechnic period and move towards what he called (in The Culture of Cities) the “biotechnic,” a period wherein “technology itself will be oriented toward the culture of life” (Mumford 1938, 495). Granted, as Mumford’s later work and as these books by Parikka make clear – instead of arriving at the “biotechnic” what we might get is instead the anthrobscene. And reading these books by Parikka makes it clear that one could not characterize the anthrobscene as being “oriented toward the culture of life” – indeed, it may be exactly the opposite. Or, to stick with Mumford a bit longer, it may be that the anthrobscene is the result of the triumph of “authoritarian technics” over “democratic” ones. Nevertheless, the true dirge-like element of Parikka’s books is that they raise the possibility that it may well be too late to shift paths – that the neotechnic was perhaps just a coat of fresh paint applied to hide the rusting edifice of paleotechnics.

    A Geology of Media and The Anthrobscene are conceptual toolkits: they provide the reader with the drills and shovels needed to dig into the materiality of digital media. But what these books make clear is that along with the pickaxe and the archeologist’s brush, whoever digs into the materiality of media also needs a gas mask to endure the noxious fumes. Ultimately, what Parikka shows is that the Situationist-inspired graffiti of May 1968, “beneath the streets – the beach,” needs to be rewritten in the anthrobscene.

    Perhaps a fitting variation for today would read: “beneath the streets – the graveyard.”
    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck. He is a frequent contributor to The b2 Review Digital Studies section.

    _____

    Works Cited

    Mumford, Lewis. 1938. The Culture of Cities. New York: Harcourt, Brace and Company.

    Mumford, Lewis. 2010. Technics and Civilization. Chicago: University of Chicago Press.

  • Dissecting the “Internet Freedom” Agenda

    Dissecting the “Internet Freedom” Agenda

    a review of Shawn M. Powers and Michael Jablonski, The Real Cyber War: The Political Economy of Internet Freedom (University of Illinois Press, 2015)
    by Richard Hill
    ~
    Disclosure: the author of this review is thanked in the Preface of the book under review.

    Both radical civil society organizations and mainstream defenders of the status quo agree that the free and open Internet is threatened: see for example the Delhi Declaration, Bob Hinden’s 2014 Year End Thoughts, and Kathy Brown’s March 2015 statement at a UNESCO conference. The threats include government censorship and mass surveillance, but also the failure of governments to control rampant industry concentration and commercial exploitation of personal data, which increasingly takes the form of providing “free” services in exchange for personal information that is resold at a profit, or used to provide targeted advertising, also at a profit.

    In Digital Disconnect, Robert McChesney has explained how the Internet, which was supposed to be a force for the improvement of human rights and living conditions, has been used to erode privacy and to increase the concentration of economic power, to the point where it is becoming a threat to democracy. In Digital Depression, Dan Schiller has documented how US policies regarding the Internet have favored its geo-economic and geo-political goals, in particular the interests of its large private companies that dominate the information and communications technology (ICT) sector worldwide.

    Shawn M. Powers and Michael Jablonski’s seminal new book The Real Cyber War takes us further down the road of understanding what went wrong, and what might be done to correct the situation. Powers, an assistant professor at Georgia State University, specializes in international political communication, with particular attention to the geopolitics of information and information technologies. Jablonski is an attorney and presidential fellow, also at Georgia State.

    There is a vast literature on internet governance (see for example the bibliography in Radu, Chenou, and Weber, eds., The Evolution of Global Internet Governance), but much of it is ideological and normative: the author espouses a certain point of view, explains why that point of view is good, and proposes actions that would lead to the author’s desired outcome (a good example is Milton Mueller’s well researched but utopian Networks and States). There is nothing wrong with that approach: on the contrary, such advocacy is necessary and welcome.

    But a more detached analytical approach is also needed, and Powers and Jablonski provide exactly that. Their objective is to help us understand (citing from p. 19 of the paperback edition) “why states pursue the policies they do”. The book “focuses centrally on understanding the numerous ways in which power and control are exerted in cyberspace” (p. 19).

    Starting from the rather obvious premise that states compete to shape international policies that favor their interests, and using the framework of political economy, the authors outline the geopolitical stakes and show how questions of power, and not human rights, are the real drivers of much of the debate about Internet governance. They show how the United States has deliberately used a human rights discourse to promote policies that further its geo-economic and geo-political interests. And how it has used subsidies and government contracts to help its private companies to acquire or maintain dominant positions in much of the ICT sector.

    Jacob Silverman has decried “the misguided belief that once power is arrogated away from doddering governmental institutions, it will somehow find itself in the hands of ordinary people”. Powers and Jablonski dissect the mechanisms by which vibrant government institutions deliberately transferred power to US corporations in order to further US geo-economic and geo-political goals.

    In particular, they show how a “freedom to connect” narrative is used by the USA to attempt to transform information and personal data into commercial commodities that should be subject to free trade. Yet all states (including the US) regulate, at least to some extent, the flow of information within and across their borders. If information is the “new oil” of our times, then it is not surprising that states wish to shape the production and flow of information in ways that favor their interests. Thus it is not surprising that states such as China, India, and Russia have started to assert sovereign rights to control some aspects of the production and flow of information within their borders, and that European Union courts have made decisions on the basis of European law that affect global information flows and access.

    As the authors put the matter (p. 6): “the [US] doctrine of internet freedom … is the realization of a broader [US] strategy promoting a particular conception of networked communication that depends on American companies …, supports Western norms …, and promotes Western products.” (I would personally say that it actually supports US norms and US products and services.) As the authors point out, one can ask (p. 11): “If states have a right to control the types of people allowed into their territory (immigration), and how its money is exchanged with foreign banks, then why don’t they have a right to control information flows from foreign actors?”

    To be sure, any such controls would have to comply with international human rights law. But the current US policies go much further, implying that those human rights laws must be implemented in accordance with the US interpretation, meaning few restrictions on freedom of speech, weak protection of privacy, and ever stricter protection for intellectual property. As Powers and Jablonski point out (p. 31), the US does not hesitate to promote restrictions on information flows when that promotes its goals.

    Again, the authors do not make value judgments: they explain in Chapter 1 how the US deliberately attempts to shape (to a large extent successfully) international policies, so that both actions and inactions serve its interests and those of the large corporations that increasingly influence US policies.

    The authors then explain how the US military-industrial complex has morphed into an information-industrial complex, with deleterious consequences for both industry and government, consequences such as “weakened oversight, accountability, and industry vitality and competitiveness” (p. 23) that create risks for society and democracy. As the authors say, the shift “from adversarial to cooperative and laissez-faire rule making is a keystone moment in the rise of the information-industrial complex” (p. 61).

    As a specific example, they focus on Google, showing how it (largely successfully) aims to control and dominate all aspects of the data market, from production, through extraction, refinement, infrastructure and demand. A chapter is devoted to the economics of internet connectivity, showing how US internet policy is basically about getting the largest number of people online, so that US companies can extract ever greater profits from the resulting data flows. They show how the network effects, economies of scale, and externalities that are fundamental features of the internet favor first-movers, which are mostly US companies.

    The remedy to such situations is well known: government intervention, widely accepted regarding air transport, road transport, pharmaceuticals, etc., and yet unthinkable for many regarding the internet. But why? As the authors put the matter (p. 24): “While heavy-handed government controls over the internet should be resisted, so should a system whereby internet connectivity requires the systematic transfer of wealth from the developing world to the developed.” But freedom of information is put forward to justify specific economic practices which would not be easy to justify otherwise, for example “no government taxes companies for data extraction or for data imports/exports, both of which are heavily regulated aspects of markets exchanging other valuable commodities” (p. 97).

    The authors show in detail how the so-called internet multi-stakeholder model of governance is dominated by insiders and used “under the veil of consensus” (p. 136) to further US policies and corporations. A chapter is devoted to explaining how all states control, at least to some extent, information flows within their territories, presenting detailed studies of how four states (China, Egypt, Iran and the USA) have addressed the challenges of maintaining political control while respecting (or not) freedom of speech. The authors then turn to the very current topic of mass surveillance, and its relation to anonymity, showing how, when the US presents the internet and “freedom to connect” as analogous to public speech and town halls, it is deliberately arguing against anonymity and against privacy – and this of course in order to avoid restrictions on its mass surveillance activities.

    Thus the authors posit that there are tensions between the US call for “internet freedom” and other states’ calls for “information sovereignty”, and analyze the 2012 World Conference on International Telecommunications from that point of view.

    Not surprisingly, the authors conclude that international cooperation, recognizing the legitimate aspirations of all the world’s peoples, is the only proper way forward. As the authors put the matter (p. 206): “Activists and defenders of the original vision of the Web as a ‘fair and humane’ cyber-civilization need to avoid lofty ‘internet freedom’ declarations and instead champion specific reforms required to protect the values and practices they hold dear.” And it is with that in mind, as a counterweight to US and US-based corporate power, that a group of civil society organizations have launched the Internet Social Forum.

    Anybody who is seriously interested in the evolution of internet governance and its impact on society and democracy will enjoy reading this well-researched book and its clear exposition of key facts. One can only hope that the Council of Europe will heed Powers and Jablonski’s advice and avoid adopting more resolutions such as the recent recommendation to member states by its Committee of Ministers, which merely pander to the US discourse and US power that Powers and Jablonski describe so aptly. And one can fondly hope that this book will help to inspire a change in course that will restore the internet to what it might become (and what many thought it was supposed to be): an engine for democracy and social and economic progress, justice, and equity.
    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2 Review Digital Studies magazine.


  • The Internet vs. Democracy

    The Internet vs. Democracy

    a review of Robert W. McChesney, Digital Disconnect: How Capitalism Is Turning the Internet Against Democracy (The New Press, 2014)
    by Richard Hill
    ~
    Many of us have noticed that much of the news we read is the same, no matter which newspaper or web site we consult: they all seem to be recycling the same agency feeds. To understand why this is happening, there are few better analyses than the one developed by media scholar Robert McChesney in his most recent book, Digital Disconnect. McChesney is a Professor in the Department of Communication at the University of Illinois at Urbana-Champaign, specializing in the history and political economy of communications. He is the author or co-author of more than 20 books, among the best-known of which are The Endless Crisis: How Monopoly-Finance Capital Produces Stagnation and Upheaval from the USA to China (with John Bellamy Foster, 2012), The Political Economy of Media: Enduring Issues, Emerging Dilemmas (2008), Communication Revolution: Critical Junctures and the Future of Media (2007), and Rich Media, Poor Democracy: Communication Politics in Dubious Times (1999), and is co-founder of Free Press.

    Many see the internet as a powerful force for improvement of human rights, living conditions, the economy, rights of minorities, etc. And indeed, like many communications technologies, the internet has the potential to facilitate social improvements. But in reality the internet has recently been used to erode privacy and to increase the concentration of economic power, leading to increasing income inequalities.

    One might have expected that democracies would have harnessed the internet to serve the interests of their citizens, as they largely did with other technologies such as roads, telegraphy, telephony, air transport, pharmaceuticals (even if they used these to serve only the interests of their own citizens and not the general interests of mankind).

    But this does not appear to be the case with respect to the internet: it is used largely to serve the interests of a few very wealthy individuals, or certain geo-economic and geo-political interests. As McChesney puts the matter: “It is supremely ironic that the internet, the much-ballyhooed champion of increased consumer power and cutthroat competition, has become one of the greatest generators of monopoly in economic history” (131 in the print edition). This trend to use technology to favor special interests, not the general interest, is not unique to the internet. As Josep Ramoneda puts the matter: “We expected that governments would submit markets to democracy and it turns out that what they do is adapt democracy to markets, that is, empty it little by little.”

    McChesney’s book explains why this is the case: despite its great promise and potential to increase democracy, various factors have turned the internet into a force that is actually destructive to democracy, and that favors special interests.

    McChesney reminds us what democracy is, citing Aristotle (53): “Democracy [is] when the indigent, and not the men of property are the rulers. If liberty and equality … are chiefly to be found in democracy, they will be best attained when all persons alike share in the government to the utmost.”

    He also cites US President Lincoln’s 1861 warning against despotism (55): “the effort to place capital on an equal footing with, if not above, labor in the structure of government.” According to McChesney, it was imperative for Lincoln that the wealthy not be permitted to have undue influence over the government.

    Yet what we see today in the internet is concentrated wealth in the form of large private companies that exert increasing influence over public policy matters, going so far as to call openly for governance systems in which they have equal decision-making rights with the elected representatives of the people. Current internet governance mechanisms are celebrated as paragons of success, whereas in fact they have not been successful in achieving the social promise of the internet. And it has even been said that such systems need not be democratic.

    What sense does it make for the technology that was supposed to facilitate democracy to be governed in ways that are not democratic? It makes business sense, of course, in the sense of maximizing profits for shareholders.

    McChesney explains how profit-maximization in the excessively laissez-faire regime that is commonly called neoliberalism has resulted in increasing concentration of power and wealth, social inequality and, worse, erosion of the press, leading to erosion of democracy. Nowhere is this more clearly seen than in the US, which is the focus of McChesney’s book. Not only has the internet eroded democracy in the US, it is used by the US to further its geo-political goals; and, adding insult to injury, it is promoted as a means of furthering democracy. Of course it could and should do so, but unfortunately it does not, as McChesney explains.

    The book starts by noting the importance of the digital revolution and by summarizing the views of those who see it as an engine of good (the celebrants) versus those who point out its limitations and some of its negative effects (the skeptics). McChesney correctly notes that a proper analysis of the digital revolution must be grounded in political economy. Since the digital revolution is occurring in a capitalist system, it is necessarily conditioned by that system, and it necessarily influences that system.

    A chapter is devoted to explaining how and why capitalism does not equal democracy: on the contrary, capitalism can well erode democracy, the contemporary United States being a good example. To dig deeper into the issues, McChesney approaches the internet from the perspective of the political economy of communication. He shows how the internet has profoundly disrupted traditional media, and how, contrary to the rhetoric, it has reduced competition and choice – because the economies of scale and network effects of the new technologies inevitably favor concentration, to the point of creating natural monopolies (who is number two after Facebook? Or Twitter?).

    The book then documents how the initially non-commercial, publicly-subsidized internet was transformed into an eminently commercial, privately-owned capitalist institution, in the worst sense of “capitalist”: domination by large corporations, monopolistic markets, endless advertising, intense lobbying, and cronyism bordering on corruption.

    Having explained what happened in general, McChesney focuses on what happened to journalism and the media in particular. As we all know, it has been a disaster: nobody has yet found a viable business model for respectable online journalism. As McChesney correctly notes, vibrant journalism is a pre-condition for democracy: how can people make informed choices if they do not have access to valid information? The internet was supposed to broaden our sources of information. Sadly, it has not, for the reasons explained in detail in the book. Yet there is hope: McChesney provides concrete suggestions for how to deal with the issue, drawing on actual experiences in well functioning democracies in Europe.

    The book goes on to call for specific actions that would create a revolution in the digital revolution, bringing it back to its origins: by the people, for the people. McChesney’s proposed actions are consistent with those of certain civil society organizations, and will no doubt be taken up in the forthcoming Internet Social Forum, an initiative whose intent is precisely to revolutionize the digital revolution along the lines outlined by McChesney.

    Anybody who is aware of the many issues threatening the free and open internet, and democracy itself, will find much to reflect upon in Digital Disconnect, not just because of its well-researched and incisive analysis, but also because it provides concrete suggestions for how to address the issues.

    _____

    Richard Hill, an independent consultant based in Geneva, Switzerland, was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He frequently writes about internet governance issues for The b2 Review Digital Studies magazine.


  • Frank Pasquale — To Replace or Respect: Futurology as if People Mattered

    Frank Pasquale — To Replace or Respect: Futurology as if People Mattered

    a review of Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (W.W. Norton, 2014)

    by Frank Pasquale

    ~

    Business futurism is a grim discipline. Workers must either adapt to the new economic realities, or be replaced by software. There is a “race between education and technology,” as two of Harvard’s most liberal economists insist. Managers should replace labor with machines that require neither breaks nor sick leave. Superstar talents can win outsize rewards in the new digital economy, as they now enjoy global reach, but they will replace thousands or millions of also-rans. Whatever can be automated, will be, as competitive pressures make fairly paid labor a luxury.

    Thankfully, Erik Brynjolfsson and Andrew McAfee’s The Second Machine Age (2MA) downplays these zero-sum tropes. Brynjolfsson & McAfee (B&M) argue that the question of distribution of the gains from automation is just as important as the competitions for dominance it accelerates. 2MA invites readers to consider how societies will decide what type of bounty from automation they want, and what is wanted first. The standard, supposedly neutral economic response (“whatever the people demand, via consumer sovereignty”) is unconvincing. As inequality accelerates, the top 5% (of income earners) do 35% of the consumption. The top 1% is responsible for an even more disproportionate share of investment. Its richest members can just as easily decide to accelerate the automation of the wealth defense industry as they can allocate money to robotic construction, transportation, or mining.

    A humane agenda for automation would prioritize innovations that complement (jobs that ought to be) fulfilling vocations, and substitute machines for dangerous or degrading work. Robotic meat-cutters make sense; robot day care is something to be far more cautious about. Most importantly, retarding automation that controls, stigmatizes, and cheats innocent people, or sets up arms races with zero productive gains, should be a much bigger part of public discussions of the role of machines and software in ordering human affairs.

    2MA may set the stage for such a human-centered automation agenda. Its diagnosis of the problem of rapid automation (described in Part I below) is compelling. Its normative principles (II) are eclectic and often humane. But its policy vision (III) is not up to the challenge of channeling and sequencing automation. This review offers an alternative, while acknowledging the prescience and insight of B&M’s work.

    I. Automation’s Discontents

    For B&M, the acceleration of automation ranks with the development of agriculture, or the industrial revolution, as one of the “big stories” of human history (10-12). They offer an account of the “bounty and spread” to come from automation. “Bounty” refers to the increasing “volume, variety, and velocity” of any imaginable service or good, thanks to its digital reproduction or simulation (via, say, 3-D printing or robots). “Spread” is “ever-bigger differences among people in economic success” that they believe to be just as much an “economic consequence” of automation as bounty.[1]

    2MA briskly describes various human workers recently replaced by computers.  The poor souls who once penned corporate earnings reports for newspapers? Some are now replaced by Narrative Science, which seamlessly integrates new data into ready-made templates (35). Concierges should watch out for Siri (65). Forecasters of all kinds (weather, home sales, stock prices) are being shoved aside by the verdicts of “big data” (68). “Quirky,” a startup, raised $90 million by splitting the work of making products between a “crowd” that “votes on submissions, conducts research, suggest improvements, names and brands products, and drives sales” (87), and Quirky itself, which “handles engineering, manufacturing, and distribution.” 3D printing might even disintermediate firms like Quirky (36).

    In short, 2MA presents a kaleidoscope of automation realities and opportunities. B&M skillfully describe the many ways automation both increases the “size of the pie,” economically, and concentrates the resulting bounty among the talented, the lucky, and the ruthless. B&M emphasize that automation is creeping up the value chain, potentially substituting machines for workers paid better than the average.

    What’s missing from the book is the new wave of conflicts that would arise if those at the very top of the value chain (or, less charitably, the rent and tribute chain) were to be replaced by robots and algorithms. When BART workers went on strike, Silicon Valley worthies threatened to replace them with robots. But one could just as easily call for the venture capitalists to be replaced with algorithms. Indeed, one venture capital firm added an algorithm to its board in 2013. Travis Kalanick, the CEO of Uber, responded to a question on driver wage demands by bringing up the prospect of robotic drivers. But given Uber’s multiple legal and PR fails in 2014, a robot probably would have done a better job running the company than Kalanick.

    That’s not “crazy talk” of communistic visions along the lines of Marx’s “expropriate the expropriators,” or Chile’s failed Cybersyn.[2]  Thiel Fellow and computer programming prodigy Vitaly Bukherin has stated that automation of the top management functions at firms like Uber and AirBnB would be “trivially easy.”[3] Automating the automators may sound like a fantasy, but it is a natural outgrowth of mantras (e.g., “maximize shareholder value”) that are commonplaces among the corporate elite. To attract and retain the support of investors, a firm must obtain certain results, and the short-run paths to attaining them (such as cutting wages, or financial engineering) are increasingly narrow.  And in today’s investment environment of rampant short-termism, the short is often the only term there is.

    In the long run, a secure firm can tolerate experiments. Little wonder, then, that the largest firm at the cutting edge of automation—Google—has a secure near-monopoly in search advertising in numerous markets. As Peter Thiel points out in his recent Zero to One, today’s capitalism rewards the best monopolist, not the best competitor. Indeed, even the Department of Justice’s Antitrust Division appeared to agree with Thiel in its 1995 guidelines on antitrust enforcement in innovation markets. It viewed intellectual property as a good monopoly, the rightful reward to innovators for developing a uniquely effective process or product. And its partner in federal antitrust enforcement, the Federal Trade Commission, has been remarkably quiescent in response to emerging data monopolies.

    II. Propertizing Data

    For B&M, intellectual property—or, at least, the returns accruing to intellectual insight or labor—plays a critical role in legitimating inequalities arising out of advanced technologies. They argue that “in the future, ideas will be the real scarce inputs in the world—scarcer than both labor and capital—and the few who provide good ideas will reap huge rewards.”[4] But many of the leading examples of profitable automation are not “ideas” per se, or even particularly ingenious algorithms. They are brute force feats of pattern recognition: for example, Google’s studying past patterns of clicks to determine which search results, and which ads, will delight and persuade each of its hundreds of millions of users. The critical advantage there is the data, not the skill in working with it.[5] Google will demur, but if they were really confident, they’d license the data to other firms, certain that others couldn’t best their algorithmic prowess. They don’t, because the data is their critical, self-reinforcing advantage. It is a commonplace in big data literatures to say that the more data one has, the more valuable any piece of it becomes—something Googlers would agree with, as long as antitrust authorities aren’t within earshot.

    As sensors become more powerful and ubiquitous, feats of automated service provision and manufacture become more easily imaginable.  The Baxter robot, for example, merely needs to have a trainer show it how to move in order to ape the trainer’s own job. (One is reminded of the stories of US workers flying to India to train their replacements how to do their job, back in the day when outsourcing was the threat du jour to U.S. living standards.)

    How to train a Baxter robot. Image source: Inc. 

    From direct physical interaction with a robot, it is a short step to, say, holographic or data-driven programming. For example, a surveillance camera on a worker could, after a period of days, months, or years, potentially record every movement or statement of the worker, and replicate it, in response to whatever stimuli led to the prior movements or statements of the worker.

    B&M appear to assume that such data will be owned by the corporations that monitor their own workers.  For example, McDonalds could train a camera on every cook and cashier, then download the contents into robotic replicas. But it’s just as easy to imagine a legal regime where, say, workers’ rights to the data describing their movements would be their property, and firms would need to negotiate to purchase the rights to it.  If dance movements can be copyrighted, so too can the sweeps and wipes of a janitor. Consider, too, that the extraordinary advances in translation accomplished by programs like Google Translate are in part based on translations by humans of United Nations’ documents released into the public domain.[6] Had the translators’ work not been covered by “work-made-for-hire” or similar doctrines, they might well have kept their copyrights, and shared in the bounty now enjoyed by Google.[7]

    Of course, the creativity of translation may be greater than that displayed by a janitor or cashier. Copyright purists might thus reason that the merger doctrine denies copyrightability to the one best way (or small suite of ways) of doing something, since the idea of the movement and its expression cannot be separated. Grant that, and one could still imagine privacy laws giving workers the right to negotiate over how, and how pervasively, they are watched. There are myriad legal regimes governing, in minute detail, how information flows and who has control over it.

    I do not mean to appropriate here Jaron Lanier’s ideas about micropayments, promising as they may be in areas like music or journalism. A CEO could find some critical mass of stockers or cooks or cashiers to mimic even if those at 99% of stores demanded royalties for the work (of) being watched. But the flexibility of legal regimes of credit, control, and compensation is under-recognized. Living in a world where employers can simply record everything their employees do, or Google can simply copy every website that fails to adopt “robots.txt” protection, is not inevitable. Indeed, according to renowned intellectual property scholar Oren Bracha, Google had to “stand copyright on its head” to win that default.[8]

    Thus B&M are wise to acknowledge the contestability of value in the contemporary economy.  For example, they build on the work of MIT economists Daron Acemoglu and David Autor to demonstrate that “skill biased technical change” is a misleading moniker for trends in wage levels.  The “tasks that machines can do better than humans” are not always “low-skill” ones (139). There is a fair amount of play in the joints in the sequencing of automation: sometimes highly skilled workers get replaced before those with a less complex and difficult-to-learn repertoire of abilities.  B&M also show that the bounty predictably achieved via automation could compensate the “losers” (of jobs or other functions in society) in the transition to a more fully computerized society. By seriously considering the possibility of a basic income (232), they evince a moral sensibility light years ahead of the “devil-take-the-hindmost” school of cyberlibertarianism.

    III. Proposals for Reform

    Unfortunately, some of B&M’s other ideas for addressing the possibility of mass unemployment in the wake of automation are less than convincing.  They praise platforms like Lyft for providing new opportunities for work (244), perhaps forgetting that, earlier in the book, they described the imminent arrival of the self-driving car (14-15). Of course, one can imagine decades of tiered driving, where the wealthy get self-driving cars first, and car-less masses turn to the scrambling drivers of Uber and Lyft to catch rides. But such a future seems more likely to end in a deflationary spiral than sustainable growth and equitable distribution of purchasing power. Like the generation traumatized by the Great Depression, millions subjected to reverse auctions for their labor power, forced to price themselves ever lower to beat back the bids of the technologically unemployed, are not going to be in a mood to spend. Learned helplessness, retrenchment, and miserliness are just as likely a consequence as buoyant “re-skilling” and self-reinvention.

    Thus B&M’s optimism about what they call the “peer economy” of platform-arranged production is unconvincing.  A premier platform of digital labor matching—Amazon’s Mechanical Turk—has occasionally driven down the wage for “human intelligence tasks” to a penny each. Scholars like Trebor Scholz and Miriam Cherry have discussed the sociological and legal implications of platforms that try to disclaim all responsibility for labor law or other regulations. Lilly Irani’s important review of 2MA shows just how corrosive platform capitalism has become. “With workers hidden in the technology, programmers can treat [them] like bits of code and continue to think of themselves as builders, not managers,” she observes in a cutting aside on the self-image of many “maker” enthusiasts.

    The “sharing economy” is a glidepath to precarity, accelerating the same fate for labor in general as “music sharing services” sealed for most musicians. The lived experience of many “TaskRabbits,” which B&M boast about using to make charts for their book, cautions against reliance on disintermediation as a key to opportunity in the new digital economy. Sarah Kessler describes making $1.94 an hour labeling images for a researcher who put the task for bid on MTurk.  The median active TaskRabbit in her neighborhood made $120 a week; Kessler cleared $11 an hour on her best day.

    Resistance is building, and may create fairer terms online.  For example, Irani has helped develop a “Turkopticon” to help Turkers rate and rank employers on the site. Both Scholz and Mike Konczal have proposed worker cooperatives as feasible alternatives to Uber, offering drivers both a fairer share of revenues, and more say in their conditions of work. But for now, the peer economy, as organized by Silicon Valley and start-ups, is not an encouraging alternative to traditional employment. It may, in fact, be worse.

    Therefore, I hope B&M are serious when they say “Wild Ideas [are] Welcomed” (245), and mention the following:

    • Provide vouchers for basic necessities. . . .
    • Create a national mutual fund distributing the ownership of capital widely and perhaps inalienably, providing a dividend stream to all citizens and assuring the capital returns do not become too highly concentrated.
    • Depression-era Civilian Conservation Corps to clean up the environment, build infrastructure.

    Speaking of the non-automatable, we could add the Works Progress Administration (WPA) to the CCC suggestion above.  Revalue the arts properly, and the transition may even add to GDP.

    Moses Soyer, “Artists on WPA” (1935). Image source: Smithsonian American Art Museum

    Unfortunately, B&M distance themselves from the ideas, saying, “we include them not necessarily to endorse them, but instead to spur further thinking about what kinds of interventions will be necessary as machines continue to race ahead” (246).  That is problematic, on at least two levels.

    First, a sophisticated discussion of capital should be at the core of an account of automation, not its periphery. The authors are right to call for greater investment in education, infrastructure, and basic services, but they need a more sophisticated account of how that is to be arranged in an era when capital is extraordinarily concentrated, its owners have power over the political process, and most show little to no interest in long-term investment in the skills and abilities of the 99%. Even the purchasing power of the vast majority of consumers is of little import to those who can live off lightly taxed capital gains.

    Second, assuming that “machines continue to race ahead” is a dodge, a refusal to name the responsible parties running the machines.  Someone is designing and purchasing algorithms and robots. Illah Reza Nourbakhsh’s Robot Futures suggests another metaphor:

    Today most nonspecialists have little say in charting the role that robots will play in our lives.  We are simply watching a new version of Star Wars scripted by research and business interests in real time, except that this script will become our actual world. . . . Familiar devices will become more aware, more interactive and more proactive; and entirely new robot creatures will share our spaces, public and private, physical and digital. . . . Eventually, we will need to read what they write, we will have to interact with them to conduct our business transactions, and we will often mediate our friendships through them.  We will even compete with them in sports, at jobs, and in business. [9]

    Nourbakhsh nudges us closer to the truth, focusing on the competitive angle. But the “we” he describes is also inaccurate. There is a group that will never have to “compete” with robots at jobs or in business—rentiers. Too many of them are narrowly focused on how quickly they can replace needy workers with undemanding machines.

    For the rest of us, another question concerning automation is more appropriate: how much can we be stuck with? A black-card-toting bigshot will get the white glove treatment from AmEx; the rest are shunted into automated phone trees. An algorithm determines the shifts of retail and restaurant workers, oblivious to their needs for rest, a living wage, or time with their families.  Automated security guards, police, and prison guards are on the horizon. And for many of the “expelled,” the homines sacri, automation is a matter of life and death: drone technology can keep small planes on their tracks for hours, days, months—as long as it takes to execute orders.

    B&M focus on “brilliant technologies,” rather than the brutal or bumbling instances of automation.  It is fun to imagine a souped-up Roomba making the drudgery of housecleaning a thing of the past.  But domestic robots have been around since 2000, and the median wage-earner in the U.S. does not appear to be on a fast track to a Jetsons-style life of ease.[10] They are just as likely to be targeted by the algorithms of the everyday as they are to be helped by them. Mysterious scoring systems routinely stigmatize persons without their even knowing. They reflect the dark side of automation—and we are in the dark about them, given the protections that trade secrecy law affords their developers.

    IV. Conclusion

    Debates about robots and the workers “struggling to keep up” with them are becoming stereotyped and stale. There is the standard economic narrative of “skill-biased technical change,” which acts more as a tautological, post hoc, retrodictive, just-so story than a coherent explanation of how wages are actually shifting. There is cyberlibertarian cornucopianism, as Google’s Ray Kurzweil and Eric Schmidt promise there is nothing to fear from an automated future. There is dystopianism, whether intended as a self-preventing prophecy, or entertainment. Each side tends to talk past the other, taking for granted assumptions and values that its putative interlocutors reject out of hand.

    Set amidst this grim field, 2MA is a clear advance. B&M are attuned to possibilities for the near and far future, and write about each in accessible and insightful ways.  The authors of The Second Machine Age claim even more for it, billing it as a guide to epochal change in our economy. But it is better understood as the kind of “big idea” book that can name a social problem, underscore its magnitude, and still dodge the elaboration of solutions controversial enough to scare off celebrity blurbers.

    One of 2MA’s blurbers, Clayton Christensen, offers a backhanded compliment that exposes the core weakness of the book. “[L]earners and teachers alike are in a perpetual mode of catching up with what is possible. [The Second Machine Age] frames a future that is genuinely exciting!” gushes Christensen, eager to fold automation into his grand theory of disruption. Such a future may be exciting for someone like Christensen, a millionaire many times over who won’t lack for food, medical care, or housing if his forays fail. But most people do not want to be in “perpetually catching up” mode. They want secure and stable employment, a roof over their heads, decent health care and schooling, and some other accoutrements of middle class life. Meaning is found outside the economic sphere.

    Automation could help stabilize and cheapen the supply of necessities, giving more persons the time and space to enjoy pursuits of their own choosing. Or it could accelerate arms races of various kinds: for money, political power, armaments, spying, stock trading. As long as purchasing power alone—whether of persons or corporations—drives the scope and pace of automation, there is little hope that the “brilliant technologies” B&M describe will reliably lighten burdens that the average person experiences. They may just as easily entrench already great divides.

    All too often, the automation literature is focused on replacing humans, rather than respecting their hopes, duties, and aspirations. A central task of educators, managers, and business leaders should be finding ways to complement a workforce’s existing skills, rather than sweeping that workforce aside. That does not simply mean creating workers with skill sets that better “plug into” the needs of machines, but also, doing the opposite: creating machines that better enhance and respect the abilities and needs of workers.  That would be a “machine age” welcoming for all, rather than one calibrated to reflect and extend the power of machine owners.

    _____

    Frank Pasquale (@FrankPasquale) is a Professor of Law at the University of Maryland Carey School of Law. His recent book, The Black Box Society: The Secret Algorithms that Control Money and Information (Harvard University Press, 2015), develops a social theory of reputation, search, and finance.  He blogs regularly at Concurring Opinions. He has received a commission from Triple Canopy to write and present on the political economy of automation. He is a member of the Council for Big Data, Ethics, and Society, and an Affiliate Fellow of Yale Law School’s Information Society Project. He is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay
    _____

    [1] One can quibble with the idea of automation as necessarily entailing “bounty”—as Yves Smith has repeatedly demonstrated, computer systems can just as easily “crapify” a process once managed well by humans. Nor is “spread” a necessary consequence of automation; well-distributed tools could well counteract it. It is merely a predictable consequence, given current finance and business norms and laws.

    [2] For a definition of “crazy talk,” see Neil Postman, Stupid Talk, Crazy Talk: How We Defeat Ourselves by the Way We Talk and What to Do About It (Delacorte, 1976). For Postman, “stupid talk” can be corrected via facts, whereas “crazy talk” “establishes different purposes and functions than the ones we normally expect.” If we accept the premise of labor as a cost to be minimized, what better to cut than the compensation of the highest paid persons?

    [3] Conversation with Sam Frank at the Swiss Institute, Dec. 16, 2014, sponsored by Triple Canopy.

    [4] In Brynjolfsson, McAfee, and Michael Spence, “New World Order: Labor, Capital, and Ideas in the Power Law Economy,” an article promoting the book. Unfortunately, as with most statements in this vein, B&M&S give us little idea how to identify a “good idea” other than one that “reap[s] huge rewards”—a tautology all too common in economic and business writing.

    [5] Frank Pasquale, The Black Box Society (Harvard University Press, 2015).

    [6] Programs, both in the sense of particular software regimes, and the program of human and technical efforts to collect and analyze the translations that were the critical data enabling the writing of the software programs behind Google Translate.

    [9] Illah Reza Nourbakhsh, Robot Futures (MIT Press, 2013), pp. xix-xx.

    [10] Erwin Prassler and Kazuhiro Kosuge, “Domestic Robotics,” in Bruno Siciliano and Oussama Khatib, eds., Springer Handbook of Robotics (Springer, 2008), p. 1258.

  • Warding Off General Ludd: The Absurdity of “The Luddite Awards”

    Warding Off General Ludd: The Absurdity of “The Luddite Awards”

    By Zachary Loeb
    ~

    Of all the dangers looming over humanity no threat is greater than that posed by the Luddites.

    If the previous sentence seems absurdly hyperbolic, know that it only seems that way because it is, in fact, quite ludicrous. It has been over two hundred years since the historic Luddites rose up against “machinery hurtful to commonality,” but as their leader, the myth-enrobed General Ludd, was never apprehended, there are always those who fear that General Ludd is still out there, waiting with sledgehammer at the ready. True, there have been some activist attempts to revive the spirit of the Luddites (such as the neo-Luddites of the late 1980s and 1990s) – but in the midst of a society enthralled by (and in thrall to) smart phones, start-ups, and large tech companies – to see Luddites lurking in every shadow is a sign of either ideology, paranoia, or both.

    Yet, such an amusing mixture of unabashed pro-technology ideology and anxiety at the possibility of any criticism of technology is on full display in the inaugural “Luddite Awards” presented by The Information Technology and Innovation Foundation (ITIF). Whereas the historic Luddites needed sturdy hammers, and other such implements, to engage in machine breaking, the ITIF seems to believe that the technology of today is much more fragile – it can be smashed into nothingness simply by criticism or even skepticism. As their name suggests, the ITIF is a think tank committed to the celebration of, and advocacy for, technological innovation in its many forms. Thus it should not be surprising that a group committed to technological innovation would be wary of what it perceives as a growing chorus of “neo-Ludditism” that it imagines is planning to pull the plug on innovation. Therefore the ITIF has seen fit to present dishonorable “Luddite Awards” to groups it has deemed insufficiently enamored with innovation; these groups include (amongst others): the Vermont Legislature, the French Government, the organization Free Press, the National Rifle Association, and the Electronic Frontier Foundation. The ITIF “Luddite Awards” may mark the first time that any group has accused the Electronic Frontier Foundation of being a secret harbor for neo-Ludditism.

    Unknown artist, “The Leader of the Luddites,” engraving, 1812 (image source: Wikipedia)

    The full report on “The 2014 ITIF Luddite Awards,” written by the ITIF’s president Robert D. Atkinson, presents the current state of technological innovation as being dangerously precarious. Though technological innovation is currently supplying people with all manner of devices, the ITIF warns against a growing movement born of neo-Ludditism that will aim to put a stop to further innovation. Today’s neo-Ludditism, in the estimation of the ITIF, is distinct from the historic Luddites, and yet the goal of “ideological Ludditism” is still “to ‘smash’ today’s technology.” Granted, adherents of neo-Ludditism are not raiding factories with hammers; instead they are to be found teaching at universities, writing columns in major newspapers, disparaging technology in the media, and otherwise attempting to block the forward movement of progress. According to the ITIF (note the word “all”):

    “what is behind all ideological Ludditism is the general longing for a simpler life from the past—a life with fewer electronics, chemicals, molecules, machines, etc.” (ITIF, 3)

    Though the chorus of Ludditism has, in the ITIF’s reckoning, grown to an unacceptable volume of late, the foundation is quick to emphasize that Ludditism is nothing new. What is new, as the ITIF puts it, is that these nefarious Luddite views have, apparently, moved from the margins and infected the larger public discourse around technology. A diverse array of figures and groups – from environmentalist Bill McKibben, conservative thinker James Pethokoukis, economist Paul Krugman, and writers for Smithsonian Magazine, to foundations like Free Press, the EFF and the NRA – are all tarred with the epithet “Luddite.” The neo-Luddites, according to ITIF, issue warnings against unmitigated acceptance of innovation when they bring up environmental concerns, mention the possibility of jobs being displaced by technology, write somewhat approvingly of the historic Luddites, or advocate for Net Neutrality.

    While the ITIF holds to the popular, if historically inaccurate, definition of Luddite as “one who resists technological change,” their awards make clear that the ITIF would like to add to this definition the words “or even mildly opposes any technological innovation.” The ten groups awarded “Luddite Awards” are a mixture of non-profit public advocacy organizations and various governments – though the ITIF report seems to revel in attacking Bill McKibben, he was not deemed worthy of an award (maybe next year). The awardees include: the NRA for opposing smart guns; the Vermont legislature for requiring the labeling of GMOs; Free Press for its support of net neutrality, which is deemed an affront to “smarter broadband networks”; news reports which “claim that ‘robots are killing jobs’”; the EFF, which is cited as it “opposes Health IT”; and various governments in several states, which are reprimanded for “cracking down” on companies like Airbnb, Uber and Lyft. The ten recipients of Luddite awards may be quite surprised to find that they have been deemed adherents of neo-Ludditism, but in the view of the ITIF the actions these groups have taken indicate that General Ludd is slyly guiding their moves. Though the Luddite Awards may have a somewhat silly feeling, the ITIF cautions that the threat is serious, as the report ominously concludes:

    “But while we can’t stop the Luddites from engaging in their anti-progress, anti-innovation activities, we can recognize them for what they are: actions and ideas that are profoundly anti-progress, that if followed would mean a our children [sic] will live lives as adults nowhere near as good as the lives they could live if we instead embraced, rather than fought innovation.” (ITIF, 19)

    Credit is due to the ITIF for their ideological consistency. In putting together their list of recipients for the inaugural “Luddite Awards,” the foundation demonstrates that it is fully committed to technological innovation and unflagging in its support of that cause. Nevertheless, while the awards (and in particular the report accompanying the awards) may be internally ideologically consistent, the report is also a work of dubious historical scholarship and comical neoliberal paranoia, and it evinces a profound anti-democratic tendency. Though the ITIF awards aim to target what it perceives as “neo-Ludditism,” even a cursory glance at their awardees makes it abundantly clear that what the organization actually opposes is any attempt to regulate technology undertaken by a government, or advocated for by a public interest group. Even in a country as regulation-averse as the contemporary United States it is still safer to defame Luddites than to simply state that you reject regulation. The ITIF carefully cloaks its ideology in the aura of terms with positive connotations such as “innovation,” “progress,” and “freedom,” but these terms are only so much fresh paint over the same “free market” ideology that only values innovation, progress and freedom when they are in the service of neoliberal economic policies. Nowhere does the ITIF engage seriously with the questions of “who profits from this innovation?” “who benefits from this progress?” “is this ‘freedom’ equally distributed or does it reinforce existing inequities?” – the terms are used as ideological sledgehammers far blunter than any tool the Luddites ever used. This raw ideology is on perfect display in the very opening line of the award announcement, which reads:

    “Technological innovation is the wellspring of human progress, bringing higher standards of living, improved health, a cleaner environment, increased access to information and many other benefits.” (ITIF, 1)

    One can only applaud the ITIF for so clearly laying out their ideology at the outset, and one can only raise a skeptical eyebrow at this obvious case of the logical fallacy of Begging the Question. To claim that “technological innovation is the wellspring of human progress” is an assumption that demands proof, it is not a conclusion in and of itself. While arguments can certainly be made to support this assumption there is little in the ITIF report that suggests the ITIF is willing to engage in the type of critical reflection, which would be necessary for successfully supporting this argument (though, to be fair, the ITIF has published many other reports some of which may better lay out this claim). The further conclusions that such innovation brings “higher standards of living, improved health, a cleaner environment” and so forth are further assumptions that require proof – and in the process of demonstrating this proof one is forced (if engaging in honest argumentation) to recognize the validity of competing claims. Particularly as many of the “benefits” ITIF seeks to celebrate do not accrue evenly. True, an argument can be made that technological innovation has an important role to play in ushering in a “cleaner environment” – but tell that to somebody who lives next to an e-waste dump where mountains of the now obsolete detritus of “technological innovation” leach toxins into the soil. The ITIF report is filled with such pleasant sounding “common sense” technological assumptions that have been, at the very least, rendered highly problematic by serious works of inquiry and scholarship in the field of the history of technology. As classic works in the scholarly literature of the Science and Technology Studies field, such as Ruth Schwartz Cowan’s More Work for Mother, make clear “technological innovation” does not always live up to its claims. 
Granted, it is easy to imagine that the ITIF would offer a retort that simply dismisses all such scholarship as tainted by neo-Ludditism. Yet recognizing that not all “innovation” is a pure blessing does not represent a rejection of “innovation” as such – it just recognizes that “innovation” is only one amongst many competing values a society must try to balance.

    Instead of engaging with critics of “technological innovation” in good faith, the ITIF jumps from one logical fallacy to another, trading circular reasoning for attacking the advocate. The author of the ITIF report seems to delight in pillorying Bill McKibben, but also aims barbs at scholars like David Noble and Neil Postman for exposing impressionable college-aged minds to their “neo-Luddite” biases. That the ITIF seems unconcerned with business schools, start-up culture, and a “culture industry” that inculcates an adoration for “technological innovation” in the same “impressionable minds” is, obviously, not commented upon. However, if a foundation is attempting to argue that universities are currently a hotbed of “neo-Ludditism,” then it is questionable why the ITIF should choose to single out for special invective two professors who are both deceased – Postman died in 2003 and David Noble died in 2010.

    It almost seems as if the ITIF report cites serious humanistic critics of “technological innovation” as a way to make it seem as though it has actually wrestled with the thought of such individuals. After all, the ITIF report deigns to mention two of the most prominent thinkers in the theoretical legacy of the critique of technology, Lewis Mumford and Jacques Ellul, but it only mentions them in order to dismiss them out of hand. The irony, naturally, is that thinkers like Mumford and Ellul (to say nothing of Postman and Noble) would not have been surprised in the least by the ITIF report, as their critiques of technology also included a recognition of the ways that the dominant forces in technological society (be it in the form of Ellul’s “Technique” or Mumford’s “megamachine”) depended upon the ideological fealty of those who saw their own best interests as aligning with those of the new technological regimes of power. Indeed, the ideological celebrants of technology have become a sort of new priesthood for the religion of technology, though as Mumford quipped in Art and Technics:

    “If you fall in love with a machine there is something wrong with your love-life. If you worship a machine there is something wrong with your religion.” (Art and Technics, 81)

    Trade out the word “machine” in the above quotation with “technological innovation” and it applies perfectly to the ITIF awards document. And yet, playful gibes aside, there are many more (many, many more) barbs that one can imagine Mumford directing at the ITIF. As Mumford wrote in The Pentagon of Power:

    “Consistently the agents of the megamachine act as if their only responsibility were to the power system itself. The interests and demands of the populations subjected to the megamachine are not only unheeded but deliberately flouted.” (The Pentagon of Power, 271)

    The ITIF “Luddite Awards” are a pure demonstration of this deliberate flouting of “the interests and demands of the populations” who find themselves always on the receiving end of “technological innovation.” For the ITIF report shows an almost startling disregard for the concerns of “everyday people” and though the ITIF is a proudly nonpartisan organization the report demonstrates a disturbingly anti-democratic tendency. That the group does not lean heavily toward Democrats or Republicans only demonstrates the degree to which both parties eat from the same neoliberal trough – routinely filled with fresh ideological slop by think tanks like ITIF. Groups that advocate in the interest of their supporters in the public sphere (such as Free Press, the EFF, and the NRA {yes, even them}) are treated as interlopers worthy of mockery for having the audacity to raise concerns; similarly elected governmental bodies are berated for daring to pass timid regulations. The shape of the “ideal society” that one detects in the ITIF report is one wherein “technological innovation” knows no limits, and encounters no opposition, even if these limits are relatively weak regulations or simply citizens daring to voice a contrary opinion – consequences be damned! On the high-speed societal train of “technological innovation” the ITIF confuses a few groups asking for a slight reduction of speed with groups threatening to derail the train.

    Thus the key problem of the ITIF “Luddite Awards” emerges – and it is not simply that the ITIF continues to use Luddite as an epithet – it is that the ITIF seems willfully ignorant of any ethical imperatives other than a broadly defined love of “technological innovation.” In handing out “Luddite Awards” the ITIF reveals that it recognizes “technological innovation” as the crowning example of “the good.” It is not simply one “good” amongst many that must carefully compromise with other values (such as privacy, environmental concerns, labor issues, and so forth); rather it is the definitive and ultimate case of “the good.” This is not to claim that “technological innovation” is not amongst the values that represent “the good,” but it is not the only value – treating it as such leads to confusing (to borrow a formulation from Lewis Mumford) “the goods life with the good life.” By fully privileging “technological innovation” the ITIF treats other values and ethical claims as if they are to be discarded – the philosopher Hans Jonas’s The Imperative of Responsibility (which advocated for a cautious approach to technological innovation that emphasized the potential risks inherent in new technologies) is therefore tossed out the window to be replaced by “the imperative of innovation” along with a stack of business books and perhaps an Ayn Rand novel, or two, for good measure.

    Indeed, responsibility for the negative impacts of innovation is shrugged off in the ITIF awards, even as many of the awardees (such as the various governments) wrestle with the responsibility that tech companies seem to so happily flout. The disrupters hate being disrupted. Furthermore, as should come as no surprise, the ITIF report maintains an aura that smells strongly of colonialism and disregard for the difficulties faced by those who are “disrupted” by “technological innovation.” The ITIF may want to reprimand organizations for trying to gently slow (which is not the same as stopping) certain forms of “technological innovation,” but the report has nothing to say about those who work mining the coltan that powers so many innovative devices, no concern for the factory workers who assemble these devices, and – of course – nothing to say about e-waste. Evidently to think such things are worthy of concern, to even raise the issue of consequences, is a sign of Ludditism. The ITIF holds out the promise of “better days ahead” and shows no concern for those whose lives must be trampled upon in the process. Granted, it is easy to ignore such issues when you work for a think tank in Washington DC and not as a coltan miner, a device assembler, a resident near an e-waste dump, or an individual whose job has just been automated.

    The ITIF “Luddite Awards” are yet another installment of the tech world/business press game of “Who’s Afraid of General Ludd” in which the group shouting the word “Luddite” at all opponents reveals that it has a less nuanced understanding of technology than was had by the historic Luddites. After all, the Luddites were not opposed to technology as such, nor were they opposed to “technological innovation,” rather, as E.P. Thompson describes in The Making of the English Working Class:

    “What was at issue was the ‘freedom’ of the capitalist to destroy the customs of the trade, whether by new machinery, by the factory-system, or by unrestricted competition, beating-down wages, undercutting his rivals, and undermining standards of craftsmanship…They saw laissez faire, not as freedom but as ‘foul Imposition.’ They could see no ‘natural law’ by which one man, or a few men, could engage in practices which brought manifest injury to their fellows.” (Thompson, 548)

    What is at issue in the “Luddite Awards” is the “freedom” of “technological innovators” (the same-old “capitalists”) to force their priorities upon everybody else – and while the ITIF may want to applaud such “freedom,” it is clear that they do not intend to extend such freedom to the rest of the population. The fear that can be detected in the ITIF “Luddite Awards” is not ultimately directed at the award recipients, but at an aspect of the historic Luddites that the report seems keen on forgetting: namely, that the Luddites organized a mass movement that enjoyed incredible popular support – which was why it was ultimately the military (not “seeing the light” of “technological innovation”) that was required to bring the Luddite uprisings to a halt. While it is questionable whether many of the recipients of “Luddite Awards” will view the award as an honor, the term “Luddite” can only be seen as a fantastic compliment when it is used as a synonym for a person (or group) that dares to be concerned with ethical and democratic values other than a simple fanatical allegiance to “technological innovation.” Indeed, what the ITIF “Luddite Awards” demonstrate is the continuing veracity of the philosopher Günther Anders’s statement, in the second volume of The Obsolescence of Man, that:

    “In this situation, it is no use to brandish scornful words like ‘Luddites’. If there is anything that deserves scorn it is, to the contrary, today’s scornful use of the term, ‘Luddite’ since this scorn…is currently more obsolete than the allegedly obsolete Luddism.” (Anders, Introduction – Section 7)

    After all, as Anders might have reminded the people at ITIF: gas chambers, depleted uranium shells, and nuclear weapons are also “technological innovations.”

    Works Cited

    • Anders, Günther. The Obsolescence of Man: Volume II – On the Destruction of Life in the Epoch of the Third Industrial Revolution. Translated by Josep Monter Pérez. Valencia: Pre-Textos, 2011.
    • Atkinson, Robert D. The 2014 Luddite Awards. Washington, DC: Information Technology and Innovation Foundation, January 2015.
    • Mumford, Lewis. The Myth of the Machine, Volume 2: The Pentagon of Power. New York: Harvest/Harcourt Brace Jovanovich, 1970.
    • Mumford, Lewis. Art and Technics. New York: Columbia University Press, 2000.
    • Thompson, E.P. The Making of the English Working Class. New York: Vintage Books, 1966.
    • Not cited but worth a look – Eric Hobsbawm’s classic article “The Machine Breakers.”


    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, alternative forms of technology, and libraries as models of resistance. Using the moniker “The Luddbrarian,” Loeb writes at the blog LibrarianShipwreck, where this post first appeared. He is a frequent contributor to The b2 Review Digital Studies section.
