  • Richard Hill – Too Big to Be (Review of Wu, The Curse of Bigness: Antitrust in the New Gilded Age)

    a review of Timothy Wu, The Curse of Bigness: Antitrust in the New Gilded Age (Random House/Columbia Global Reports, 2018)

    by Richard Hill

    ~

    Tim Wu’s brilliant new book analyses in detail one specific aspect and cause of the dominance of big companies in general and big tech companies in particular: the current unwillingness to modernize antitrust law to deal with concentration in the provision of key Internet services. Wu is a professor at Columbia Law School and a contributing opinion writer for the New York Times. He is best known for his work on net neutrality theory. He is the author of the books The Master Switch and The Attention Merchants, along with Network Neutrality, Broadband Discrimination, and other works. In 2013 he was named one of America’s 100 Most Influential Lawyers, and in 2017 he was named to the American Academy of Arts and Sciences.

    What are the consequences of allowing unrestricted growth of concentrated private power, and abandoning most curbs on anticompetitive conduct? As Wu masterfully reminds us:

    We have managed to recreate both the economics and politics of a century ago – the first Gilded Age – and remain in grave danger of repeating more of the signature errors of the twentieth century. As that era has taught us, extreme economic concentration yields gross inequality and material suffering, feeding an appetite for nationalistic and extremist leadership. Yet, as if blind to the greatest lessons of the last century, we are going down the same path. If we learned one thing from the Gilded Age, it should have been this: The road to fascism and dictatorship is paved with failures of economic policy to serve the needs of the general public. (14)

    While increasing concentration and its negative effects on social equity are a general phenomenon, the trend is particularly concerning with respect to the Internet: “Most visible in our daily lives is the great power of the tech platforms, especially Google, Facebook, and Amazon, who have gained extraordinary power over our lives. With this centralization of private power has come a renewed concentration of wealth, and a wide gap between the rich and poor” (15). These trends have very real political effects: “The concentration of wealth and power has helped transform and radicalize electoral politics. As in the Gilded Age, a disaffected and declining middle class has come to support radically anti-corporate and nationalist candidates, catering to a discontent that transcends party lines” (15). “What we must realize is that, once again, we face what Louis Brandeis called the ‘Curse of Bigness,’ which, as he warned, represents a profound threat to democracy itself. What else can one say about a time when we simply accept that industry will have far greater influence over elections and lawmaking than mere citizens?” (15). And, I would add, what have we come to when some advocate that corporations should have veto power over public policies that affect all of us?

    Surely it is, or should be, obvious that current extreme levels of concentration are not compatible with the premises of social and economic equity, free competition, or democracy. And that “the classic antidote to bigness – the antitrust and other antimonopoly laws – might be recovered and updated to face the challenges of our times” (16). Those who doubt these propositions should read Wu’s book carefully, because he shows that they are true. My only suggestion for improvement would be to add a more detailed explanation of how network effects interact with economies of scale to favour concentration in the ICT industry in general, and in telecommunications and the Internet in particular. But this topic is well explained in other works.

    As Wu points out, antitrust law must not be restricted (as it is at present in the USA) “to deal with one very narrow type of harm: higher prices to consumers” (17). On the contrary, “It needs better tools to assess new forms of market power, to assess macroeconomic arguments, and to take seriously the link between industrial concentration and political influence” (18). The same has been said by other scholars (e.g. here, here, here and here), by a newspaper, an advocacy group, a commission of the European Parliament, a group of European industries, a well-known academic, and even by a plutocrat who benefitted from the current regime.

    Do we have a choice? Can we continue to pretend that we don’t need to adapt antitrust law to rein in the excessive power of the Internet giants? No: “The alternative is not appealing. Over the twentieth century, nations that failed to control private power and attend to the economic needs of their citizens faced the rise of strongmen who promised their citizens a more immediate deliverance from economic woes” (18). (I would argue that any resemblance to the election of US President Trump, to the British vote to leave the European Union, and to the rise of so-called populist parties in several European countries [e.g. Hungary, Italy, Poland, Sweden] is not coincidental).

    Chapter One of Wu’s book, “The Monopolization Movement,” provides historical background, reminding us that from the late nineteenth through the early twentieth century, dominant, sector-specific monopolies emerged and were thought to be an appropriate way to structure economic activity. In the USA, in the early decades of the twentieth century, under the Trust Movement, essentially every area of major industrial activity was controlled or influenced by a single man (but not the same man for each area), e.g. Rockefeller and Morgan. “In the same way that Silicon Valley’s Peter Thiel today argues that monopoly ‘drives progress’ and that ‘competition is for losers,’ adherents to the Trust Movement thought Adam Smith’s fierce competition had no place in a modern, industrialized economy” (26). This system rapidly proved to be dysfunctional: “There was a new divide between the giant corporation and its workers, leading to strikes, violence, and a constant threat of class warfare” (30). Popular resistance mobilized in both Europe and the USA, and it led to the adoption of the first antitrust laws.

    Chapter Two, “The Right to Live, and Not Merely to Exist,” reminds us that US Supreme Court Justice Louis Brandeis “really cared about … the economic conditions under which life is lived, and the effects of the economy on one’s character and on the nation’s soul” (33). The chapter outlines Brandeis’ career and what motivated him to combat monopolies.

    In Chapter Three, “The Trustbuster,” Wu explains how the 1901 assassination of US President McKinley, a devout supporter of unrestricted laissez-faire capitalism (“let well enough alone”, reminiscent of today’s calls for government to “do no harm” through regulation, and not to “fix it if it isn’t broken”), resulted in a fundamental change in US economic policy, when Theodore Roosevelt succeeded him. Roosevelt’s “determination that the public was ruler over the corporation, and not vice versa, would make him the single most important advocate of a political antitrust law” (47). He took on the great US monopolists of the time by enforcing the antitrust laws. “To Roosevelt, economic policy did not form an exception to popular rule, and he viewed the seizure of economic policy by Wall Street and trust management as a serious corruption of the democratic system. He also understood, as we should today, that ignoring economic misery and refusing to give the public what they wanted would drive a demand for more extreme solutions, like Marxist or anarchist revolution” (49). Subsequent US presidents and authorities continued to be “trust busters” through the 1990s. At the time, it was understood that antitrust was not just an economic issue, but also a political issue: “power that controls the economy should be in the hands of elected representatives of the people, not in the hands of an industrial oligarchy” (54, citing Justice William Douglas). As we all know, “Increased industrial concentration predictably yields increased influence over political outcomes for corporations and business interests, as opposed to citizens or the public” (55). Wu goes on to explain why and how concentration exacerbates the influence of private companies on public policies and undermines democracy (that is, the rule of the people, by the people, for the people). And he outlines why and how Standard Oil was broken up (as opposed to becoming a government-regulated monopoly). The chapter then explains why very large companies might experience diseconomies of scale, that is, reduced efficiency. So very large companies compensate for their inefficiency by developing and exploiting “a different kind of advantages having less to do with efficiencies of operation, and more to do with its ability to wield economic and political power, by itself or [in] conjunction with others. In other words, a firm may not actually become more efficient as it gets larger, but may become better at raising prices or keeping out competitors” (71). Wu explains how this is done in practice. The rest of this chapter summarizes the impact of the US presidential election of 1912 on US antitrust actions.

    Chapter Four, “Peak Antitrust and the Chicago School,” explains how, during the decades after World War II, strong antitrust laws were viewed as an essential component of democracy; and how the European Community (which later became the European Union) adopted antitrust laws modelled on those of the USA. However, in the mid-1960s, scholars at the University of Chicago (in particular Robert Bork) developed the theory that antitrust measures were meant only to protect consumer welfare, and thus no antitrust actions could be taken unless there was evidence that consumers were being harmed, that is, that a dominant company was raising prices. Harm to competitors or suppliers was no longer sufficient for antitrust enforcement. As Wu shows, this “was really laissez-faire reincarnated.”

    Chapter Five, “The Last of the Big Cases,” discusses two of the last really large US antitrust cases. The first was the breakup of the regulated de facto telephone monopoly, AT&T, initiated in 1974. The second was the case against Microsoft, which started in 1998 and ended in 2001 with a settlement that many consider a negative turning point in US antitrust enforcement. (A third big case, the 1969–1982 case against IBM, is discussed in Chapter Six.)

    Chapter Six, “Chicago Triumphant,” documents how the US Supreme Court adopted Bork’s “consumer welfare” theory of antitrust, leading to weak enforcement. As a consequence, “In the United States, there have been no trustbusting or ‘big cases’ for nearly twenty years: no cases targeting an industry-spanning monopolist or super-monopolist, seeking the goal of breakup” (110). Thus, “In a run that lasted some two decades, American industry reached levels of industry concentration arguably unseen since the original Trust era. A full 75 percent of industries witnessed increased concentration from the years 1997 to 2012” (115). Wu gives concrete examples: the old AT&T monopoly, which had been broken up, has reconstituted itself; there are only three large US airlines; there are three regional monopolies for cable TV; etc. But the greatest failure “was surely that which allowed the almost entirely uninhibited consolidation of the tech industry into a new class of monopolists” (118).

    Chapter Seven, “The Rise of the Tech Trusts,” explains how the Internet morphed from a very competitive environment into one dominated by large companies that buy up any threatening competitor. “When a dominant firm buys a nascent challenger, alarm bells are supposed to ring. Yet both American and European regulators found themselves unable to find anything wrong with the takeover [of Instagram by Facebook]” (122).

    The Conclusion, “A Neo-Brandeisian Agenda,” outlines Wu’s thoughts on how to address current issues regarding dominant market power. These include renewing the well-known practice of reviewing mergers; opening up the merger review process to public comment; renewing the practice of bringing major antitrust actions against the biggest companies; breaking up the biggest monopolies; adopting the market investigation law and practices of the United Kingdom; and recognizing that the goal of antitrust is not just to protect consumers against high prices, but also to protect competition per se, that is, to protect competitors, suppliers, and democracy itself. “By providing checks on monopoly and limiting private concentration of economic power, the antitrust law can maintain and support a different economic structure than the one we have now. It can give humans a fighting chance against corporations, and free the political process from invisible government. But to turn the ship, as the leaders of the Progressive era did, will require an acute sensitivity to the dangers of the current path, the growing threats to the Constitutional order, and the potential of rebuilding a nation that actually lives up to its greatest ideals” (139).

    In other words, something is rotten in the state of the Internet: it is marked by the “collection and exploitation of personal data”; it has “recently been used to erode privacy and to increase the concentration of economic power, leading to increasing income inequalities”; and it has led to an “erosion of the press, leading to erosion of democracy.” These developments are due to the fact that “US policies that ostensibly promote the free flow of information around the world, the right of all people to connect to the Internet, and free speech, are in reality policies that have, by design, furthered the geo-economic and geo-political goals of the US, including its military goals, its imperialist tendencies, and the interests of large private companies”; and to the fact that “vibrant government institutions deliberately transferred power to US corporations in order to further US geo-economical and geo-political goals.”

    Wu’s call for action is not just opportune, but necessary and important; at the same time, it is not sufficient.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2o Review Digital Studies magazine.

  • Rob Hunter — The Digital Turn and the Ethical Turn: Depoliticization in Digital Practice and Political Theory

    Rob Hunter [*]

    Introduction

    In official, commercial, and activist discourses, networked computing is frequently heralded for establishing a field of inclusive, participatory political activity. It is taken to be the latest iteration of, or a standard-bearer for, “technology”: an autonomous force penetrating the social world, an independent variable whose magnitude may not directly be modified and whose effects are or ought to be welcomed. The internet, its component techniques and infrastructures, and related modalities of computing are often supposed to be accelerating and multiplying various aspects of the ideological lynchpin of the neoliberal order: individual sovereignty.[1] The internet is hailed as the dawn of a new communication age, one in which democracy is to be reinvigorated and expanded through the publicity and interconnectivity made possible by new forms of networked relations among informed consumers.

    Composed of consumer choice, intersubjective rationality, and the activity of the autonomous subject, such sovereignty also forms the basis of many strands of contemporary ethical thought—which has increasingly come to displace rival conceptions of political thought in sectors of the Anglophone academy. In this essay, I focus on two turns and their parallels—the turn to the digital in commerce, politics, and society; and the turn to the ethical in professional and elite thought about how such domains should be ordered. I approach the digital turn through the case of the free and open source software movements. These movements are concerned with sustaining a publicly-available information commons through certain technical and juridical approaches to software development and deployment. The community of free, libre, and open source (FLOSS) developers and maintainers is one of the more consequential spaces in which actors frequently endorse the claim that the digital turn precipitates an unleashing of democratic potential in the form of improved deliberation, equalized access to information, networks, and institutions, and a leveling of hierarchies of authority. I approach the ethical turn through an examination of the political theory of democracy, particularly as it has developed in the work of theorists of deliberative democracy like Jürgen Habermas and John Rawls.

    By FLOSS I refer, more or less interchangeably, to software that is licensed such that it may be freely used, modified, and distributed, and whose source code is similarly available so that it may be inspected or changed by anyone (Free Software Foundation 2018). (It stands in contradistinction to “closed source” or proprietary software that is typically produced and sold by large commercial firms.) The agglomeration of “free,” “libre,” and “open source” reflects the multiple ideological geneses of non-proprietary software. Briefly, “free” or “libre” software is so named because, following Stallman’s (2015) original injunction in 1985, the conditions of its distribution forbid rendering the code (or derivative code) proprietary for the sake of maximizing the freedom of downstream coders and users to do as they see fit with it. The signifier “free” primarily connotes the absence of restrictions on use, modification, and distribution, rather than considerations of cost or exchange value. Of crucial importance to the free software movement was the adoption of “copyleft” licensure of software, in which copies of software are freely distributed with the restriction that subsequent users and distributors not impose additional restrictions upon subsequent distribution. As Stallman has noted, copyleft is built on a deliberate contradiction of copyright: “Copyleft uses copyright law, but flips it over to serve the opposite of its usual purpose: instead of a means of privatizing software, it becomes a means of keeping software free” (Stallman 2002, 22). Avowed members of the free software movement also conceive of free software’s importance not just in technical terms but in moral terms as well. For them, the free software ecosystem is a moral-pedagogical space in which values are reproduced and developers’ skills are fostered through unfettered access to free software (Kelty 2008).
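
    The juridical mechanism lends itself to a concrete sketch. The following is a minimal, hypothetical illustration, assuming a single-file program carrying the standard per-file notice that the GNU GPL recommends; the file name, program, and author placeholder are illustrative, not drawn from the essay.

    ```python
    # hello.py -- a hypothetical program distributed under copyleft terms.
    # The notice below is the standard header placed at the top of
    # GPL-licensed source files: it grants downstream users the freedom
    # to run, study, modify, and redistribute the code, while the license
    # it invokes forbids imposing further restrictions on anyone downstream.
    #
    # Copyright (C) 2018  <name of author>
    #
    # This program is free software: you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation, either version 3 of the License, or
    # (at your option) any later version.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    # GNU General Public License for more details.

    print("Hello, copyleft.")
    ```

    The notice makes Stallman’s inversion visible: the author’s copyright is the legal hook, but it is exercised to keep the code and its derivatives free rather than to privatize them, since every redistributor is bound not to add restrictions of their own.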

    “Open source” software derives its name from a push—years after Stallman’s cri de coeur—that stressed non-proprietary software’s potential in the business world. Advocates of the open source framing downplayed free software’s origins in the libertarian-individualist ethos of the early free software movement. They discarded its rhetorics of individual freedom in favor of the invocation of “innovation,” “openness,” and neoliberal subjectivity. Toward the end of the twentieth century, open source activists “partially codified this philosophical frame by establishing a clear priority for pragmatic technical achievement over ideology (which was more central to the culture of the Free Software Foundation)” (Weber 2005, 165). In the current moment, antagonisms between proponents of the respective terminologies are comparatively muted. In many FLOSS developer spaces, the most commonly-avowed view is that the practical upshot of the differences in emphasis between “free” and “open source” is unimportant: the typical user or producer doesn’t care, and the immediate social consequences of the distinction are close to nil. (It is noteworthy that this framing is fully compatible with the self-consciously technicist, pragmatic framing of the open source movement, less so with the ideological commitments of the free software movement. Whether or not it is the case at the micro level that free software and open source software retain meaningfully different political valences is beyond the scope of this essay, although it is possible that voices welcoming an elision of “free” and “open source” do protest too much.)

    FLOSS is situated at the intersection of several trends and tendencies. It is a body of technical practice (hacking or coding); it is also a political-ethical formation. FLOSS is an integral component of capitalist software development—but it is also a hobbyist’s toy and a creator’s instrument (Kelty 2008), a would-be entrepreneur’s tool (Weber 2005), and an increasingly essential piece of academic kit (see, e.g., Coleman 2012). A generation of scholarship in anthropology, cultural studies, history, sociology, and other related fields has established that FLOSS is an appropriate object of study not only because its participants are typically invested in the internet-as-emancipatory-technology narrative, but also because free and open source software development has been profoundly consequential for both the cultural and technical character of the present-day information commons.

    In the remainder of the essay, I gesture at a critique of this view of the internet’s alleged emancipatory potential by examining its underlying assumptions and the theory of democracy to which it adheres. This theory trades on the idea that democracy is an ethical practice, one that achieves its fullest expression in the absence of coercion and the promotion of deliberative norms. This approach to thinking about democracy has numerous analogues in current debates in political theory and political philosophy. In prevailing models of liberal politics, institutions and ethical constraints are privileged over concepts like organization, contestation, and—above all—the pursuit and exercise of power. Indeed, within contemporary liberal political thought it is sometimes difficult to discern the activity of thinking about politics as such. I do not argue here for the merits of contestatory democracy, nor do I conceal an unease with the depoliticizing tendencies of deliberative democracy, or with the tendency to substitute the ethical for the political. Instead I draw out the theoretical commonalities between the emergence of deliberative democracy and the turn toward the digital in relations of production and reproduction. I suggest that critiques of the shortcomings of liberal thought regarding political activity and political persuasion are also applicable to the social and political claims and propositions that undergird the strategies and rhetorics of FLOSS. The hierarchies of commitment that one finds in contemporary liberalism may be detected in FLOSS thought as well. Liberalism typically prioritizes intersubjectivity over mass political action and contestation. Similarly, FLOSS rhetoric focuses on ethical persuasion rather than the pursuit of influence and social power such that proprietarian computing may be resisted or challenged. Liberalism also prioritizes property relations over other social relations. The FLOSS movement similarly retains a stark commitment to the priority of liberal property relations and to the idea of personal property in digital commodities (Pedersen 2010).

    In the context of FLOSS and the information commons, a depoliticized theory of democracy fails to attend to the dynamics of power, and to crucial considerations of political economy in communications and computing. An insistence on conceiving of democracy as an ethical aspiration or as a moral ideal—rather than as a practice of mass politics with a given historical and institutional specificity—serves to obscure crucial features of the internet as a cultural and social phenomenon. It also grants an illusory warrant for ideological claims to the effect that computing and internet-mediated communication constitute meaningful and consequential forms of civic participation and political engagement. As the ethical displaces the political, so the technological displaces the ethical. In the process, the workings of power are obscured, the ideological trappings of technologically-mediated domination are mystified, and the social forms that are peculiar to internet subcultures are naturalized as typifying the form of social organization that all democrats ought to seek after.

    In identifying the theoretical affinities between the liberalism of the digital turn and the ethical turn in liberal political theory, I hope to contribute to an enriched, interdisciplinary understanding of the available spaces for investigation and research with respect to emerging trends in digital life. The social relations that are both constituted by and constitutive of the worlds of software, networked communication, and pervasive computing are rightly becoming the objects of sustained study within disparate fields in humanistic disciplines. This essay aims at provoking new questions in such study by examining the theoretical linkages between the digital turn and the ethical turn.

    The Digital Turn

    The internet—considered in the broadest possible sense, as something composed of networks and terminals through which various forms of sociality are mediated electronically—attracts, of course, no small amount of academic, elite, and popular attention. A familiar story tends to arise out of these attentions. The digital turn ushers in the promise of digital democracy: an expansion of opportunities for participation in politics (Klein 1999), and a revolutionizing of communications that connects individuals in networks (Castells 2010) of informed and engaged consumers and producers of non-material content (Shirky 2008). Dissent would prove impossible to stifle, as information—endowed with its own virtual, composite personality, and empowered by sophisticated technologies—would both want and be able to be free. “The Net interprets censorship as damage and routes around it” (as cited in Reagle 1999) is a famous—and possibly apocryphal—variant of this piece of folk wisdom. Pervasive networked computing ensures that citizens will be self-mobilizing in their participation in politics and in their scrutiny of corruption and rights abuses. Capital, meanwhile, can anticipate a new suite of needs to be satisfied through informational commodities. The only losers are governments that, despite enthusiastic rhetoric about an “information superhighway,” are unable to keep pace with technological growth, or with popular adoption of decentralized communications media. Their capacities to restrict or control discourse will be crippled; their control over their own populations will diminish in proportion to the growth of electronically-mediated communication.[2]

    Much of the excitement over the internet is freighted with neoliberal (Brown 2005) ideology, either in implicit or explicit terms. On this view, liberalism’s focus on the unfettered movement of commodities and the unrestricted consumption activities of individuals will find its final and definitive instantiation in a world of digital objects (with a marginal cost approaching zero) and the satisfaction of consumer needs through novel and innovative patterns of distribution. The cultural commons may be reclaimed through transformations of digital labor—social, collaborative, and remix-friendly (Benkler 2006). Problems of production can be solved through increasingly sophisticated chains of logistics (Bonacich and Wilson 2008), finally fulfilling the unrealized cybernetic dreams of planners and futurists in the twentieth century.[3] Political superintendence of the market—and many other social fields—will be rendered redundant by rapid, unmediated feedback mechanisms linking producers and consumers. This contradictory utopia will achieve a non-coercive panopticon of full information, made possible through the endless concatenation of individual decisions to consume, evaluate, and generate information (Shirky 2008).

    This prediction has not been vindicated. Contemporary observers of the internet age do not typically describe it in terms of democratic vistas and cultural efflorescence. They are likelier to examine it in terms of the extension of technologies of control and surveillance, and in terms of the subsumption of sociality under the regime of neoliberal capital accumulation. Indeed, the digital turn follows a trajectory similar to that of the neoliberal turn in governance. The neoliberal turn has enhanced rather than undermined the capacities of the state; those capacities are directed not at the provision of public goods and social services but rather at coercive security and labor discipline. The digital turn’s course has decidedly not been one of individual empowerment and an expansion of the scope of participatory forms of democratic politics. Instead, networked computing is now a profit center for a small number of titanic capitals. Certainly, the revolution in communications technology has influenced social relations. But the political consequences of that influence do not constitute a profound transformation and extension of democracy (Hindman 2008). Nor are the consequences of the revolution in communications uniformly emancipatory (Morozov 2011). More generally, the subsumption of greater swathes of sociality within the logics of computing presents the risk of the enclosure of public information, and of the extension of the capabilities of the powerful to surveil and coerce others while evading public supervision (Drahos 2002, Golumbia 2009, Pasquale 2015).

    Extensive critiques of “the Californian ideology” (Barbrook and Cameron 2002), renascent “cyberlibertarianism” (Dahlberg 2010) and its affinities with longstanding currents in right-wing thought (Golumbia 2013), and related ideological formations are all ready to hand. The digital turn is of course not characterized by a singular politics. However, the hegemonic political tendency associated with it may be fairly described as a complex of libertarian ideology, neoliberal political economy, and antistatist rhetoric. The material substrate for this complex is the burgeoning arena of capitals pursuing profits through the exploitation of “digital labor” (Fuchs 2014). Such labor occurs in software development, but also in hardware manufacturing; the buying, selling, and licensing of intellectual property; and the extractive industries providing the necessary mineral ores, rare earth metals, and other primary inputs for the production of computers (on this point see especially Dyer-Witheford 2015). The growth of this sector has been accomplished through the exploitation of racialized and marginalized populations (see, for example, Amrute 2016), the expropriation of the commons through the transformation of public assets into private property, and the decoupling in the public mind of any link between easily accessed electronic media and computing power, on the one hand, and massive power consumption and environmental devastation, on the other.

    To the extent that hopes for the emancipatory potential of a cyberlibertarian future have been dashed, enthusiasm for the left-right hybrid politics that first bruited it is still widespread. In areas in which emancipatory hopes remain unchastened by the experience of capital’s colonization of the information commons, that enthusiasm is undiminished. FLOSS movements are important examples of such areas. In FLOSS communities and spaces, left-liberal commitments to social justice causes are frequently melded with a neoliberal faith in decentralized, autonomous activity in the development, deployment, and maintenance of computing processes. When FLOSS activists self-reflexively articulate their political commitments, they adopt rhetorics of democracy and cooperative self-determination that are broadly left-liberal. However, the politics of FLOSS, like hacker politics in general, also betray a right-libertarian fixation on the removal of obstacles to individual wills. The hacker’s political horizon is the unfettering of the socially untethered, electronically empowered self (Borsook 2000). Similarly, the liberal commitments that undergird contemporary theories of “deliberative democracy” are easily adapted to serve libertarian visions of the good society.

    The Ethical and the Political

    The liberalism of such political theory as is encountered in FLOSS discourse may be fruitfully compared to the turn toward deliberative models of social organization. This turn is characterized by a dual trend in postwar political thought, centered in, though not confined to, the North Atlantic academy. It consists of the elision of theoretical distinctions between individual ethical practice and democratic citizenship, while increasing the theoretical gap between agonistic practices—contestation, conflict, direct action—and policy-making within the institutional context of liberal constitutionality. The political is often equated with conflict—and thereby, potentially, violence or coercion. The ethical, by contrast, comes closer to resembling democracy as such. Democracy is, or ought to be, “depoliticized” (Pettit 2004); deliberative democracy, aimed at the realization of ethical consensus, is normatively prior to aggregative democracy or the mere counting of votes. On this view, the historical task of democracy is not to grant greater social purchase to political tendencies or formations; nor does it consist in forging tighter links between decision-making institutions and the popular will. Rather, democracy is a legitimation project, under which the decisions of representative elites are justified in terms of the publicity of the reasons or justifications supplied on their behalf. The uncertain movement between these two poles—conceiving of democracy as a normative ideal, and conceiving of it as a description of adequately legitimated institutions—is hardly unique to contemporary democratic theory. The turn toward the deliberative and the ethical is distinguished by the narrowness of its conception of the democratic—indeed by its insistence that the democratic, properly understood, is characterized by the dampening of political conflict and a tendential movement toward consensus.

    Why ought we consider the trajectory of postwar liberal thought in conjunction with the digital turn? First, there are, of course, similarities and continuities between the fortunes of liberal ideology in both the world of software work and the world of academic labor. The former is marked to a much greater extent by widespread distrust of mechanisms of governance and by outpourings of an ascendant strain of libertarian triumphalism. Where ideological development in software work has charted a libertarian course, in academic Anglophone political thought it has more closely followed a path of neoliberal restructuring. To the extent that we maintain an interest in the consequences of the digitization of sociality, it is germane and appropriate to consider liberalism in software work and liberalism in professional political theory in tandem. However, there is a rather more important reason to chart the movement of liberal political thought in this context: many of the debates, problematics, and proffered solutions in the politico-ideological discourse in the world of software work are, as it were, always already present in liberal democratic theory. As such, an examination of the ethical turn—liberal democratic theory’s disavowal of contestation, and of the agon that interpellates structures of politics (Mouffe 2005, 80–105)—can aid further, subsequent examinations of the ontological, methodological, and normative presuppositions that inform the self-understanding of formations and tendencies within FLOSS movements. Both FLOSS discourses and professional democratic theory tend to discharge conclusions in favor of a depoliticized form of democracy.

    Deliberative democracy’s roots lie in liberal legitimation projects begun in response to challenges from below and outside existing power structures. Despite effacing its own political content, deliberative democracy must nevertheless be understood as a political project. Notable gestures toward the concept may be found in John Rawls’s theory-building project, beginning with A Theory of Justice (1971); and in Jürgen Habermas’s attempts to render the intellectual legacy of the Frankfurt School compatible with postwar liberalism, culminating in Between Facts and Norms (1996). These philosophical moves were being made at the same time as the fragmentation of the postwar political and economic consensus in developed capitalist democracies. Critics have detected a trend toward retrenchment in both currents: the evacuation of political economy—let alone Marxian thought—from critical theory; the accommodation made by Rawls and his epigones with public choice theory and neoliberal economic frames. The turn from contestatory politics in Anglophone political thought was simultaneous with the rise of a sense that the institutional continuity and stability of democracy were in greater need of defense than were demands for political criticism and social transformation. By the end of the postwar boom years, an accommodation with “neoliberal governmentality” (Brown 2015) was under way throughout North Atlantic intellectual life. The horizons of imagined political possibility were contracting at the very conjuncture when labor movements and left political formations foundered in the face of the consolidation of the capitalist restructuring under way since the third quarter of the twentieth century.

    Rawls’s account of justified institutions does not place a great emphasis on mass politics; nor does Habermas’s delineation of the boundaries of the ideal circumstances for communication—except insofar as the memory of fascism that Habermas inherited from the Frankfurt School weighs heavily on his forays into democratic theory. Mass politics is an inherently suspect category in Habermas’s thought. It is telling—and by no means surprising—that the two heavyweight theorists of North Atlantic postwar social democracy are primarily concerned with political institutions and with “the ideal speech situation” (Habermas 1996, 322–328) rather than with mass politics. They are both concerned with making justificatory moves rather than with exploring the possibilities and limits to mass politics and collective action. Rawls’s theory of justice describes a technocratic scheme for a minimally redistributive social democratic polity, while Habermas’s oeuvre has increasingly come to serve as the most sophisticated philosophical brief on behalf of the project of European cosmopolitan liberalism. Within the confines of this essay it is impossible to engage in a sustained consideration of the full sweep of Rawls’s political theory, including his conception of an egalitarian and redistributive polity and his constructivist account of political justification; similarly, the survey of Habermas presented here is necessarily compressed and abstracted. I restrict the scope of my critical gestures to the contributions made by Rawls and Habermas to the articulation of a deliberative conception of democracy. In this respect, they were strikingly similar:

    Both Rawls and Habermas assert, albeit in different ways, that the aim of democracy is to establish a rational agreement in the public sphere. Their theories differ with respect to the procedures of deliberation that are needed to reach it, but their objective is the same: to reach a consensus, without exclusion, on the ‘common good.’ Although they claim to be pluralist, it is clear that theirs is a pluralism whose legitimacy is only recognized in the private sphere and that it has no constitutive place in the public one. They are adamant that democratic politics requires the elimination of passions from the public sphere. (Mouffe 2013, 55)

    In neither Rawls’s nor Habermas’s writings is the theory of deliberative democracy simply the expression of a preference for the procedural over the substantive. It is better understood as a preference for unity and consensus, coupled with a minoritarian suspicion of the institutions and norms of mass electoral democracy. It is true that both their deliberative democratic theories evince considerable concern for the procedures and conditions under which issues are identified, alternatives are articulated, and decisions are made. However, this concern is motivated by a preoccupation with a particular substantive interest: specifically, the reproduction of liberal democratic forms. Such forms are valued not for their own sake—indeed, that would verge on incoherence—but because they are held to secure certain moral ends: respect for individuals, reciprocity of regard or recognition between persons, the banishment of coercion from public life, and so on. The ends of politics are framed in terms of morality—a system of universal duties or ends. The task of political theory is to envision institutions which can secure ends or goods that may be seen as intrinsically desirable. Notions that the political might be an autonomous domain of human activity, or that political theory’s ambit extends beyond making sense of existing configurations of institutions, are discarded. In their place is an approach to political thought rooted in concerns about technologies of governance. Such an approach concerns itself with political disagreement primarily insofar as it is a foreseeable problem that must be managed and contained.

    Depoliticized, deliberative democracy may be characterized as one or more of several forms of commitment to an apolitical conception of social organization. It is methodologically individualist: it takes the (adult, sociologically normative and therefore likely white and cis-male) individual person as the appropriate object of analysis and as the denominator to which social structures ultimately reduce. It is often intersubjective in its model of communication: that is, ideas are transmitted by and between individuals, typically or ideally two individuals standing in a relation of uncoerced respect with one another. It is usually deliberative in the kind of decision-making it privileges: authoritative decisions arise not out of majoritarian voting mechanisms or mass expressions of collective will, but rather out of discursive encounters that encourage the formation and exchange of claims whose content conform to specific substantive criteria. It is often predicated on the notion that the most valuable or self-constitutive of individuals’ beliefs and understandings are pre-political: individual rational agents are “self-authenticating sources of valid claims” (Rawls 2001, 23). Their claims are treated as exogenous to the social and political contexts in which they are found. Depoliticized democracy is frequently racialized and erected on a series of assumptions and cultural logics of hierarchy and domination (Mills 1997). Finally, depoliticized democracy insists on a particular hermeneutic horizon: the publicity of reasons. For any claim to be considered credible, and for public exercises to be considered legitimate, they must be comprehensible in terms of the worldviews, held premises, or anterior normative commitments of all persons who might somehow be affected by them.

    Theories of deliberative democracy are not merely suspicious of political disagreement—they typically treat it as pathological. Social cleavages over ideology (which may always be reduced to the concatenation of individual deliberations) are evidence either of bad faith argumentation or a failure to apprehend the true nature of the common good. To the extent that deliberative democracy is not nakedly elitist, it ascribes to those democratic polities it considers well-formed a capacity for a peculiar kind of authority. Such collectivities are capable, by virtue of their well-formed deliberative structures, of discharging decisions that are binding precisely because they are correct with reference to standards that are anterior to any dialectic that might take place within the social body itself. Consequently, much depends on the ideological content of those standards.

    The concept of public reason has acquired special potency in the hands of Rawls’s legatees in North American analytic political philosophy. Similar in aim to Habermas’s ideal speech situation, the modern idea of public reason is meant to model an ideal state of deliberative democracy. Rawls locates its origins in Rousseau (Rawls 2007, 231). However, it acquires a specifically Kantian conception in his elaboration (Rawls 2001, 91–94), and an extensive literature in analytic political philosophy is devoted to the elaboration of the concept in a Rawlsian mode (for a good recent discussion see Quong 2013). Public reason requires that contested policies’ justifications be comprehensible to those who controvert those policies. More generally, the polity in which the ideal of public reason obtains is one in which interlocutors hold themselves to be obliged to share, to the extent possible, the premises from which political reasoning proceeds. Arguments that are deemed to originate from outside the boundaries of public reason cannot serve a legitimating function. Public reason usually finds expression in the writings of liberal theorists as an explanation for why controverted policies or decisions may nevertheless be viewed as substantively appropriate and democratically legitimated.

    Proponents of public reason often cast the ideal as a commonplace of reasonable discussion that merely binds interlocutors to deliberate in good faith. However, public reason may also be described as a cudgel with which to police the boundaries of debate. It effectively cedes discursive power to those who controvert public policy in order to control the trajectory of the discourse—if they are possessed of enough social power. Explicitly liberal in its philosophical genealogy, public reason is expressive of liberal democratic theory’s wariness with respect to both radical and reactionary politics. Many liberal theorists are primarily concerned to show how public reason constrains reactionaries from advancing arguments that rest on religious or theological grounds. An insistence on public reasonableness (perhaps framed through an appeal to norms of civility) may also allow the powerful to cavil at challenges to prevailing economic thought as well as to prevailing understandings of the relationship between the public and the religious.

    Habermas’s project on the communicative grounds of liberal democracy (1998) reflects a similar commitment to containing disagreement and establishing the parameters of when and how citizens may contest political institutions and the rules they produce and enforce. His “discourse principle” (1996, 107) is not unlike Rawls’s conception of public reason in that it is intended to serve as a justificatory ground for deliberations tending toward consensus. According to the discourse principle, a given rule or law is justified if and only if those who are to be affected by it could accept it as the product of a reasonable discourse. Much of Habermas’s work—particularly Between Facts and Norms (1996)—is devoted to establishing the parameters of reasonable discourses. Such cartographies are laid out not with respect to controversies arising out of actually existing politics (such as pan-European integration or the problems of contemporary German right-wing politics). They are instead sited within the coordinates of Habermas’s specification of the linguistic and pragmatic contours of the social world in established constitutional democracies. The practical application of the discourse principle is often recursive, in that the particular implications and the scope of the discourse principle require further elaboration or extension within any given domain of practical activity in which the principle is invoked. Despite its rarefied abstraction, the discourse principle is meant in the final instance to be embedded in real activities and sites of discursive activity. (Habermas’s work in ethics parallels his discourse-theoretic approach to politics. His dialogical principle of universalization holds that moral norms are valid insofar as their observance—and the effects of that observance—would be accepted singly and jointly by all those affected.)

    Both Rawls’s and Habermas’s conceptions of the communicative activity underlying collective decision-making are strongly motivated by intersubjective ethical concerns. If anything, Habermas’s discourse ethics, and the parallel moves that he makes in his interventions in political thought, are more exacting than Rawls’s conception of public reason, both in terms of the discursive environments that they presuppose and the demands that they place upon individual interlocutors. Both thinkers’ views also conceive of political conflict as a field in which ethical questions predominate. Indeed, under these views political antagonism might be seen as pathological, or at least taken to be the locus of a sort of problem situation: If politics is taken to be a search for the common welfare (grounded in commonly-avowed terms), or is held to consist in the provision of public goods whose worth can, in principle, be agreed upon, then it would make sense to think that political antagonism is an ill to be avoided. Politics would then be exceptional, whereas the suspension of political antagonism for the sake of decisive, authoritative decision-making would be the norm. This is the core constitutive contradiction of the theory of deliberative democracy: the priority given to discussion and rationality tends to foreclose the possibility of contestation and disagreement.

    If, however, politics is a struggle for power in the pursuit of collective interests, it becomes harder to insist that the task of politics is to smooth over differences, rather than to articulate them and act upon them. Both Rawls and Habermas have been the subjects of extensive critique by proponents of several different perspectives in political theory. Communitarian critics have typically charged Rawls with relying on a too-atomized conception of individual subjects, whose preferences and beliefs are unformed by social, cultural or institutional contexts (Gutmann 1985); similar criticisms have been mounted against Habermas (see, for example, C. Taylor 1989). Both thinkers’ accounts of the foundations of political order fail to acknowledge the politically constitutive aspects of gender and sexuality (Okin 1989, Meehan 1995). From the perspective of a more radical conception of democracy, even Rawls’s later writings, in which he claims to offer a constructivist (rather than metaphysical) account of political morality (Rawls 1993), do not necessarily pass muster, particularly given that his theory is fundamentally a brief for liberalism and not for the democratization of society (for elaboration of this claim see Wolin 1996).

    Deliberative democracy, considered as a prescriptive model of politics, represents a striking departure both from political thought on the right—typically preoccupied with maintaining cultural logics and preserving existing social hierarchies—and political thought on the left, which often emphasizes contingency, conflict, and the priority of collective action. Both of these latter approaches to politics take social phenomena as subjects of concern in and of themselves, and not merely as intermediate formations which reduce to individual subjectivity. The substitution of the ethical for the political marks an intellectual project that is adequate to the imperatives of a capitalist political economy. The contradictory merger of the ethical anxieties underpinning deliberative democratic theory and liberal democracy’s notional commitment to legitimation through popular sovereignty tends toward quietism and immobilism.

    FLOSS and Democracy

    The free and open source software movements are cases of distinct importance in the emergence of digital democracy. Their traditions, and many of the actors who participate in them, antedate the digital turn considerably: the free software movement began in earnest in the mid-1980s, while its social and technical roots may be traced further back and are tangled with countercultural trends in computing in the 1970s. The movements display durable commitments to ethical democracy in their rhetoric, their organizational strategies, and the philosophical presuppositions that are revealed in their aims and activities (Coleman 2012).

    FLOSS is sited at the intersection of many of liberal democratic theory’s desiderata: property, persuasion, rights, and ethics. The movement is a flawed, incompletely successful, but suggestive and instructive attempt at reconfiguring capitalist property relations—importantly, and fatally, from inside an existing set of capitalist property relations—for the sake of realizing liberal ethical commitments with respect to expression, communication, and above all personal autonomy. Self-conscious hackers in the world of FLOSS conceive of their shared goals as the maximization of individual freedom with respect to the use of computers. Coleman describes how many hackers conceive of this activity in explicitly ethical terms. For them, hacking is a vital expression of individual freedom—simultaneously an aesthetic posture and a furtherance of specific ethical projects (such as the dissemination of information, or the empowerment of the alienated subject).

    The origins of the free software movement are found in the countercultural currents of computing in the 1970s, when several lines of inquiry and speculation converged: cybernetics, decentralization, critiques of bureaucratic organization, and burgeoning individualist libertarianism. Early hacker values—such as unfettered sharing and collaboration, a suspicion of distant authority given expression through decentralization and redundancy, and the maximization of the latitude of individual coders and users to alter and deploy software as they see fit—might be seen as the outflowing of several political traditions, notably participatory democracy and mutualist forms of anarchism. Certainly, the computing counterculture born in the 1970s was self-consciously opposed to what it saw as the bureaucratized, sclerotic, and conformist culture of major computing firms and research laboratories (Barbrook and Cameron 2002). Richard Stallman’s 1985 declaration of the need for, and the principles underlying, the free development of software is often treated as the locus classicus of the movement (Stallman 2015). Stallman succeeded in instigating a narrow kind of movement, one whose social specificity it is possible to trace. Its social basis consisted of communities of software developers, analysts, administrators, and hobbyists—in a word, hackers—that shared Stallman’s concerns over the subsumption of software development under the value-expanding imperatives of capital. As they saw it, the values of hacking were threatened by a proprietarian software development model predicated on the enclosure of the intellectual commons.

    Democracy, as it is championed by FLOSS advocates, is not necessarily an ideal of well-ordered constitutional forms and institutions whose procedures are grounded in norms of reciprocity and intersubjective rationality. It is characterized by a tension between an enthusiasm for volatile forms of participatory democracy and a tendency toward deference to the competence or charisma (the two are frequently conflated) of leaders. Nevertheless, the parallels between the two political projects—deliberative democracy and hacker liberation under the banner of FLOSS—are striking. Both projects share an emphasis on the persuasion of individuals, such that intersubjective rationality is the test of the permissibility of power arrangements or use restrictions. As such, both projects—insofar as they are to be considered to be interventions in politics—are necessarily self-limiting.

    Exponents of digital democracy rely on a conception of democracy that is strikingly similar to the theory of ethical democracy considered above. The constitutive documents and inscriptive commitments of various FLOSS communities bear witness to this. FLOSS communities should attract our interest because they are frequently animated by ethical and political concerns which appear to be liberal—even left-liberal—rather than libertarian. Barbrook and Cameron’s “Californian ideology” is frequently manifested in libertarian rhetorics that tend to have a right-wing grounding. The rise of Bitcoin is a particularly resonant recent example (Golumbia 2016). The adulation that accompanies the accumulation of wealth in Silicon Valley furnishes a more abstract example of the ideological celebration of acquisitive amour propre in computing’s social relations. The ideological substrate of commercial computing is palpably right-wing, at least in its orientation to political economy. As such it is all the more noteworthy that the ideological commitments of many FLOSS projects appear to be animated by ethico-political concerns that are more typical of left-liberalism, such as: consensus-seeking modes of collective decision-making; recognition of the struggles and claims of members of marginalized or oppressed groups; and the affirmation of differing identities.

    Free software rhetoric relies on concepts like liberty and freedom (Free Software Foundation 2016). It is in this rhetoric that free software’s imbrication within capitalist property relations is most apparent:

    Freedom means having control over your own life. If you use a program to carry out activities in your life, your freedom depends on your having control over the program. You deserve to have control over the programs you use, and all the more so when you use them for something important in your life. (Stallman 2015)

    Stallman’s equation of freedom with control—self-control—is telling: Copyleft does not subvert copyright; it depends upon it. Hacking is dependent upon the corporate structure of industrial software development. It is embedded in the social matrix of closed-source software production, even though hackers tend to believe that “their expertise will keep them on the upside of the technology curve that protects the best and brightest from proletarianization” (Ross 2009, 168). A dual contradiction is at work here. First, copyleft inverts copyright in order to produce social conditions in which free software production may occur. Second, copyleft nevertheless remains dependent on closed-source software development for its own social reproduction. Without the state power that is necessary for contracts to be enforced, or without the reproduction of technical knowledge that is underwritten by capital’s continued interest in software development, FLOSS loses its social base. Artisanal hacking or digital homesteading could not enter into the void were capitalist computing to suddenly disappear. The decentralized production of software is largely epiphenomenal upon the centralized and highly cooperative models of development and deployment that typify commercial software development. The openness of development stands in uneasy contrast with the hierarchical organization of the management and direction of software firms (Russell 2014).

    Capital has accommodated free and open source software with little difficulty, as can be seen in the expansion of the open source software movement. As noted above, many advocates of both the free software and open source software movements frequently aver that their commitments overlap to the point that any differences are largely ones of emphasis. Nevertheless, open source software differs—in an ideal, if not political, sense—from free software in its distinct orientation to the value of freedom: freedom is valued as the absence of the fetters on coding, design, and debugging that characterize proprietary software development. As such, open source software trades on an interpretation of freedom that is rather distinct from the ethical individualism of free software. Indeed, it is more recognizably politically adjacent to right-wing libertarianism. This may be seen, for example, in the writings of Eric S. Raymond. His influential essay “The Cathedral and the Bazaar” is a paean not to the emancipatory potential of open source software but to its adaptability and suitability for large-scale, rapid-turnover software development—and its amenability to the prerogatives of capital (Raymond 2000).

    One of the key ethical arguments made by free and open source software advocates rests on an understanding of property that is historically specific. The conception of property deployed within FLOSS is the absolute and total right of owners to dispose of their possessions—a form of property rights that is peculiar to the juridical apparatus of capitalism. There are, of course, superficial resemblances between software license agreements—which curtail the rights of those who buy hardware with pre-installed commercial software, for example—and the seigneurial prerogatives associated with feudalism. However, the specific set of property relations underpinning capitalist software development is also the same set of property relations that are traded upon in FLOSS theory. FLOSS criticism of proprietary software rarely extends to a criticism of private property as such. Ethical arguments for the expansion of personal computing freedoms, made with respect to the prevailing set of property relations, frequently focus on consumption. The focus may be positive: the freedom of the individual finds expression in the autonomy of the rational consumer of commodities. Or the focus may be negative: individual users must eschew a consumerist approach to computing or they will be left at the mercy of corporate owners of proprietary software.

    Arguments erected on premises about individual consumption choices are not easily extended to the sphere of collective political action. They do not discharge calls for pressuring political institutions or pursuing public power. The Free Software Foundation, the main organizational node of the free software movement, addresses itself to individual users (and individual capitalist firms) and places its faith in the ersatz property relations made possible by copyleft’s parasitism on copyright. The FSF’s ostensible non-alignment is really complementary to, rather than antagonistic with, the alignments of major open source organizations. Organizations associated with the open source software movement are eager to find institutional partners in the business world. It is certainly the case that in the world of commercial computing, the open source approach has been embraced as an effective means for socializing the costs of software production (and the reproduction of software development capacities) while privatizing the monetary rewards that can be realized on the basis of commodified software. Meanwhile, the writings of Stallman and the promotional literature of the Free Software Foundation eschew the kind of broad-based political strategy that their analysis would seem to militate for, one in which FLOSS movements would join up with other social movements. An immobilist tendency toward a single-issue approach to politics is characteristic of FLOSS at large.

    One aspect of deliberative democracy—an aspect that is, as we have seen, treated as banal and unproblematic by many theorists of liberalism—that is often given greater emphasis by active proponents of digital democracy is the primacy of liberal property relations. Property relations take on special urgency in the discourse and praxis of free and open source software movements. Particularly in the propaganda and apologia of the open source movement, the personal computer is the ultimate form of personal property. More than that—it is an extension of the self. Computers are intimately enmeshed in human lives, to a degree even greater than was the case thirty years ago. To many hackers, the possibility that the code executed on their machines is beyond their inspection is a violation of their individual autonomy. Tellingly, analogies for this putative loss of freedom take as their postulates the “normal,” extant ways in which owners relate to the commodities they have purchased. (For example, running proprietary code on a computer may be analogized to driving a car whose hood cannot be opened.)

    Consider the Debian Social Contract, which encodes a variety of liberal principles as the constitutive political materials of the Debian project, adopted in the wake of a series of controversies and debates about gender imbalance (O’Neil 2009, 129–146). That the project’s constitutive document is self-reflexively liberal is signaled in its very title: it presupposes liberal concerns with the maximization of personal freedom and the minimization of coercion, all under the rubric of cooperation for a shared goal. The Debian Social Contract was the product of internal struggles within the Debian project, which aims to produce a technically sophisticated and yet ethically grounded version of the GNU/Linux operating system. It represents the ascendancy of a tendency within the Debian project that sought to affirm the project’s emancipatory aims. This is not to suggest that, prior to the adoption of the Social Contract, the project was characterized by an uncontested focus on technical expertise, at the direct expense of an emancipatory vision of FLOSS computing; nevertheless, the experience decisively shifted Debian’s trajectory such that it was no longer parallel with that of related projects.

    Another example of FLOSS’s fetishism for non-coercive, individual-centered ethics may be found in the emphasis placed on maximizing individual user freedom. The FSF, for example, considers it a violation of user autonomy to make the use of free, open source software conditional by restricting its use—even only notionally—to legal or morally-sanctioned use cases. As is often the case when individualist libertarianism comes into contact with practical politics, an obstinate insistence on abstract principles discharges absurd commitments. The major stakeholders and organizational nodes in the free software movement—the FSF, the GNU development community, and so on—refuse even to censure the use of free software in situations characterized by the restriction or violation of personal freedoms: military computing, governmental surveillance, and so on.

    It must also be noted that the hacker ethos is at least partially coterminous with cyberlibertarianism. Found in both is the tendency to see the digital sphere as both the vindication of neoliberal economic precepts as well as the ideal terrain in which to pursue right-wing social projects. From the user’s perspective, cyberlibertarianism is presented as a license to use and appropriate the work of others who have made their works available for such purposes. It may perhaps be said that cyberlibertarianism is the ethos of the alienated monad pursuing jouissance through the acquisition of technical mastery and control over a personal object, the computer.

    Persuasion and Contestation

    We are now in a position to examine the contradictions in the theory of politics that informs FLOSS activity. These contradictions converge at two distinct—though certainly related—sites. The first site centers on power and interest aggregation; the second, on property and the claims of users over their machines and data. An elaboration and examination of these contradictions will suggest that, far from overcoming or transcending the contradictions of liberalism as they inhere either in contemporary political practice or in liberal political thought, FLOSS hackers and activists have reproduced them in their practices as well as in their texts.

    The first site of contradiction centers on politics. FLOSS advocates adhere to an understanding of politics that emphasizes moral suasion and that valorizes the autonomy of the individual to pursue chosen projects and satisfy their own preferences. This despite the fact that the primary antagonists in the FLOSS political imaginary—corporate owners of IP portfolios, developers and retailers of proprietary software, and policy-makers and bureaucrats—possess considerable political, legal, and social power. FLOSS discourses counterpose to this power, not counterpower but evasion, escape, and exit. Copyleft itself may be characterized as evasive, but more central here is the insistence that FLOSS is an ethical rather than a political project, in which individual developers and users must not be corralled into particular formations that might use their collective strength to demand concessions or transform digitally mediated social relations. This disavowal of politics directly inhibits the articulation of counter-positions and the pursuit of counterpower.

    So long as FLOSS as a political orientation remains grounded in a strategic posture of libertarian individualism and interpersonal moral suasion, it will be unable to effectively underwrite demands or place significant pressures on institutions and decision-making bodies. FLOSS political rhetoric trades heavily on tropes of individual sovereignty, egalitarian epistemologies, and participatory modes of decision-making. Such rhetorics align comfortably with the currently prevailing consensus regarding the aims and methods of democratic politics, but when relied on naïvely or uncritically, they place severe limits on the capacity for the FLOSS movement to expand its political horizons, or indeed to assert itself in such a way as to become a force to be reckoned with.

    The second site of contradiction is centered on property relations. In the self-reflexive and carefully articulated discourse of FLOSS advocates, persons are treated as ethical agents, but such agents are primarily concerned with questions of the disposition of their property—most importantly, their personal computing devices. Free software advocates, in particular, emphasize the importance of users’ freedoms, but their attentiveness to such freedoms appears to end at the interface between owner and machine. More generally, property relations are foregrounded in FLOSS discourse even as such discourse draws upon and deploys copyleft in order to weaponize intellectual property law against proprietarian use cases.

    For so long as FLOSS as a social practice remains centered on copyleft, it will reproduce and reinforce the property relations which sustain a scarcity economy of intellectual creations. Copyleft is commonly understood as an ingenious solution to what is seen as an inherent tendency in the world of software towards restrictions on access, limitations on communication and exchange of information, and the diminution of the informational commons. However, these tendencies are more appropriately conceived of as notably enduring features of the political economy of capitalism itself. Copyleft cannot dismantle a juridical framework heavily weighted in favor of ownership in intellectual property from the inside—no more so than a worker-controlled-and-operated enterprise threatens the circuits of commodity production and exchange that comprise capitalism as a set of social relations. Moreover, major FLOSS advocates—including the FSF and the Open Source Initiative—proudly note the reliance of capitalist firms on open source software in their FAQs, press releases, and media materials. Such a posture—welcoming the embrace of FLOSS by the software industry, with its attendant practices of labor discipline and domination, customer and citizen surveillance, and privatization of data—stands in contradiction with putative FLOSS values like collaborative production, code transparency, and user freedom.

    The persistence—even, in some respects, the flourishing—of FLOSS in the current moment represents a considerable achievement. Capitalism’s tendency toward crisis continues to impel social relations toward the subsumption of more and more of the social under the rubric of commodity production and exchange. And yet it is still the case that access to computing processes, logics, and resources remains substantially unrestricted by legal or commercial barriers. Much of this must be credited to the efforts of FLOSS activists. The first cohort of FLOSS activists recognized that resisting the commodification of the information commons was a social struggle—not simply a technical challenge—and sought to combat it. That they did so according to the logic of single-issue interest group activism, rather than in solidarity with a broader struggle against commodification, should perhaps not be surprising; in the final quarter of the twentieth century, broad struggles for power and recognition by and on behalf of workers and the poor were at their lowest ebb in a century, and a reconfiguration of elite power in the state and capitalism was well under way. Cross-class, multiracial, and gender-inclusive social movements were losing traction in the face of retrenchment by a newly emboldened ruling class; and the conceptual space occupied by such work was contested. Articulating their interests and claims as participants in liberal interest group politics was by no means the poorest available strategic choice for FLOSS proponents.

    The contradictions of such an approach have nevertheless developed apace, such that the current limitations and impasses faced by FLOSS movements appear more or less intractable. Free and open source software is integral to the operations of some of the largest firms in economic history. Facebook (2018), Apple (2018), and Google (Alphabet, Inc. 2018), for example, all proudly declare their support of and involvement in open source development.[4] Millions of coders, hackers, and users can and do participate in widely (if unevenly) distributed networks of software development, debugging, and deployment. It is now a practical possibility for the home user to run and maintain a computer without proprietary software installed on it. Nevertheless, proprietary software development remains a staggeringly profitable undertaking, FLOSS hacking remains socially and technically dependent on closed computing, and the home computing market is utterly dominated by the production and sale of machines that ship with and run software that is opaque—by design and by law—to the user’s inspection and modification. These limitations are compounded by FLOSS movements’ contradictions with respect to property relations and political strategy.

    Implications and Further Questions

    The paradoxes and contradictions that attend both the practice and theory of digital democracy in the FLOSS movements bear strong family resemblances to the paradoxes and contradictions that inhere in much contemporary liberal political theory. Liberal democratic theory is frequently committed to the melding of a commitment to rational legitimation with the affirmation of the ideal of popular sovereignty; but an insistence on rational authority tends to undermine the insurgent potential of democratic mass action. Similarly, the public avowals of respect for human rights and the value of user freedom that characterize FLOSS rhetoric are in tension with a simultaneous insistence on moral suasion centered on individual subjectivity. What’s more, they are flatly contradicted by the stated commitments by prominent leaders and stakeholders in FLOSS communities in favor of capitalist labor relations and neutrality with respect to the social or moral consequences of the use of FLOSS. Liberal political theory is potentially self-negating to the extent that it discards the political in favor of the ethical. Similarly, FLOSS movements short-circuit much of FLOSS’s potential social value through a studied refusal to consider the merits of collective action or the necessity of social critique.

    The disjunctures between the rhetorics and stated goals of FLOSS movements and their actual practices and existing social configurations are deserving of greater attention from a variety of perspectives. I have approached those disjunctures through the lens of political theory, but these phenomena are also deserving of attention within other disciplines. The contradiction between FLOSS’s discursive fealty to the emancipatory potential of software and the dependence of FLOSS upon the property relations of capitalism merits further elaboration and exploration. The digital turn is too easily conflated with the democratization of a social world that is increasingly intermediated by networked computing. The prospects for such an opening up of digital public life remain dim.

    _____

    Rob Hunter is an independent scholar who holds a PhD in Politics from Princeton University.


    _____

    Acknowledgments

    [*] I am grateful to the b2o: An Online Journal editorial collective and to two anonymous reviewers for their feedback, suggestions, and criticism. Any and all errors in this article are mine alone. Correspondence should be directed to: jrh@rhunter.org.

    _____

    Notes

    [1] The notion of the digitally-empowered “sovereign individual” is adumbrated at length in an eponymous book by Davidson and Rees-Mogg (1999) that sets forth a right-wing techno-utopian vision of network-mediated politics—a reactionary pendant to liberal optimism about the digital turn. I am grateful to David Golumbia for this reference.

    [2] For simultaneous presentations and critiques of these arguments see, for example, Dahlberg and Siapera (2007), Margolis and Moreno-Riaño (2013), Morozov (2013), Taylor (2014), and Tufekci (2017).

    [3] See Bernes (2013) for a thorough presentation of the role of logistics in (re)producing social relations in the present moment.

    [4] “Google believes that open source is good for everyone. By being open and freely available, it enables and encourages collaboration and the development of technology, solving real world problems” (Alphabet, Inc. 2017).

    _____

    Works Cited

    • Alphabet, Inc. 2018. “Google Open Source.” (Accessed July 31, 2018.)
    • Amrute, Sareeta. 2016. Encoding Race, Encoding Class: Indian IT Workers in Berlin. Durham, NC: Duke University Press.
    • Apple Inc. 2018. “Open Source.” (Accessed July 31, 2018.)
    • Barbrook, Richard, and Andy Cameron. (1995) 2002. “The Californian Ideology.” In Peter Ludlow, ed., Crypto Anarchy, Cyberstates, and Pirate Utopias. Cambridge, MA: The MIT Press. 363–387.
    • Benkler, Yochai. 2006. The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press.
    • Bernes, Jasper. 2013. “Logistics, Counterlogistics and the Communist Prospect.” Endnotes 3. 170–201.
    • Bonacich, Edna, and Jake Wilson. 2008. Getting the Goods: Ports, Labor, and the Logistics Revolution. Ithaca, NY: Cornell University Press.
    • Borsook, Paulina. 2000. Cyberselfish: A Critical Romp through the Terribly Libertarian Culture of High Tech. New York: PublicAffairs.
    • Brown, Wendy. 2005. “Neoliberalism and the End of Liberal Democracy.” In Edgework: Critical Essays on Knowledge and Politics. Princeton, NJ: Princeton University Press. 37–59.
    • Brown, Wendy. 2015. Undoing the Demos: Neoliberalism’s Stealth Revolution. New York: Zone Books.
    • Castells, Manuel. 2010. The Rise of The Network Society. Malden, MA: Wiley-Blackwell.
    • Coleman, E. Gabriella. 2012. Coding Freedom: The Ethics and Aesthetics of Hacking. Princeton, NJ: Princeton University Press.
    • Dahlberg, Lincoln. 2010. “Cyber-Libertarianism 2.0: A Discourse Theory/Critical Political Economy Examination.” Cultural Politics 6:3. 331–356.
    • Dahlberg, Lincoln, and Eugenia Siapera. 2007. “Tracing Radical Democracy and the Internet.” In Lincoln Dahlberg and Eugenia Siapera, eds., Radical Democracy and the Internet: Interrogating Theory and Practice. Basingstoke: Palgrave. 1–16.
    • Davidson, James Dale, and William Rees-Mogg. 1999. The Sovereign Individual: Mastering the Transition to the Information Age. New York: Touchstone.
    • Drahos, Peter. 2002. Information Feudalism: Who Owns the Knowledge Economy? New York: The New Press.
    • Dyer-Witheford, Nick. 2015. Cyber-Proletariat: Global Labour in the Digital Vortex. London: Pluto Press.
    • Facebook, Inc. 2018. “Facebook Open Source.” (Accessed July 31, 2018.)
    • Free Software Foundation. 2018. “What Is Free Software?” (Accessed July 31, 2018.)
    • Fuchs, Christian. 2014. Digital Labour and Karl Marx. London: Routledge.
    • Golumbia, David. 2009. The Cultural Logic of Computation. Cambridge, MA: Harvard University Press.
    • Golumbia, David. 2013. “Cyberlibertarianism: The Extremist Foundations of ‘Digital Freedom.’” Uncomputing.
    • Golumbia, David. 2016. The Politics of Bitcoin: Software as Right-Wing Extremism. Minneapolis, MN: University of Minnesota Press.
    • Gutmann, Amy. 1985. “Communitarian Critics of Liberalism.” Philosophy and Public Affairs 14. 308–322.
    • Habermas, Jürgen. 1996. Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy. Cambridge, MA: MIT Press.
    • Habermas, Jürgen. 1998. The Inclusion of the Other. Edited by Ciarin P. Cronin and Pablo De Greiff. Cambridge, MA: MIT Press.
    • Hindman, Matthew. 2008. The Myth of Digital Democracy. Princeton, NJ: Princeton University Press.
    • Kelty, Christopher M. 2008. Two Bits: The Cultural Significance of Free Software. Durham, NC: Duke University Press.
    • Klein, Hans. 1999. “Tocqueville in Cyberspace: Using the Internet for Citizens Associations.” Technology and Society 15. 213–220.
    • Laclau, Ernesto, and Chantal Mouffe. 2014. Hegemony and Socialist Strategy: Towards a Radical Democratic Politics. London: Verso.
    • Margolis, Michael, and Gerson Moreno-Riaño. 2013. The Prospect of Internet Democracy. Farnham: Ashgate.
    • Meehan, Johanna, ed. 1995. Feminists Read Habermas. New York: Routledge.
    • Mills, Charles W. 1997. The Racial Contract. Ithaca, NY: Cornell University Press.
    • Morozov, Evgeny. 2011. The Net Delusion: The Dark Side of Internet Freedom. New York: PublicAffairs.
    • Morozov, Evgeny. 2013. To Save Everything, Click Here: The Folly of Technological Solutionism. New York: PublicAffairs.
    • Mouffe, Chantal. 2005. The Democratic Paradox. London: Verso.
    • Mouffe, Chantal. 2013. Agonistics: Thinking the World Politically. London: Verso.
    • Okin, Susan Moller. 1989. “Justice as Fairness, For Whom?” In Justice, Gender and the Family. New York: Basic Books. 89–109.
    • O’Neil, Mathieu. 2009. Cyberchiefs: Autonomy and Authority in Online Tribes. London: Pluto Press.
    • Pasquale, Frank. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.
    • Pedersen, J. Martin. 2010. “Introduction: Property, Commoning and the Politics of Free Software.” The Commoner 14 (Winter). 8–48.
    • Pettit, Philip. 2004. “Depoliticizing Democracy.” Ratio Juris 17:1. 52–65.
    • Quong, Jonathan. 2013. “On the Idea of Public Reason.” In The Blackwell Companion to Rawls, edited by John Mandle and David A. Reidy. Oxford: Wiley-Blackwell. 265–280.
    • Rawls, John. 1971. A Theory of Justice. Cambridge, MA: Harvard University Press.
    • Rawls, John. 1993. Political Liberalism. New York: Columbia University Press.
    • Rawls, John. 2001. Justice as Fairness: A Restatement. Cambridge, MA: Harvard University Press.
    • Rawls, John. 2007. Lectures in the History of Political Philosophy. Cambridge, MA: The Belknap Press of Harvard University Press.
    • Raymond, Eric S. 2000. The Cathedral and the Bazaar. Self-published.
    • Reagle, Joseph. 1999. “Why the Internet Is Good: Community Governance That Works Well.” Berkman Center.
    • Ross, Andrew. 2009. Nice Work If You Can Get It: Life and Labor in Precarious Times. New York: New York University Press.
    • Russell, Andrew L. 2014. Open Standards and the Digital Age. New York, NY: Cambridge University Press.
    • Shirky, Clay. 2008. Here Comes Everybody: The Power of Organizing Without Organizations. London: Penguin.
    • Stallman, Richard M. 2002. Free Software, Free Society: Selected Essays of Richard M. Stallman. Edited by Joshua Gay. Boston: GNU Press.
    • Stallman, Richard M. 2015. “Free Software Is Even More Important Now.” GNU.org.
    • Stallman, Richard M. 2015. “The GNU Manifesto.” GNU.org.
    • Taylor, Astra. 2014. The People’s Platform: Taking Back Power and Culture in the Digital Age. New York: Metropolitan Books.
    • Taylor, Charles. 1989. Sources of the Self. Cambridge, MA: Harvard University Press.
    • Tufekci, Zeynep. 2017. Twitter and Tear Gas: The Power and Fragility of Networked Protest. New Haven, CT: Yale University Press.
    • Weber, Steven. 2005. The Success of Open Source. Cambridge, MA: Harvard University Press.
    • Wolin, Sheldon. 1996. “The Liberal/Democratic Divide: On Rawls’s Political Liberalism.” Political Theory 24. 97–119.

     

  • tante — Artificial Saviors

    tante — Artificial Saviors

    tante

    Content Warning: The following text references algorithmic systems acting in racist ways towards people of color.

    Artificial intelligence and thinking machines have been key components in the way Western cultures, in particular, think about the future. From naïve positivist perspectives, as illustrated by Rosie the Robot, the maid in the 1962 TV show The Jetsons, to ironic reflections on the reality of forced servitude to one’s creator and quasi-infinite lifespans in Marvin the Paranoid Android from Douglas Adams’s Hitchhiker’s Guide to the Galaxy, as well as the threatening, invisible, disembodied, cruel HAL 9000 in Arthur C. Clarke’s Space Odyssey series and its total negation in Frank Herbert’s Dune books, thinking machines have shaped a lot of our conceptions of society’s future. Unless there is some catastrophic event, the future seemingly will have strong Artificial Intelligences (AI). They will appear either as brutal, efficient, merciless entities of power or as machines of loving grace serving humankind to create a utopia of leisure, self-expression and freedom from the drudgery of labor.

    Those stories have had a fundamental impact on the perception of current technological trends and developments. The digital turn has made growing parts of our social systems accessible to automation and software agents. Together with a 24/7 onslaught of increasingly optimistic PR messages by startups, the accompanying media coverage has prepared the field for a new kind of secular techno-religion: The Church of AI.

    A Promise Fulfilled?

    For more than half a century, experts in the field have maintained that genuine, human-level artificial intelligence is always just around the corner—“about 10 to 20 years away.” Ask today’s experts and spokespeople and the answer is the same: the number has stayed mostly unchanged to this day.

    In 2017 AI is the battleground that the current IT giants are fighting over: for years, Google has developed machine learning techniques and has integrated them into its conversational assistant, which people carry around installed on their smart devices. It’s gotten quite good at answering simple questions or triggering simple tasks: “OK Google, how far is it from here to Hamburg?” tells me that, given current traffic, it will take me 1 hour and 43 minutes to get there. Google’s assistant also knows how to use my calendar and email to warn me to leave the house in time for my next appointment, or to tell me that a parcel I was expecting has arrived.

    Facebook and Microsoft are experimenting with and propagating intelligent chat bots as the future of computer interfaces. Instead of going to a dedicated web page to order flowers, people will supposedly just access a chat interface of a software service that dispatches their request in the background. But this time, it will be so much more pleasant than the experience everyone is used to from automated phone systems. Press #1 if you believe.

    Old science fiction tropes get dusted off and re-released with a snazzy iPhone app to make them seem relevant again on an almost daily basis.

    Nonetheless, the promise is always the same: given the success that automation of manufacturing and information processing has had in recent decades, AI is considered not only plausible or possible but, in fact, almost a foregone conclusion. In support of this, advocates (such as Google’s Ray Kurzweil) typically cite “Moore’s Law,”[1] an observation about the increasing quantity and quality of transistors, as directly correlated with the growing “intelligence” in digital services or cyber-physical systems like thermostats or “smart” lights.
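
    To make the shape of that appeal concrete, here is a minimal sketch of the arithmetic behind such Moore’s Law extrapolations, assuming the commonly cited two-year doubling period; the starting count and the dates are round numbers chosen purely for illustration, not measurements:

    ```python
    # Illustrative arithmetic behind appeals to Moore's Law: exponential
    # doubling, assumed here to occur every two years. All figures are
    # invented round numbers, not measurements.
    def projected_transistors(base_count, years, doubling_period=2.0):
        return base_count * 2 ** (years / doubling_period)

    # A hypothetical chip with one million transistors in 1990, projected forward:
    for year in (1990, 2000, 2010, 2020):
        count = projected_transistors(1_000_000, year - 1990)
        print(year, f"{count:,.0f}")

    # The exponential curve is what advocates point to; the leap from
    # transistor counts to "intelligence" is the part in question here.
    ```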

    Looking at other recent reports, a pattern emerges. Google’s AI lab recently trained a neural network to do lip-reading and found it better than human lip-readers (Chung et al. 2016): where human experts were only able to pick the right word 12.4% of the time, Google’s neural network could reach 52.3% when pointed at footage from BBC politics shows.

    Another recent example from Google’s research department, which just shows how many resources Google invests into machine learning and AI: Google has trained a system of neural networks to translate different human languages (in their example, English, Japanese and Korean) into one another (Schuster, Johnson and Thorat 2016). This is quite the technical feat, given that most translation engines have to be meticulously tweaked to translate between two languages. But Google’s researchers finish their report with a very different proposition:

    The success of the zero-shot translation raises another important question: Is the system learning a common representation in which sentences with the same meaning are represented in similar ways regardless of language — i.e. an “interlingua”?….This means the network must be encoding something about the semantics of the sentence rather than simply memorizing phrase-to-phrase translations. We interpret this as a sign of existence of an interlingua in the network. (Schuster, Johnson and Thorat 2016)

    Google’s researchers interpret the capabilities of the neural network as expressions of the neural network creating a common super-language, one language to finally express all other languages.

    These current examples of success stories and narratives illustrate a fundamental shift in the way scientists and developers think about AI, a shift that perfectly resonates with the idea that AI has spiritual and transcendent properties. AI developments used to focus on building structured models of the world to enable reasoning. Whether researchers used logic or sets or newer modeling frameworks like RDF,[2] the basic idea was to construct “Intelligence” on top of a structure of truths and statements about the world. Modeled on basic logic—intentionally, not by accident—a lot of it looked like the first sessions in a traditional logic 101 lecture: All humans die. Aristotle is a human. Therefore, Aristotle will die.
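
    As a minimal sketch of what that bottom-up style amounts to in code—the facts and the single rule below are a toy reconstruction of the syllogism above, not a reproduction of any particular historical system:

    ```python
    # A toy forward-chaining reasoner in the spirit of the old symbolic
    # approach: "intelligence" built on explicit truths and inference rules.
    facts = {("human", "Aristotle")}

    def apply_rules(facts):
        # One hand-written rule, mirroring the syllogism: whatever is human is mortal.
        derived = set(facts)
        for predicate, subject in facts:
            if predicate == "human":
                derived.add(("mortal", subject))
        return derived

    facts = apply_rules(facts)
    print(("mortal", "Aristotle") in facts)  # True: Aristotle will die.
    ```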

    But all these projects failed. Explicitly modeling the structures of the world hit a wall of inconsistencies rather early, as soon as natural language and human beings got involved. The world didn’t seem to follow the simple hierarchic structures some computer scientists hoped it would. And even when it came to very structured, abstract areas of life, the approach never took off. Projects like, for example, expressing the Canadian income tax in a Prolog[3] model (Sherman 1987) never got past the abstract planning stage. RDF and the idea of the “semantic web,” the web of structured data allowing software agents to gather information and reason based on it, are still somewhat relevant in academic circles, but have failed to achieve wide adoption in real-world use cases.

    And then came neural networks.

    Neural networks are the structure behind most of the current AI projects having any impact, whether it’s translation of human language, self-driving cars or recognizing objects and people in pictures and video. Neural networks work in a fundamentally different way from the traditional bottom-up approaches that defined much of the AI research in the last decades of the 20th century. Based on a simplified mathematical model of human neurons, networks of said neurons can be “trained” to react in a certain way.

    Say you need a neural network to automatically detect cats in pictures. First, you need an input layer with enough neurons to assign one to every pixel of the pictures you want to feed it. You add an output layer with two neurons, one signaling “cat” and one signaling “not a cat.” Now you add a few internal layers of neurons and connect them to each other. Input gets fed into the network through the input layer. The internal layers do their thing and make the neurons in the output layer “fire.” But the necessary knowledge is not yet ingrained into the network—it needs to be trained.

    There are different ways of training these networks, but they all come down to letting the network process a large amount of training data with known properties. For our example, a substantial set of pictures with and without cats would be necessary. When processing these pictures, the network gets positive feedback whenever the right neuron (the one signaling the detection of a cat) fires, and the connections that led to this result are strengthened. Where it has a 50/50 chance of being right on the first try, that chance quickly improves until it reaches very good results—provided the set of training data is good enough. To evaluate the quality of the network, it is then tested against different pictures of cats and pictures without cats.
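
    The following toy script condenses that training loop into runnable form—a sketch, not a serious implementation. The “images” are random vectors, and the stand-in rule for “contains a cat” (average brightness above 0.5) is invented purely so the example has labels to learn:

    ```python
    # A tiny feedforward network trained as described above, using NumPy.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: 200 "images" of 64 pixels; the label stands in for "cat."
    X = rng.random((200, 64))
    y = (X.mean(axis=1) > 0.5).astype(float).reshape(-1, 1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Input layer (64 pixels) -> one internal layer (16 neurons) -> output neuron.
    W1 = rng.normal(0, 0.5, (64, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0, 0.5, (16, 1));  b2 = np.zeros(1)

    lr = 0.5
    for epoch in range(2000):
        # Forward pass: feed the pixels through the layers.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: strengthen or weaken each weighted connection in
        # the direction that reduces the error -- the "positive feedback"
        # described in the text above.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

    print("training accuracy:", ((out > 0.5) == (y > 0.5)).mean())
    ```

    Even in this toy, the trained behavior lives entirely in the numeric weights; there is nothing in W1 or W2 for a human to read, which is exactly the opacity discussed next.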

    Neural networks are really good at learning to detect structures (objects in images, sound patterns, connections in data streams), but there’s a catch: even when a neural network is really good at its task, it’s largely impossible for humans to say why. Neural networks are just sets of neurons and their weighted connections. But what does a weight of 1.65 say about a connection? What are its semantics? What do the internal layers and neurons actually mean? Nobody knows.

    Many currently available services based on these technologies can achieve impressive results. Cars are able to drive as well as, if not better and more safely than, human drivers (given Californian conditions of light, lack of rain or snow, and sizes of roads); automated translations can almost instantly give people at least an idea of what the rest of the world is talking about; and Google’s photo service allows me to search for “mountain” and shows me pictures of mountains in my collection. Those services surely feel intelligent. But are they really?

    Despite the optimistic reports about yet another big step towards “true” AI (like in the movies!) that tech media keeps churning out like a machine, the trouble with the current mainstream in AI has become quite obvious in recent months.

    In June 2015, Google’s Photos service was involved in a scandal: its AI was tagging faces of people of color with the term “gorilla” (Bergen 2015). Google quickly pointed out how difficult image recognition was and “fixed” the issue by blocking its AI from applying that specific tag, promising a “long term solution.” And even just staying within the image detection domain, there have been numerous examples of algorithms acting in ways that don’t imply much intelligence: cameras trained on Western, white faces detect people of Asian descent as “blinking” (Rose 2010); algorithms employed as impartial “beauty judges” seemingly don’t like dark skin (Levin 2016). The list goes on and on.

    While there seems to be a big consensus among thought leaders, AI companies, and tech visionaries that AI is inevitable and imminent, the definition of “intelligence” seems to be less than obvious. Is an entity intelligent if it can’t explain its reasoning?

    John Searle anticipated this argument in the “Chinese Room” thought experiment (Searle 1980): Searle proposes a computer program that can act convincingly as if it understands Chinese by taking in Chinese input and transforming it in some algorithmic way to output a response of Chinese characters. Does that machine really understand Chinese? Or is it just an automaton simulating an understanding of Chinese? Searle continues the experiment by assuming that the rules used by the machine are translated into readable English for a person to follow. A person locked in a room with these rules, pencil and paper could respond to every Chinese text handed in as convincingly as the machine could. But few would propose that that person now “understands” Chinese in the sense that a human being who knows Chinese does.
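
    A few lines of code make the point vivid. The “rulebook” below is a hypothetical toy standing in for Searle’s rules, not a serious model of language:

    ```python
    # Searle's rule-follower as a program: it "responds in Chinese" purely by
    # matching symbols against a rulebook, with no understanding anywhere.
    RULEBOOK = {
        "你好": "你好！",            # "Hello" -> "Hello!"
        "你会说中文吗？": "会。",     # "Do you speak Chinese?" -> "Yes."
    }

    def chinese_room(message):
        # Look the symbols up and copy out the prescribed reply; neither this
        # program nor a person following the same rules needs to know what
        # the symbols mean.
        return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

    print(chinese_room("你好"))
    ```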

    Current trends in the reception of AI seem to disagree: if a machine can do something that used to be possible only for human cognition, it surely must be intelligent. This assumption of Intelligence serves as the foundation for a theory of human salvation: if machines are already a little intelligent (putting them into the same category as humans), and machines only get faster and more efficient, isn’t it reasonable to assume that they will solve the issues that humans have struggled with for ages?

    But how can a neural network save us if it can’t even distinguish monkeys from humans?

    Thy Kingdom Come 2.0

    The story of AI is a technology narrative only at first glance. While it does depend on technology and technological progress, faster processors, and cleverer software libraries (ironically written and designed by human beings), it is really a story about automation, biases and implicit structures of power.

    Technologists, who have traditionally been very focused on the scientific method, on verifiable processes and repeatable experiments, have recently opened themselves to more transcendent arguments: the proposition of a neural network, of an AI, creating a generic ideal language to express different human languages as one structure (Schuster, Johnson and Thorat 2016) is a first, very visible step of “upgrading” an automated process into something more than meets the eye. The multi-language translation network is not treated as an interesting statistical phenomenon in need of reflection by experts in the analyzed languages and the cultures using them, with regard to their structural and social similarities and the ways they influence(d) one another. Rather, it is presented as a miraculous device taking steps towards an ideal language that would have made Ludwig Wittgenstein blush.[4]

    But language and translation isn’t the only area in which these automated systems are being tested. Artificial intelligences are being trained to predict people’s future economic performance, their shopping profile, and their health. Other machines are deployed to predict crime hotspots, to distribute resources and to optimize production of goods.

    But while predicting crimes still makes most people uncomfortable, the idea that machines are the supposedly objective arbiters of goods and services is met with far less skepticism. Yet “goods and services” can include a great deal more than ordinary commercial transactions. If the machine gives one candidate a 33% chance of survival and the other one 45%, who should you give the heart transplant to?

    "Through AI, Human Might Literally Create God" (image source: video by Big Think (IBM) and pho.to)
    “Through AI, Human Might Literally Create God” (image source: video by Big Think (IBM) and pho.to)

    Computers cannot lie; they just act according to their programming. They don’t discriminate against people based on their gender, race or background. At least that’s the popular opinion, which very happily assigns computers and software systems the role of the objective arbiter of truth and fairness. People are biased, imperfect, and error-prone, so why shouldn’t we find the best processes and decision algorithms and put them into machines to dispense fair and optimal rulings efficiently and correctly? Isn’t that the utopian ideal of a fair and just society in which machines automate not just manual labor but also the decisions that create conflict and attract corruption and favoritism?

    The idea of computers as machines of truth is being challenged more and more each day, especially given new AI trends. In traditional algorithmic systems, implicit biases were hard-coded into the software. They could be analyzed, patched. Closely mirroring the scientific method, this ideal world view saw algorithms getting better, becoming fairer with every iteration. But how can implicit biases or discrimination be addressed when the internal structure of a system cannot be effectively analyzed or explained? When AI systems make predictions based on training data, who can check whether the original data wasn’t discriminatory, or whether it’s still suitable for use today?

    One original promise of computers—amongst others—had to do with accountability: code could be audited to legitimize its application within sociotechnical systems of power. But current AI trends have replaced this fundamental condition for the application of algorithms with belief.

    The belief is that simple simulacra of human neurons will—given enough processing power and learning data—evolve to be Superman. We can characterize this approach as a belief system because it has immunized itself against criticism: when an AI system fails horribly, creating or amplifying existing social discrimination or violence, the dogma of AI proponents tends to be that it just needs more training, needs to be fed more random data to create better internal structures, better “truths.” Faced with a world of inconsistencies and chaos, the hope is that some neural network, given enough time and data, will make sense of it, even though we might not be able to truly understand it.

    Religion is a complex topic, without one simple definition that can be applied to decide whether something is, in fact, a religion. Religions are complex social systems of behaviors, practices and social organization. Following Wittgenstein’s ideas about language games, it might not even be possible to define religion completely and sharply. But there are patterns that many popular religions share.

    Many do, for example, share the belief in some form of transcendental power such as a god or a pantheon or even more abstract conceptual entities. Religions also tend to provide a path towards achieving greater, previously unknowable truths, truths about the meaning of life, of suffering, of Good itself. Being social structures, there often is some form of hierarchy or a system to generate and determine status and power within the group. This can be a well-defined clergy or less formal roles based on enlightenment, wisdom, or charity.

    While this is in no way anywhere close to a comprehensive list of attributes of religions, these key aspects can help analyze the religiousness of the AI narrative.

    Singulatarianism

    Here I want to focus on one very specific, influential sub-group within the whole AI movement. And no other group within tech displays religious structure more explicitly than the singulatarians.

    Singulatarians believe that the creation of adaptable AI systems will spark a rapid and ever increasing growth in these systems’ capabilities. This “runaway reaction” of cycles of self-improvement will lead to one or more artificial super-intelligences surpassing all human mental and cognitive capabilities. This point is called “the Singularity,” which will be—according to singulatarians—followed by a phase of extremely rapid technological developments whose speed and structure will be largely incomprehensible to human consciousness. At this point the AI(s) will (and, according to most singulatarians, shall) take control of most aspects of society. While the possibility of the super-AI taking over by force always lingers in the back of singulatarians’ minds, the dominant position is that humans will and should hand over power to the AI for the good of the people, for the good of society.

    Here we see singulatarianism taking the idea that computers and software are machines of truth to its extreme. Whether it’s the distribution of resources and wealth, or the structure of the law and regulation, all complex questions are reduced to a system of equations that an AI will solve perfectly, or at least so close to perfectly that human beings might not even understand said perfection.

    According to the “gospel” as taught by the many proponents of the Singularity, the explosive growth in technology will provide machines that people can “upload” their consciousness to, thus providing human beings with durable, replaceable bodies. The body, and with it death itself, are supposedly being transcended, creating everlasting life in the best of all possible worlds watched over by machines of loving grace, at least in theory.

    While the singularity has existed as an idea (if not the name) since at least the 1950s, only recently did singulatarians gain “working prototypes.” Trained AI systems are able to achieve impressive cognitive feats even today and the promise of continuous improvement that’s—seemingly—legitimized by references to Moore’s Law makes this magical future almost inevitable.

    It’s very obvious how the Singularity can be, no, must be characterized as a religious idea: it presents an ersatz-god in the form of a super-AI that is beyond all human understanding and reasoning. Quoting Ray Kurzweil from his The Age of Spiritual Machines: “Once a computer achieves human intelligence it will necessarily roar past it” (Kurzweil 1999). Kurzweil insists that surpassing human capabilities is a necessity. Computers are the newborn gods of silicon and code that—once awakened—will leave us, its makers, in the dust. It’s not a question of human agency but a law of the universe, a universal truth. (Not) coincidentally Kurzweil’s own choice of words in this book is deeply religious, starting with the title of the book.

    With humans therefore unable to challenge an AI’s decisions, human beings’ goal is to work within the world as defined and controlled by the super-AI. The path to enlightenment lies in accepting the super-AI and in helping every form of scientific progress along, so as to finally achieve everlasting life through digital uploads of consciousness onto machines. Again quoting Kurzweil: “The ethical debates are like stones in a stream. The water runs around them. You haven’t seen any biological technologies held up for one week by any of these debates” (Kurzweil 2003). Ethical debates are, in Kurzweil’s perception, fundamentally pointless, with the universe and technology-as-god necessarily moving past them—regardless of what the result of such debates might ever be. Technology transcends every human action, every decision, every wish. Thy will be done.

    Because the intentions and reasoning of the super-AI being are opaque to human understanding, society will need people to explain, rationalize, and structure the AI’s plans for the people. The high-priests of the super-AI (such as Ray Kurzweil) are already preparing their churches and sermons.

    Not every proponent of AI goes as far as the singulatarians do. But certain motifs keep appearing even in supposedly objective and scientific articles about AI, the artificial control system for (parts of) human society probably being the most popular: AIs are supposed to distribute power in smart grids, for example (Qudaih and Mitani 2011), or to decide fully automatically where police should focus their attention (Perry et al. 2013). The second example (usually referred to as “predictive policing”) probably illustrates the problem best: all the training data used to build the models that are supposed to help police become more “efficient” is soaked in structural racism and violence. A police force trained on data that always labels people of color as suspect will keep on seeing innocent people of color as suspect.
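
    A toy simulation makes the feedback loop explicit; every number below is invented for illustration. Two districts share the same underlying crime rate, but one starts with more recorded incidents because it was historically over-policed:

    ```python
    # Patrols are allocated in proportion to past records, and more patrols
    # produce more records -- regardless of the identical underlying reality.
    recorded = {"A": 100, "B": 50}    # biased historical data
    true_rate = {"A": 1.0, "B": 1.0}  # the same real crime rate in both districts

    for year in range(5):
        total = sum(recorded.values())
        patrol_share = {d: n / total for d, n in recorded.items()}  # the "model"
        for district in recorded:
            # More patrols mean more recorded incidents, so the initial bias
            # feeds straight back into next year's "predictions."
            recorded[district] += int(100 * patrol_share[district] * true_rate[district])
        print(year, recorded)

    # District A never stops receiving two-thirds of the attention; the
    # model's own output keeps "confirming" the bias it was trained on.
    ```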

    While there is value in automating certain dangerous or error-prone processes—such as driving cars, in order to protect human life or the environment—extending that strategy to society as a whole is a deeply problematic approach.

    The leap of faith that is required to truly believe not only in the potential but also in the reality of these super-powered AIs leaves behind not only the idea of human exceptionalism (which in itself might not even be too bad), but also the idea of politics as a social system of communication. When decisions are made automatically, without any way for people to understand the reasoning or to check the way power acts and potentially discriminates, there is no longer any political debate apart from whether to fall in line or to abolish the system altogether. The idea that politics is an equation to solve, that social problems have an optimal or maybe even a correct solution, is not only a naïve technologist’s dream but, in fact, a dangerous and toxic idea—one that makes the struggle of marginalized groups, and any political program that is not focused on optimizing[5] the status quo, unthinkable.

    Singulatarianism is the most extreme form, but much public discourse about AI is based on quasi-religious dogmas of the boundless realizable potential of AIs and life. These dogmas understand society as an engineering problem looking for an optimal solution.

    Daemons in the Digital Ether

    Software services on Unix systems are traditionally called “daemons,” a word from mythology that refers to god-like forces of nature. It’s an old throwaway programmer joke that, looking at today, seems like a precognition of sorts.

    Even if we accept that AI has religious properties, that it serves as a secular ersatz-religion for the STEM-oriented crowd, why should that be problematic?

    Marc Andreessen, venture capitalist and one of the louder proponents of the new religion, claimed in 2011 that “software is eating the world” (Andreessen 2011). And while statements about the present and future from VC leaders should always be taken with a grain of salt, given that they are probably pitching their latest investment, in this case Andreessen was right: software and automation are slowly swallowing ever more aspects of everyday life. The digitalization of even mundane actions and structures, the deployment of “smart” devices in private homes and the public sphere, and the reality of social life happening on technological platforms all help to give algorithmic systems more and more access to people’s lives and realities. Software is eating the world, and what it gnaws on, it standardizes, harmonizes, and structures in ways that ease further software integration.

    The world today is deeply cyber-physical. The separation of the digital and the “real” worlds, which sociologist Nathan Jurgenson fittingly called “digital dualism” (Jurgenson 2011), can these days be called an obvious fallacy. Virtual software systems, hosted “in the cloud,” define whether we will get health care, how much we’ll have to pay for a loan, and in certain cases even whether we may cross a border or not. These processes of power, which traditionally “ran on” social systems—on government organs, organizations, or maybe just individuals—are now moving into software agents, removing the risky, biased human factor, as well as checks and balances.

    The issue at hand is not the forming of a new tech-based religion itself. The problem emerges from the specific social group promoting it, from that group’s ignorance of what it is doing, and from the way the group and its paradigms and ideals are seen in the world. The problem is not the new religion but the way its supporters propose it as science.

    Science, technology, engineering, math—abbreviated as STEM—currently take center stage when it comes to education, but also when it comes to consulting the public on important matters. Scientists, technologists, engineers and mathematicians are not only building their own models in the lab but are creating and structuring the narratives that are up for debate. Science, as a tool to separate truth from falsehood, is always deeply political, even more so in a democracy. By defining the world and what is or is not, science does not just structure a society’s model of the world but also elevates its experts to high and esteemed social positions.

    With the digital turn transforming and changing so many aspects of everyday life, the creators and designers of digital tools are—in tandem with a society hungry for explanations of the ongoing economic, technological and social changes—forming their own privileged caste, a caste whose original defining characteristic was its focus on the scientific method.

    When AI morphed from idea or experiment into belief system, hackers, programmers, “data scientists,”[6] and software architects became the high priests of a religious movement that the public never identified and parsed as such. The public’s mental checks were circumvented by this hidden switch of categories. In Western democracies the public is trained to listen to scientists and experts in order to separate objective truth from opinion. Scientists are perceived as impartial, obligated only to the truth and the scientific method. Technologists and engineers inherited that perceived neutrality and objectivity, giving their public words a direct line into the public’s collective consciousness.

    The public does, on the other hand, have mental guards against “opinion” and “belief” in place, guards taught to each and every child in school from a very young age. Those categories are not irrelevant in public discourse—far from it—but the context they are evaluated in is different, more critical. This protection, this safeguard, is circumvented when supposedly objective technologists propose their personal tech-religion as fact.

    Automation has always both solved and created problems: products became easier, safer, quicker or mainly cheaper to produce, but people lost their jobs and often the environment suffered. In order to make a decision, in order to evaluate the good and bad aspects of automation, society always relied on experts analyzing these systems.

    Current AI trends turn automation into a religion, slowly transforming at least semi-transparent systems into opaque systems whose functionality and correctness can neither be verified nor explained. Calling these systems “intelligent” implies a certain level of agency, a kind of intentionality and personalization.[7] Automated systems whose neutrality and fairness are constantly implied and reaffirmed through ideas of godlike machines governing the world with trans-human intelligence are being blessed with agency and given power, removing the actual entities of power from the equation.

    But these systems have no agency. Meticulously trained in millions of iterations on carefully chosen and massaged data sets, these “intelligences” merely automate the application of the biases and values of the organizations developing and deploying them, as Cathy O’Neil illustrates in her book Weapons of Math Destruction:

    Here we see that models, despite their reputation for impartiality, reflect goals and ideology. When I removed the possibility of eating Pop-Tarts at every meal, I was imposing my ideology on the meals model. It’s something we do without a second thought. Our own values and desires influence our choices, from the data we choose to collect to the questions we ask. Models are opinions embedded in mathematics. (O’Neil 2016, 21)

    For many years, Facebook has refused all responsibility for the content on their platform and the way it is presented; the same goes for Google and its search products. Whenever problems emerge, it is “the algorithm” that “just learns from what people want.” AI systems serve as useful puppets doing their masters’ bidding without even requiring visible wires. Automated systems predicting areas of crime claim not to be racist despite targeting black people twice as often as white ones (Pulliam-Moore 2016). The technologist Maciej Cegłowski probably said it best: “Machine learning is like money laundering for bias.”
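
    The mechanics of that laundering are simple enough to sketch. What follows is a deliberately trivial, hypothetical illustration (the arrest counts, the neighborhood names, and the counting “model” are all invented) of how a “predictive” system trained on past arrest records simply routes patrols back to wherever arrests were already being made:

    ```python
    # Hypothetical sketch: a "predictive policing" model that only launders
    # its training data. All data and names here are invented.
    from collections import Counter

    # Historical arrest records: these reflect where police patrolled in the
    # past at least as much as where crime actually occurred.
    arrests = ["northside"] * 80 + ["southside"] * 40

    model = Counter(arrests)  # "training" is nothing more than counting

    def predict_patrol_targets(top_n=2):
        """Recommend neighborhoods for patrols, ranked by past arrest counts."""
        return [area for area, _ in model.most_common(top_n)]

    # The system sends patrols back where arrests were made, which produces
    # more arrests there, which the next round of "training" dutifully echoes.
    print(predict_patrol_targets())  # ['northside', 'southside']
    ```

    The feedback loop is the point: the output justifies more patrols in the already over-policed area, which generates more arrest records, which retrains the model into ever greater confidence in its own bias.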

    Amen

    The proponents of AI aren’t just selling their products and services. They are selling a society where they are in power, where they provide the exegesis for the gospel of what “the algorithm” wants: Kevin Kelly, founding executive editor of Wired magazine, leading technologist and evangelical Christian, even called his book on this issue What Technology Wants (Kelly 2011), imbuing technology itself with agency and a will. And all that without taking responsibility for it. Because progress and—in the end—the singularity are inevitable.

    But this development is not a conspiracy or an evil plan. It grew from a society desperately demanding answers and from scientists and technologists eagerly providing them; from deeply rooted cultural beliefs in the general positivity of technological progress; and from trust in the truth-creating powers of the artifacts the STEM sector produces.

    The answer to the issue of an increasingly powerful and influential social group hardcoding its biases into the software actually running our societies cannot be to turn back time and de-digitalize society. Digital tools and algorithmic systems can help a society create fairer, more transparent processes that are, in fact, not less but more accountable.

    But these developments will require a reevaluation of the positioning, status and reception of the tech and science sectors. The answer will require the development of social and political tools to observe, analyze and control the power wielded by the creators of the essential technical structures that our societies rely on.

    "Through AI, Human Might Literally Create God" (image source: video by Big Think (IBM) and pho.to)
    “Through AI, Human Might Literally Create God” (image source: video by Big Think (IBM) and pho.to)

    Current AI systems can be useful for very specific tasks, even in matters of governance. The key is to analyze, reflect on, and constantly evaluate the data used to train these systems; to integrate the perspectives of marginalized people, of people potentially affected negatively, from the very first steps of the training process; and to stop offloading responsibility for the actions of automated systems onto the systems themselves, instead of holding accountable the entities deploying them, the entities giving these systems actual power.

    Amen.

    _____

    tante (tante@tante.cc) is a political computer scientist living in Germany. His work focuses on sociotechnical systems and the technological and economic narratives shaping them. He has been published in WIRED, Spiegel Online, and VICE/Motherboard among others. He is a member of the other wise net work, otherwisenetwork.com.

    Back to the essay

    _____

    Notes

    [1] Moore’s Law describes the observation, first made by Intel co-founder Gordon Moore, that the number of transistors per square inch on integrated circuits doubles roughly every two years (or every 18 months, depending on which version of the law is cited).
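
    As a worked illustration of the doubling arithmetic (with T standing for whichever doubling period, 18 or 24 months, one prefers), the observation can be written as

    $$N(t) = N_0 \cdot 2^{t/T}$$

    so that, taking T = 2 years, a chip starting at 10^9 transistors would be expected to hold 2^5 = 32 times as many, roughly 3.2 × 10^10, after a decade.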

    [2] https://www.w3.org/RDF/.

    [3] Prolog is a logic programming language that expresses problems as logical clauses and computes answers by resolution over those clauses.

    [4] In the Philosophical Investigations (1953), Ludwig Wittgenstein argued against the idea that language corresponds to reality in any simple way. He used the concept of “language games” to illustrate that the meanings of language overlap and are defined by the individual use of language, rejecting the idea of an ideal, objective language.

    [5] Optimization always operates in relationship to a specific goal codified in the metric the optimization system uses to compare different states and outcomes with one another. “Objective” or “general” optimizations of social systems are therefore by definition impossible.
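
    A minimal sketch makes the footnote’s point concrete. The policies, metric names, and scores below are invented, and the “optimizer” is deliberately trivial; what it shows is that which policy comes out as “optimal” is decided entirely by the choice of metric:

    ```python
    # Hypothetical sketch: optimization is always relative to a codified metric.
    # Policies, metric names, and scores are invented for illustration.
    policies = {
        "policy_a": {"gdp_growth": 3.0, "income_equality": 0.2},
        "policy_b": {"gdp_growth": 1.0, "income_equality": 0.8},
    }

    def optimize(metric):
        """Return the policy that maximizes the chosen metric."""
        return max(policies, key=lambda name: policies[name][metric])

    print(optimize("gdp_growth"))       # policy_a
    print(optimize("income_equality"))  # policy_b: a different "optimum"
    ```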

    [6] We used to call them “statisticians.”

    [7] The creation of intelligence, of life, is a feat traditionally reserved for the gods of old. This is another link to religious world views, as well as a rejection of traditional religions, which is less than surprising in a subculture that makes up much of the fan base of popular atheists such as Richard Dawkins or Sam Harris. That the vocal atheist Sam Harris is himself an open supporter of the new Singularity religion is just the cherry on top of this inconsistency sundae.

    _____

    Works Cited

  • Zachary Loeb – All Watched Over By Machines (Review of Levine, Surveillance Valley)

    Zachary Loeb – All Watched Over By Machines (Review of Levine, Surveillance Valley)

    a review of Yasha Levine, Surveillance Valley: The Secret Military History of the Internet (PublicAffairs, 2018)

    by Zachary Loeb

    ~

    There is something rather precious about Google employees, and Internet users, who earnestly believe the “don’t be evil” line. Though those three words have often been taken to represent a sort of ethos, their primary function is as a steam vent – providing a useful way to allow building pressure to escape before it can become explosive. While “don’t be evil” is associated with Google, most of the giants of Silicon Valley have their own variations of this comforting ideological façade: Apple’s “think different,” Facebook’s talk of “connecting the world,” the smiles on the side of Amazon boxes. And when a revelation troubles this carefully constructed exterior – when it turns out Google is involved in building military drones, when it turns out that Amazon is making facial recognition software for the police – people react in shock and outrage. How could this company do this?!?

    What these revelations challenge is not simply the mythos surrounding particular tech companies, but the mythos surrounding the tech industry itself. After all, many people have their hopes invested in the belief that these companies are building a better, brighter future, and they are naturally taken aback when they are forced to reckon with stories that reveal how these companies are building the types of high-tech dystopias that science fiction has been warning us about for decades. And in this space there are some who seem eager to allow a new myth to take root: one in which the unsettling connections between big tech firms and the military-industrial complex are something new. But as Yasha Levine’s important new book, Surveillance Valley, deftly demonstrates, the history of the big tech firms, complete with its panoptic overtones, is thoroughly interwoven with the history of the repressive state apparatus. While many people may be at least nominally aware of the links between early computing, or the proto-Internet, and the military, Levine’s book reveals the depth of these connections and how they persist. As he provocatively puts it, “the Internet was developed as a weapon and remains a weapon today” (9).

    Thus, cases of Google building military drones, Facebook watching us all, and Amazon making facial recognition software for the police, need to be understood not as aberrations. Rather, they are business as usual.

    Levine begins his account with the war in Vietnam, and the origins of a part of the Department of Defense known as the Advanced Research Projects Agency (ARPA) – an outfit born of the belief that victory required the US to fight a high-tech war. ARPA’s technocrats earnestly believed “in the power of science and technology to solve the world’s problems” (23), and they were confident that the high-tech systems they developed and deployed (such as Project Igloo White) would allow the US to triumph in Vietnam. And though the US was not ultimately victorious in that conflict, the worldview of ARPA’s technocrats was, as was the linkage between the nascent tech sector and the military. Indeed, the tactics and techniques developed in Vietnam were soon to be deployed for dealing with domestic issues, “giving a modern scientific veneer to public policies that reinforced racism and structural poverty” (30).

    Much of the early history of computers, as Levine documents, is rooted in systems developed to meet military and intelligence needs during WWII – but the Cold War provided plenty of impetus for further military reliance on increasingly complex computing systems. And as fears of nuclear war took hold, computer systems (such as SAGE) were developed to surveil the nation and provide military officials with a steady flow of information. Along with the advancements in computing came the dispersion of cybernetic thinking, which treated humans as information-processing machines, not unlike computers, and helped advance a worldview wherein, given enough data, computers could make sense of the world. All that was needed was to feed more and more information into the computers – and intelligence agencies proved to be among the first groups interested in taking advantage of these systems.

    While the development of these systems of control and surveillance ran alongside attempts to market computers to commercial firms, Levine’s point is that it was not an either/or situation but a both/and: “computer technology is always ‘dual use,’ to be used in both commercial and military applications” (58) – and this split allows computer scientists and engineers who would be morally troubled by the “military applications” of their work to tell themselves that they work strictly on the commercial, or scientific, side. ARPANET, the famous forerunner of the Internet, was developed to connect computer centers at a variety of prominent universities. Reliant on Interface Message Processors (IMPs), the system routed messages through a variety of nodes, and if one node went down it would reroute the message through other nodes – it was a system of relaying information built to withstand a nuclear war.

    Though all manner of utopian myths surround the early Internet, and by extension its forerunner, Levine highlights that “surveillance was baked in from the very beginning” (75). Case in point: the largely forgotten CONUS Intel program, which gathered information on millions of Americans. By encoding this information on IBM punch cards, which were then fed into a computer, law enforcement groups and the army were able to access information not only regarding criminal activity, but also activities protected by the First Amendment. As news of these databases reached the public, they generated fears of a high-tech surveillance society, leading some Senators, such as Sam Ervin, to push back against the program. And in a foreshadowing of further things to come, “the army promised to destroy the surveillance files, but the Senate could not obtain definitive proof that the files were ever fully expunged” (87). Though there were concerns about the surveillance potential of ARPANET, its growing power was hardly checked, and more government agencies began building their own subnetworks (PRNET, SATNET). Yet, as they relied on different protocols, these networks could not connect to each other, until TCP/IP, “the same basic network language that powers the Internet today” (95), allowed them to do so.

    Yet surveillance of citizens, and public pushback against computerized control, is not the grand origin story that most people are familiar with when it comes to the Internet. Instead the story that gets told is one whereby a military technology is filtered through the sieve of a very selective segment of the 1960s counterculture to allow it to emerge with some rebellious credibility. This view, owing much to Stewart Brand, transformed the nascent Internet from a military technology into a technology for everybody “that just happened to be run by the Pentagon” (106). Brand played a prominent and public role in rebranding the computer, as well as those working on the computers – turning these cold calculating machines into doors to utopia, and portraying computer programmers and entrepreneurs as the real heroes of the counterculture. In the process the military nature of these machines disappeared behind a tie-dyed shirt, and the fears of a surveillance society were displaced by hip promises of total freedom. The government links to the network were further hidden as ARPANET slowly morphed into the privatized commercial system we know as the Internet. It may seem mind-boggling that the Internet was simply given away with “no real public debate, no discussion, no dissension, and no oversight” (121), but it is worth remembering that this was not the Internet we know. Rather it was how the myth of the Internet we know was built. A myth that combined, as was best demonstrated by Wired magazine, “an unquestioning belief in the ultimate goodness and rightness of markets and decentralized computer technology, no matter how it was used” (133).

    The shift from ARPANET to the early Internet to the Internet of today presents a steadily unfolding tale wherein the result is that, today, “the Internet is like a giant, unseen blob that engulfs the modern world” (169). And in terms of this “engulfing” it is difficult not to think of a handful of giant tech companies (Amazon, Facebook, Apple, eBay, Google) who are responsible for much of that. In the present Internet atmosphere people have become largely inured to the almost clichéd canard that “if you’re not paying, you are the product,” but what this represents is how people have, largely, come to accept that the Internet is one big surveillance machine. Of course, feeding information to the giants made a sort of sense: many people (at least early on) seem to have been genuinely taken in by Google’s “Don’t Be Evil” image, and they saw themselves as the beneficiaries of the fact that “the more Google knew about someone, the better its search results would be” (150). The key insight that firms like Google seem to have understood is that a lot can be learned about a person based on what they do online (especially when they think no one is watching) – what people search for, what sites people visit, what people buy. And most importantly, what these companies understand is that “everything that people do online leaves a trail of data” (169), and controlling that data is power. These companies “know us intimately, even the things that we hide from those closest to us” (171). ARPANET found itself embroiled in a major scandal, at its time, when it was revealed how it was being used to gather information on and monitor regular people going about their lives – and it may well be that “in a lot of ways” the Internet “hasn’t changed much from its ARPANET days. It’s just gotten more powerful” (168).

    But even as people have come to gradually accept, by their actions if not necessarily by their beliefs, that the Internet is one big surveillance machine – periodically events still puncture this complacency. Case in point: Edward Snowden’s revelations about the NSA, which splashed the scale of Internet-assisted surveillance across the front pages of the world’s newspapers. Reporting linked to the documents Snowden leaked revealed how “the NSA had turned Silicon Valley’s globe-spanning platforms into a de facto intelligence collection apparatus” (193), and these documents exposed “the symbiotic relationship between Silicon Valley and the US government” (194). And yet, in the ensuing brouhaha, Silicon Valley was largely able to paint itself as the victim. Levine attributes some of this to Snowden’s own libertarian political bent; as he became a cult hero amongst technophiles, cypher-punks, and Internet advocates, “he swept Silicon Valley’s role in Internet surveillance under the rug” (199), while advancing a libertarian belief in “the utopian promise of computer networks” (200) similar to that professed by Stewart Brand. In many ways Snowden appeared as the perfect heir apparent to the early techno-libertarians, especially as he (like them) focused less on mass political action and instead doubled down on the idea that salvation would come through technology. And Snowden’s technology of choice was Tor.

    While Tor may project itself as a solution to surveillance, and be touted as such by many of its staunchest advocates, Levine casts doubt on this. Noting that “Tor works only if people are dedicated to maintaining a strict anonymous Internet routine,” one consisting of dummy e-mail accounts and all transactions carried out in Bitcoin, Levine suggests that what Tor offers is “a false sense of privacy” (213). Levine describes the roots of Tor in an original need to provide government operatives with an ability to access the Internet, in the field, without revealing their true identities; and in order for Tor to be effective (and not simply signal that all of its users are spies and soldiers) the platform needed to expand its user base: “Tor was like a public square—the bigger and more diverse the group assembled there, the better spies could hide in the crowd” (227).

    Though Tor had spun off as an independent non-profit, it remained reliant for much of its funding on the US government, a matter which Tor aimed to downplay through emphasizing its radical activist user base and by forming close working connections with organizations like WikiLeaks that often ran afoul of the US government. And in the figure of Snowden, Tor found a perfect public advocate, who seemed to be living proof of Tor’s power – after all, he had used it successfully. Yet, as the case of Ross Ulbricht (the “Dread Pirate Roberts” of Silk Road notoriety) demonstrated, Tor may not be as impervious as it seems – researchers at Carnegie Mellon University “had figured out a cheap and easy way to crack Tor’s super-secure network” (263). To further complicate matters Tor had come to be seen by the NSA “as a honeypot,” to the NSA “people with something to hide” were the ones using Tor and simply by using it they were “helping to mark themselves for further surveillance” (265). And much of the same story seems to be true for the encrypted messaging service Signal (it is government funded, and less secure than its fans like to believe). While these tools may be useful to highly technically literate individuals committed to maintaining constant anonymity, “for the average users, these tools provided a false sense of security and offered the opposite of privacy” (267).

    The central myth of the Internet frames it as an anarchic utopia built by optimistic hippies hoping to save the world from intrusive governments through high-tech tools. Yet, as Surveillance Valley documents, “computer technology can’t be separated from the culture in which it is developed and used” (273). Surveillance is at the core of, and has always been at the core of, the Internet – whether the all-seeing eye be that of the government agency, or the corporation. And this is a problem that, alas, won’t be solved by crypto-fixes that present technological solutions to political problems. The libertarian ethos that undergirds the Internet works well for tech giants and cypher-punks, but a real alternative is not a set of tools that allow a small technically literate gaggle to play in the shadows, but a genuine democratization of the Internet.

     

    *

     

    Surveillance Valley is not interested in making friends.

    It is an unsparing look at the origins of, and the current state of, the Internet. And it is a book that has little interest in helping to prop up the popular myths that sustain the utopian image of the Internet. It is a book that should be read by anyone who was outraged by the Facebook/Cambridge Analytica scandal, anyone who feels uncomfortable about Google building drones or Amazon building facial recognition software, and frankly by anyone who uses the Internet. At the very least, after reading Surveillance Valley many of those aforementioned situations seem far less surprising. While there are no shortage of books, many of them quite excellent, that argue that steps need to be taken to create “the Internet we want,” in Surveillance Valley Yasha Levine takes a step back and insists “first we need to really understand what the Internet really is.” And it is not as simple as merely saying “Google is bad.”

    While much of the history that Levine unpacks won’t be new to historians of technology, or those well versed in critiques of technology, Surveillance Valley brings many, often separate strands into one narrative. Too often the early history of computing and the Internet is placed in one silo, while the rise of the tech giants is placed in another – by bringing them together, Levine is able to show the continuities and allow them to be understood more fully. What is particularly noteworthy in Levine’s account is his emphasis on early pushback to ARPANET, an often forgotten series of occurrences that certainly deserves a book of its own. Levine describes students in the 1960s who saw in early ARPANET projects “a networked system of surveillance, political control, and military conquest being quietly assembled by diligent researchers and engineers at college campuses around the country,” and as Levine provocatively adds, “the college kids had a point” (64). Similarly, Levine highlights NBC reporting from 1975 on the CIA and NSA spying on Americans by utilizing ARPANET, and on the efforts of Senators to rein in these projects. Though Levine is not presenting, nor is he claiming to present, a comprehensive history of pushback and resistance, his account makes it clear that liberatory claims regarding technology were often met with skepticism. And much of that skepticism proved to be highly prescient.

    Yet this history of resistance has largely been forgotten amidst the clever contortions that shifted the Internet’s origins, in the public imagination, from counterinsurgency in Vietnam to the counterculture in California. Though the part of Surveillance Valley that will likely cause the most contention is Levine’s chapters on crypto-tools like Tor and Signal, perhaps his greatest heresy is his refusal to pay homage to the early tech-evangels like Stewart Brand and Kevin Kelly. While the likes of Brand, and John Perry Barlow, are often celebrated as visionaries whose utopian blueprints have been warped by power-hungry tech firms, Levine is frank in framing such figures as long-haired libertarians who knew how to spin a compelling story in such a way that made empowering massive corporations seem like a radical act. And this is in keeping with one of the major themes that runs, often subtly, through Surveillance Valley: the substitution of technology for politics. Thus, in his book, Levine does not only frame the Internet as disempowering insofar as it runs on surveillance and relies on massive corporations, but he emphasizes how the ideological core of the Internet focuses all political action on technology. To every social, economic, and political problem the Internet presents itself as the solution – but Levine is unwilling to go along with that idea.

    Those who were familiar with Levine’s journalism before he penned Surveillance Valley will know that much of his reporting has covered crypto-tech, like Tor, and similar privacy technologies. Indeed, in a certain respect, Surveillance Valley can be read as an outgrowth of that reporting. And it is also important to note, as Levine does in the book, that he did not make himself many friends in the crypto community by taking on Tor. It is doubtful that cypherpunks will like Surveillance Valley, but it is just as doubtful that they will bother to actually read it and engage with Levine’s argument or the history he lays out. This is a shame, for it would be a mistake to frame Levine’s book as an attack on Tor (or on those who work on the project). Levine’s comments on Tor are in keeping with the thrust of the larger argument of his book: such privacy tools are high-tech solutions to problems created by high-tech society, which mainly serve to keep people hooked into all those high-tech systems. And he questions the politics of Tor, noting that “Silicon Valley fears a political solution to privacy. Internet Freedom and crypto offer an acceptable solution” (268). Or, to put it another way, Tor is kind of like shopping at Whole Foods – people who are concerned about their food are willing to pay a bit more to get their food there, but in the end shopping there lets people feel good about what they’re doing without genuinely challenging the broader system. And, of course, now Whole Foods is owned by Amazon. The most important element of Levine’s critique of Tor is not that it doesn’t work – for some (like Snowden) it clearly does – but that most users do not know how to use it properly (and are unwilling to lead a genuinely full-crypto lifestyle) and so it fails to offer more than a false sense of security.

    Thus, to say it again, Surveillance Valley isn’t particularly interested in making a lot of friends. With one hand it brushes away the comforting myths about the Internet, and with the other it pushes away the tools that are often touted as the solution to many of the Internet’s problems. And in so doing Levine takes on a variety of technoculture’s sainted figures like Stewart Brand, Edward Snowden, and even organizations like the EFF. While Levine clearly doesn’t seem interested in creating new myths, or propping up new heroes, it seems as though he somewhat misses an opportunity here. Levine shows how some groups and individuals had warned about the Internet back when it was still ARPANET, and a greater emphasis on such people could have helped create a better sense of alternatives and paths that were not taken. Levine notes near the book’s end that, “we live in bleak times, and the Internet is a reflection of them: run by spies and powerful corporations just as our society is run by them. But it isn’t all hopeless” (274). Yet it would be easier to believe the “isn’t all hopeless” sentiment had the book provided more analysis of successful instances of pushback. While it is respectable that Levine puts forward democratic (small d) action as the needed response, this comes as the solution at the end of a lengthy work that has discussed how the Internet has largely eroded democracy. What Levine’s book points to is that it isn’t enough to just talk about democracy; one needs to recognize that some technologies are democratic while others are not. And though we are loath to admit it, perhaps the Internet (and computers) simply are not democratic technologies. Sure, we may be able to use them for democratic purposes, but that does not make the technologies themselves democratic.

    Surveillance Valley is a troubling book, but it is an important book. It smashes comforting myths and refuses to leave its readers with simple solutions. What it demonstrates in stark relief is that surveillance and unnerving links to the military-industrial complex are not signs that the Internet has gone awry, but signs that the Internet is functioning as intended.

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently working towards a PhD in the History and Sociology of Science department at the University of Pennsylvania. His research areas include media refusal and resistance to technology, ideologies that develop in response to technological change, and the ways in which technology factors into ethical philosophy – particularly in regard to the way in which Jewish philosophers have written about ethics and technology. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck, and is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay

  • Data and Desire in Academic Life

    Data and Desire in Academic Life

    a review of Erez Aiden and Jean-Baptiste Michel, Uncharted: Big Data as a Lens on Human Culture (Riverhead Books, reprint edition, 2014)
    by Benjamin Haber
    ~

    On a recent visit to San Francisco, I found myself trying to purchase groceries when my credit card was declined. As the cashier was telling me this news, and before I really had time to feel any particular way about it, my leg vibrated. I had received a text: “Chase Fraud-Did you use card ending in 1234 for $100.40 at a grocery store on 07/01/2015? If YES reply 1, NO reply 2.” After replying “yes” (which was recognized even though I failed to follow instructions), I swiped my card again and was out the door with my food. Many have probably had a similar experience: most if not all credit card companies automatically track purchases for a variety of reasons, including fraud prevention, the tracking of illegal activity, and the offering of tailored financial products and services. As I walked out of the store, for a moment, I felt the power of “big data,” how real-time consumer information can be read as a predictor of a stolen card in less time than I had to consider why my card had been declined. It was a too-rare moment of reflection on those networks of activity that modulate our life chances and capacities, mostly below and above our conscious awareness.

    And then I remembered: didn’t I buy my plane ticket with the points from that very credit card? And in fact, hadn’t I used that card on multiple occasions in San Francisco for purchases not much less than the amount my groceries cost? While the near-instantaneous text provided reassurance before I could consciously recognize my anxiety, the automatic card decline was likely not a sophisticated real-time, data-enabled prescience but a rather blunt instrument, flagging the transaction on the basis of two data points: distance from home and amount of purchase. In fact, there is plenty of evidence to suggest that the gap between data collection and processing, between metadata and content, and between the current reality of data and its speculative future is still quite large. While Target’s pregnancy-predicting algorithm was a journalistic sensation, the more mundane computational confusion that has Gmail constantly serving me advertisements for trade and business schools shows the striking gap between the possibilities of what is collected and the current landscape of computationally prodded behavior. The text from Chase, your Klout score, the vibration of your FitBit, or the probabilistic genetic information from 23andMe are all primarily affective investments in mobilizing a desire for data’s future promise. These companies and others are opening up new ground for discourse via affect, creating networked infrastructures for modulating the body and social life.
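
    For illustration, a blunt two-data-point rule of the kind suspected above can be sketched in a few lines. This is a hypothetical reconstruction, not Chase’s actual logic; the thresholds, field names, and coordinates are invented:

    ```python
    # Hypothetical sketch of a blunt fraud rule: flag a transaction on just
    # two data points, distance from home and purchase amount. Thresholds,
    # coordinates, and field names are invented for illustration.
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometers."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 6371 * 2 * asin(sqrt(a))

    def flag_transaction(txn, home, max_km=500, max_amount=75.0):
        """Flag if the purchase is both far from home and unusually large."""
        far = haversine_km(txn["lat"], txn["lon"], home["lat"], home["lon"]) > max_km
        large = txn["amount"] > max_amount
        return far and large

    home = {"lat": 40.7, "lon": -74.0}                    # cardholder's home city
    txn = {"lat": 37.8, "lon": -122.4, "amount": 100.40}  # San Francisco grocery run
    print(flag_transaction(txn, home))  # True: send the fraud text
    ```

    Nothing about such a rule requires real-time prescience; two lookups and two comparisons suffice, which is precisely why it misfires on an ordinary out-of-town grocery run.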

    I was thinking about this while reading Uncharted: Big Data as a Lens on Human Culture, a love letter to the power and utility of algorithmic processing of the words in books. Though ostensibly about the Google Ngram Viewer, a neat if one-dimensional tool to visualize the word frequency of a portion of the books scanned by Google, Uncharted is also unquestionably involved in the mobilization of desire for quantification. Though about the academy rather than financialization, medicine, sports or any other field being “revolutionized” by big data, its breathless boosterism and obligatory cautions are emblematic of the emergent datafied spirit of capitalism, a celebratory “coming out” of the quantifying systems that constitute the emergent infrastructures of sociality.
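
    Mechanically, the computation at the heart of an n-gram viewer is modest: the share of a given year’s tokens that a word represents. Here is a toy sketch, with an obviously invented two-“book” corpus standing in for Google’s scanned millions:

    ```python
    # Toy sketch of an n-gram viewer's core computation: a word's relative
    # frequency per year. The corpus here is invented and absurdly small.
    from collections import Counter

    corpus = {
        1900: "the engine of progress the engine of empire",
        2000: "the network of progress the data of empire data data",
    }

    def ngram_frequency(word):
        """Yearly frequency of `word` as a share of all tokens in that year."""
        return {
            year: Counter(text.split())[word] / len(text.split())
            for year, text in corpus.items()
        }

    print(ngram_frequency("data"))  # {1900: 0.0, 2000: 0.3}
    ```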

    While published fairly recently, in 2013, Uncharted already feels dated in its strangely muted engagement with the variety of serious objections to sprawling corporate and state run data systems in the post-Snowden, post-Target, post-Ashley Madison era (a list that will always be in need of update). There is still the dazzlement about the sheer magnificent size of this potential new suitor—“If you wrote out all five zettabytes that humans produce every year by hand, you would reach the core of the Milky Way” (11)—all the more impressive when explicitly compared to the dusty old technologies of ink and paper. Authors Erez Aiden and Jean-Baptiste Michel are floating in a world of “simple and beautiful” formulas (45), “strange, fascinating and addictive” methods (22), producing “intriguing, perplexing and even fun” conclusions (119) in their drive to colonize the “uncharted continent” (76) that is the English language. The almost erotic desire for this bounty is made more explicit in their tongue-in-cheek characterization of their meetings with Google employees as an “irresistible… mating dance” (22):

    Scholars and scientists approach engineers, product managers, and even high-level executives about getting access to their companies’ data. Sometimes the initial conversation goes well. They go out for coffee. One thing leads to another, and a year later, a brand-new person enters the picture. Unfortunately this person is usually a lawyer. (22)

    There is a lot to unpack in these metaphors, the recasting of academic dependence on data systems designed and controlled by corporate entities as a sexy new opportunity for scholars and scientists. There are important conversations to be had about these circulations of quantified desire: about who gets access to this kind of data, the ethics of working with companies that have an existential interest in profit and shareholder return, and the cultural significance of wrapping business transactions in the language of heterosexual coupling. Here, however, I am mostly interested in the real allure that this passage and others speak to, and the attendant fear that mostly whispers, at least in a book written by Harvard PhDs with TED talks to give.

    For most academics in the social sciences and the humanities, “big data” is a term more likely to get caught in the throat than to inspire butterflies in the stomach. While Aiden and Michel certainly acknowledge that old-fashioned textual analysis (50) and theory (20) will have a place in this brave new world of charts and numbers, they provide a number of contrasts to suggest the relative poverty of even the most brilliant scholar in the face of big data. One hypothetical in particular, which is not directly answered but is strongly implied, spoke to my discipline specifically:

    Consider the following question: Which would help you more if your quest was to learn about contemporary human society—unfettered access to a leading university’s department of sociology, packed with experts on how societies function, or unfettered access to Facebook, a company whose goal is to help mediate human social relationships online? (12)

    The existential threat at the heart of this question was catalyzed for many people in Roger Burrows and Mike Savage’s 2007 “The Coming Crisis of Empirical Sociology,” an early canary singing the worry of what Nigel Thrift has called “knowing capitalism” (2005). Knowing capitalism speaks to the ways that capitalism has begun to take seriously the task of “thinking the everyday” (1) by embedding information technologies within “circuits of practice” (5). For Burrows and Savage these practices can and should be seen as a largely unrecognized world of sophisticated and profit-minded sociology that makes the quantitative tools of academics look like “a very poor instrument” in comparison (2007: 891).

    Indeed, as Burrows and Savage note, the now ubiquitous social survey is a technology invented by social scientists, folks who were once seen as strikingly innovative methodologists (888). Despite ever more sophisticated statistical treatments, however, the now more than forty-year-old social survey remains the heart of social scientific quantitative methodology in a radically changed context. And while declining response rates, a constraining nation-based framing and competition from privately funded surveys have all decreased the efficacy of academic survey research (890), nothing has threatened the discipline like the embedded and “passive” collecting technologies that fuel big data. And with these methodological changes come profound epistemological ones: questions of how, when, why and what we know of the world. These methods are inspiring changing ideas of generalizability and new expectations around the temporality of research. Does it matter, for example, that studies have questioned the accuracy of the FitBit? The growing popularity of these devices suggests at the very least that sociologists should not count on empirical rigor to save them from irrelevance.

    As academia reorganizes around the speculative potential of digital technologies, there is an increasing pile of capital available to those academics able to translate between the discourses of data capitalism and a variety of disciplinary traditions. And the lure of this capital is perhaps strongest in the humanities, whose scholars have been disproportionately affected by state economic retrenchment on education spending that has increasingly prioritized quantitative, instrumental, and skill-based majors. The increasing urgency in the humanities to use bigger and faster tools is reflected in the surprisingly minimal hand wringing over the politics of working with companies like Facebook, Twitter and Google. If there is trepidation in the N-grams project recounted in Uncharted, it is mostly coming from Google, whose lawyers and engineers have little incentive to bother themselves with the politically fraught, theory-driven, Institutional Review Board slow lane of academic production. The power imbalance of this courtship leaves those academics who decide to partner with these companies at the mercy of their epistemological priorities and, as Uncharted demonstrates, the cultural aesthetics of corporate tech.

    This is a vision of the public humanities refracted through the language of public relations and the “measurable outcomes” culture of the American technology industry. Uncharted has taken to heart the power of (re)branding to change the valence of your work: Aiden and Michel would like you to call their big-data-inflected historical research “culturomics” (22). In addition to a hopeful attempt to coin a buzzy new word for the digital, culturomics linguistically brings the humanities closer to the supposed precision, determination and quantifiability of economics. And lest you think this multivalent bringing of culture to capital—or rather the renegotiation of “the relationship between commerce and the ivory tower” (8)—is unseemly, Aiden and Michel provide an origin story to show how futile this separation has been.

    But the desire for written records has always accompanied economic activity, since transactions are meaningless unless you can clearly keep track of who owns what. As such, early human writing is dominated by wheeling and dealing: a menagerie of bets, chits, and contracts. Long before we had the writings of prophets, we had the writing of profits. (9)

    And no doubt this is true: culture is always already bound up with economy. But the full-throated embrace of culturomics is not a vision of interrogating and reimagining the relationship between economic systems, culture and everyday life;[1] rather it signals the acceptance of the idea of culture as a transactional business model. While Google has long imagined itself as a company with a social mission, it is a publicly held company that will be punished by investors if it neglects its bottom line of increasing the engagement of eyeballs on advertisements. The N-gram Viewer does not make Google money, but it perhaps increases public support for its larger book-scanning initiative, which Google clearly sees as a valuable enough project to invest many years of labor and millions of dollars to defend in court.

    This vision of the humanities is transactional in another way as well. While much of Uncharted is an attempt to demonstrate the profound, game-changing implications of the N-gram Viewer, there is a distinctly small-questions, cocktail-party-conversation feel to this type of inquiry, which seems, ironically, more useful in preparing ABD humanities and social science PhDs for jobs in the service industry than in training them for the future of academia. It might be more precise to say that the N-gram Viewer is architecturally designed for small answers rather than small questions. All is resolved through linear projection, a winner and a loser, or stasis. This is a vision of research where the precise nature of the mediation (what books have been excluded? what is the effect of treating all books as equally revealing of human culture? what about those humans whose voices have been systematically excluded from the written record?) is ignored, and where the actual analysis of books, and indeed the books themselves, are black-boxed from the researcher.

    Uncharted speaks to the perils of doing research under the cloud of existential erasure and to the failure of academics to lead with a different vision of the possibilities of quantification. Collaborating with the wealthy corporate titans of data collection requires an acceptance of these companies’ own existential mandate: make tons of money by monetizing a dizzying array of human activities while speculatively reimagining the future to attempt to maintain that cash flow. For Google, this is a vision where all activities, not just “googling,” are collected and analyzed in a seamlessly updating centralized system. Cars, thermostats, video games, photos and businesses are integrated not for the public benefit but because of the power of scale to sell or rent or advertise products. Data is promised as a deterministic balm for the unknowability of life, and Google’s participation in academic research gives it the credibility to be your corporate (sen.se) mother. What, might we imagine, are the speculative possibilities of networked data not beholden to shareholder value?
    _____

    Benjamin Haber is a PhD candidate in Sociology at CUNY Graduate Center and a Digital Fellow at The Center for the Humanities. His current research is a cultural and material exploration of emergent infrastructures of corporeal data through a queer theoretical framework. He is organizing a conference called “Queer Circuits in Archival Times: Experimentation and Critique of Networked Data” to be held in New York City in May 2016.

    Back to the essay

    _____

    Notes

    [1] A project desperately needed in academia, where terms like “neoliberalism,” “biopolitics” and “late capitalism” are more often than not used briefly at the end of a short section on implications rather than being given the critical attention and nuanced intentionality that they deserve.

    Works Cited

    Savage, Mike, and Roger Burrows. 2007. “The Coming Crisis of Empirical Sociology.” Sociology 41 (5): 885–99.

    Thrift, Nigel. 2005. Knowing Capitalism. London: SAGE.

  • Flat Theory

    Flat Theory

    By David M. Berry
    ~

    The world is flat.[1] Or perhaps better, the world is increasingly “layers.” Certainly the augmediated imaginaries of the major technology companies are now structured around a post-retina vision of mediation made possible and informed by the digital transformations ushered in by mobile technologies – whether smartphones, wearables, beacons or nearables – an internet of places and things. These imaginaries provide a sense of place, as well as a sense of management, for the complex real-time streams of information and data broken into shards and fragments of narrative, visual culture, social media and messaging. Turned into software, they reorder and re-present information, decisions and judgment, amplifying the sense and senses of (neoliberal) individuality whilst reconfiguring what it means to be a node in the network of post-digital capitalism. These new imaginaries serve as abstractions of abstractions, ideologies of ideologies, a prosthesis to create a sense of coherence and intelligibility in highly particulate computational capitalism (Berry 2014). To explore the experimentation of the programming industries in relation to this, it is useful to examine the design thinking and material abstractions that are becoming hegemonic at the level of the interface.

    Two new competing computational interface paradigms are now deployed in the latest versions of Apple’s and Google’s operating systems, but more notably as regulatory structures to guide the design and strategy related to corporate policy. The first is “flat design,” which has been introduced by Apple through iOS 8 and OS X Yosemite as a refresh of the aging operating systems’ human-computer interface guidelines, essentially stripping the operating system of historical baggage related to techniques of design that disguised the limitations of a previous generation of technology, both in terms of screen and processor capacity. It is important to note, however, that Apple avoids talking about “flat design” as its design methodology, preferring to talk through its platforms’ specificity, that is, about iOS’s design or OS X’s design. The second is “material design,” which was introduced by Google into its Android L, now Lollipop, operating system and which also sought to bring some sense of coherence to a multiplicity of Android devices, interfaces, OEMs and design strategies. More generally, “flat design” is “the term given to the style of design in which elements lose any type of stylistic characters that make them appear as though they lift off the page” (Turner 2014). As Apple argues, one should “reconsider visual indicators of physicality and realism” and think of the user interface as “play[ing] a supporting role,” that is, techniques of mediation through the user interface should aim to provide a new kind of computational realism that presents “content” as ontologically prior to, or separate from, its container in the interface (Apple 2014). This is in contrast to “rich design,” which has been described as “adding design ornaments such as bevels, reflections, drop shadows, and gradients” (Turner 2014).

    I want to explore these two main paradigms – and to a lesser extent the flat-design methodology represented in Windows 8 and the, since renamed, Metro interface – through the notion of a comprehensive attempt by both Apple and Google to produce a rich and diverse umwelt, or ecology, linked through what Apple calls “aesthetic integrity” (Apple 2014). This is both a response to their growing landscape of devices, platforms, systems, apps and policies, and an attempt to provide some sense of operational strategy in relation to computational imaginaries. Essentially, both approaches share an axiomatic approach to conceptualizing the building of a system of thought, in other words a primitivist predisposition, which draws from both a neo-Euclidian model of geons (for Apple) and a notion of intrinsic value or neo-materialist formulations of essential characteristics (for Google). That is, they encapsulate a version of what I am calling here flat theory. Both of these companies are trying to deal with the problematic of multiplicities in computation, and the requirement that multiple data streams, notifications and practices have to be combined and managed within the limited geography of the screen. In other words, both approaches attempt to create what we might call aggregate interfaces by combining techniques of layout, montage and collage onto computational surfaces (Berry 2014: 70).

    The “flat turn” has not happened in a vacuum, however, and is the result of a new generation of computational hardware, smart silicon design and retina screen technologies. This was driven in large part by the mobile device revolution, which has not only transformed the taken-for-granted assumptions of historical computer interface design paradigms (e.g. WIMP) but also the subject position of the user, particularly structured through the Xerox/Apple notion of single-click functional design of the interface. Indeed, one of the striking features of the new paradigm of flat design is that it is a design philosophy about multiplicity and multi-event. The flat turn is therefore about modulation, not enclosure as such; indeed it is a truly processual form that constantly shifts and changes, and in many ways acts as a signpost for the future interfaces of real-time algorithmic and adaptive surfaces and experiences. The structure of control for flat-design interfaces follows that of the control society: “short-term and [with] rapid rates of turnover, but also continuous and without limit” (Deleuze 1992). To paraphrase Deleuze: Humans are no longer in enclosures, certainly, but everywhere humans are in layers.


    Apple uses a series of concepts to link its notion of flat design, including aesthetic integrity, consistency, direct manipulation, feedback, metaphors, and user control (Apple 2014). The haptic experience of this new flat user interface has been described as building on the experience of “touching glass” to develop the “first post-Retina (Display) UI (user interface)” (Cava 2013). This is the notion of layered transparency, or better, layers of glass upon which the interface elements are painted through a logical internal structure of Z-axis layers. This laminate structure enables meaning to be conveyed through the organization of the Z-axis, both in terms of content and in terms of placing that content within a process or within the user interface system itself.

    Google, similarly, has reorganized its computational imaginary around a flattened, layered paradigm of representation through the notion of material design. Matias Duarte, Google’s Vice President of Design and a Chilean computer interface designer, declared that this approach uses the notion that the material “is a sufficiently advanced form of paper as to be indistinguishable from magic” (Bohn 2014). But it is magic with constraints and affordances built into it: “if there were no constraints, it’s not design — it’s art,” Google claims (see Interactive Material Design; Bohn 2014). Indeed, Google argues that the “material metaphor is the unifying theory of a rationalized space and a system of motion,” further arguing:

    The fundamentals of light, surface, and movement are key to conveying how objects move, interact, and exist in space and in relation to each other. Realistic lighting shows seams, divides space, and indicates moving parts… Motion respects and reinforces the user as the prime mover… [and together] They create hierarchy, meaning, and focus (Google 2014).

    This notion of materiality is a weird materiality inasmuch as Google “steadfastly refuse to name the new fictional material, a decision that simultaneously gives them more flexibility and adds a level of metaphysical mysticism to the substance. That’s also important because while this material follows some physical rules, it doesn’t create the ‘trap’ of skeuomorphism. The material isn’t a one-to-one imitation of physical paper, but instead it’s ‘magical’” (Bohn 2014). Google emphasises this connection, arguing that “in material design, every pixel drawn by an application resides on a sheet of paper. Paper has a flat background color and can be sized to serve a variety of purposes. A typical layout is composed of multiple sheets of paper” (Google Layout, 2014). The stress on material affordances, paper for Google and glass for Apple, is crucial to understanding their respective stances in relation to flat design philosophy.[2]

    • Glass (Apple): Translucency, transparency, opaqueness, limpidity and pellucidity.
    • Paper (Google): Opaque, cards, slides, surfaces, tangibility, texture, lighted, casting shadows.
    Paradigmatic Substances for Materiality

    In contrast to the layers of glass that inform the logics of transparency, opaqueness and translucency of Apple’s flat design, Google uses the notion of remediated “paper” as a digital material; that is, this “material environment is a 3D space, which means all objects have x, y, and z dimensions. The z-axis is perpendicularly aligned to the plane of the display, with the positive z-axis extending towards the viewer. Every sheet of material occupies a single position along the z-axis and has a standard 1dp thickness” (Google 2014). One might think, then, of Apple as painting on layers of glass, and Google as placing thin paper objects (material) upon a background of paper. However, a key difference lies in Google’s use of light and shadow, which enables the light source, located in a position similar to that of the user of the interface, to cast shadows of the material objects onto the objects and sheets of paper that lie beneath them (see Jitkoff 2014). Nonetheless, a laminate structure is key to the representational grammar that constitutes both of these platforms.
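
    The laminate grammar that both companies describe can be caricatured in a few lines of code. This is a hypothetical sketch, not either company’s API: sheets carry a position on the z-axis, painting order follows that axis, and, on Google’s model, a light source near the viewer lets higher sheets cast shadows on lower ones:

    ```python
    # Hypothetical sketch of the laminate structure described above. The
    # Sheet type and the shadow rule are invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Sheet:
        name: str
        z: float  # distance above the background plane, toward the viewer

    def render_order(sheets):
        """Paint back-to-front: lowest z first, so higher sheets overlay lower ones."""
        return [s.name for s in sorted(sheets, key=lambda s: s.z)]

    def casts_shadow_on(upper, lower):
        """With the light placed near the viewer, any higher sheet shades a lower one."""
        return upper.z > lower.z

    ui = [Sheet("background", 0.0), Sheet("card", 1.0), Sheet("dialog", 2.0)]
    print(render_order(ui))               # ['background', 'card', 'dialog']
    print(casts_shadow_on(ui[2], ui[1]))  # True: the dialog shades the card
    ```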

    Armin Hofmann, head of the graphic design department at the Schule für Gestaltung Basel (Basel School of Design), was instrumental in developing the graphic design style known as the Swiss Style. Designs from 1958 and 1959.

    Interestingly, both design strategies emerge from an engagement with, and reconfiguration of, design principles drawn from the Swiss style (sometimes called the International Typographic Style) (Ashghar 2014, Turner 2014).[3] This approach emerged in the 1940s, and

    mainly focused on the use of grids, sans-serif typography, and clean hierarchy of content and layout. During the 40’s and 50’s, Swiss design often included a combination of a very large photograph with simple and minimal typography (Turner 2014).

    The design grammar of the Swiss style has been combined with minimalism and the principle of "responsive design": that the interface should respond to the materiality and specificity of the device and to the context in which it is displayed. Minimalism is a "term used in the 20th century, in particular from the 1960s, to describe a style characterized by an impersonal austerity, plain geometric configurations and industrially processed materials" (MoMA 2014).

    Robert Morris: Untitled (Scatter Piece), 1968-69, felt, steel, lead, zinc, copper, aluminum, brass, dimensions variable; at Leo Castelli Gallery, New York. Photo Genevieve Hanson. All works © 2010 Robert Morris/Artists Rights Society (ARS), New York.

    Robert Morris, one of the principal artists of Minimalism and author of the influential Notes on Sculpture, used "simple, regular and irregular polyhedrons. Influenced by theories in psychology and phenomenology", works which, he argued, "established in the mind of the beholder 'strong gestalt sensation', whereby form and shape could be grasped intuitively" (MoMA 2014).[4]

    The implications of these two competing world-views are far-reaching, in that much of the world's initial contact, or touch points, with data services, real-time streams and computational power increasingly runs through the platforms controlled by these two companies. They are also deeply influential across the programming industries, and we see alternatives and multiple reconfigurations in relation to the challenge raised by the "flattened" design paradigms. That is, they both represent, if only in potentia, a power relation and, through this, an ideological veneer on computation more generally. Further, with the proliferation of computational devices – and the screenic imaginary associated with them in the contemporary computational condition – there appears a new logic which lies behind, justifies and legitimates these design methodologies.

    It seems to me that these new flat design philosophies, in the broad sense, produce an order of precepts and concepts that gives meaning and purpose not only to interactions with computational platforms but also, more widely, to everyday life. Flat design and material design are competing philosophies that offer alternative patterns of both creation and interpretation, with implications not only for interface design but, more broadly, for the ordering of concepts and ideas, and for the practices and experience of computational technologies broadly conceived. Another way to put this could be to think of these moves as a computational founding: the generation of, or argument for, an axial framework for building, reconfiguration and preservation.

    Indeed, flat design provides, and more importantly serves as, a translational or metaphorical heuristic for re-presenting the computational, while also teaching consumers and users how to use and manipulate new complex computational systems and stacks. In other words, in a striking visual technique flat design communicates the vertical structure of the computational stack, on which the Stack corporations are themselves constituted. It also begins to move beyond the specificity of the device as the privileged site of computational interface interaction from beginning to end: interface techniques are abstracted away from the device, for example through Apple's "Handoff" continuity framework, which potentially changes reading and writing practices in interesting ways and opens new use-cases for wearables and nearables.
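    To make the abstraction of interface state from the device concrete, here is a schematic TypeScript sketch of the continuity pattern in general; it is not Apple's actual Handoff API (which is exposed natively via NSUserActivity), and all names and the JSON transport are illustrative assumptions. The idea is simply that a small, serializable description of an in-progress activity is broadcast by one device and resumed by another.

```typescript
// A device-independent description of an in-progress activity:
// just enough state for another device to resume the interface.
interface UserActivity {
  activityType: string;               // hypothetical reverse-DNS identifier
  title: string;
  userInfo: Record<string, unknown>;  // minimal resumable state
}

// Originating device: abstract the interface state away from the device.
// (JSON stands in here for the real discovery and transport layer.)
function broadcastActivity(activity: UserActivity): string {
  return JSON.stringify(activity);
}

// Receiving device: rebuild the interface from the abstracted state.
function resumeActivity(payload: string): void {
  const activity = JSON.parse(payload) as UserActivity;
  console.log(`Resuming "${activity.title}"`, activity.userInfo);
}

// Example: hand off a half-read document from a phone to a laptop,
// preserving the reading position rather than the device context.
const reading: UserActivity = {
  activityType: "com.example.reading",
  title: "Flat Theory",
  userInfo: { documentId: "flat-theory", paragraph: 12 },
};

resumeActivity(broadcastActivity(reading));
```

    The point of the sketch is that the activity, not the device, becomes the unit of interaction, which is what allows reading or writing to migrate across phones, laptops, wearables and nearables.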

    These new interface paradigms, introduced by the flat turn, open very interesting possibilities for the application of interface criticism, through unpacking and exploring the major trends and practices of the Stacks, that is, the major technology companies. Further, I think the notion of layers is instrumental in mediating the experience of an increasingly algorithmic society (think of dashboards, personal information systems, the quantified self, etc.), and as such provides an interpretative frame for a world of computational patterns but also a constituting grammar for building these systems in the first place. The notion of the postdigital may also be a useful way into thinking about the link between art, computation and design given here (see Berry and Dieter, forthcoming), as may the importance of notions of materiality for the conceptualizations deployed by designers working within both the flat design and material design paradigms – whether of paper, glass, or some other "material" substance.[5]
    _____

    David M. Berry is Reader in the School of Media, Film and Music at the University of Sussex. He writes widely on computation and the digital and blogs at Stunlaw. He is the author of Critical Theory and the Digital, The Philosophy of Software: Code and Mediation in the Digital Age, and Copy, Rip, Burn: The Politics of Copyleft and Open Source, editor of Understanding Digital Humanities and co-editor of Postdigital Aesthetics: Art, Computation And Design. He is also a Director of the Sussex Humanities Lab.

    Back to the essay
    _____

    Notes

    [1] Many thanks to Michael Dieter and Søren Pold for the discussion which inspired this post.

    [2] The choice of paper and glass as the founding metaphors for the flat design philosophies of Google and Apple raises interesting questions about the way in which these companies articulate the remediation of other media forms, such as books, magazines, newspapers, music, television and film. Indeed, the very idea of "publication", and the material carrier for that notion, is informed by this materiality, even if the affordance it gives is only notional. It would be interesting to see how the book is remediated through each of the design philosophies that inform both companies, for example.

    [3] One is struck by the posters produced in the Swiss style which date to the 1950s and 60s but which today remind one of the mobile device screens of the 21st Century.

    [4] There are also some interesting links to be explored between flat design and Superflat, the postmodern art movement founded by the artist Takashi Murakami, which is influenced by manga and anime, both in terms of the aesthetic and in relation to the cultural moment in which "flatness" is linked to "shallow emptiness."

    [5] There is some interesting work to be done in thinking about the non-visual aspects of flat theory, such as the increasing use of APIs (the RESTful API, for example), but also sound interfaces that use "flat" sound to indicate spatiality in interface or interaction design. There are also interesting implications for the design thinking implicit in the Apple Watch, and the virtual and augmented reality platforms of Oculus Rift, Microsoft HoloLens, Meta and Magic Leap.

    Bibliography
  • All Hitherto Existing Social Media

    All Hitherto Existing Social Media

    a review of Christian Fuchs, Social Media: A Critical Introduction (Sage, 2013)
    by Zachary Loeb
    ~
    Legion are the books and articles describing the social media that has come before. Yet the tracts focusing on Friendster, LiveJournal, or MySpace now appear as throwbacks, nostalgically immortalizing the internet that was and is now gone. On the cusp of the next great amoeba-like expansion of the internet (wearable technology and the “internet of things”) it is a challenging task to analyze social media as a concept while recognizing that the platforms being focused upon—regardless of how permanent they seem—may go the way of Friendster by the end of the month. Granted, social media (and the companies whose monikers act as convenient shorthand for it) is an important topic today. Those living in highly digitized societies can hardly avoid the tendrils of social media (even if a person does not use a particular platform it may still be tracking them), but this does not mean that any of us fully understand these platforms, let alone have a critical conception of them. It is into this confused and confusing territory that Christian Fuchs steps with his Social Media: A Critical Introduction.

    It is a book ostensibly targeted at students, though when it comes to social media, as Fuchs makes clear, everybody has quite a bit to learn.

    By deploying an analysis couched in Marxist and Critical Theory, Fuchs aims not simply to describe social media as it appears today, but to consider its hidden functions and biases, and along the way to describe what social media could become. The goal of Fuchs’s book is to provide readers—the target audience is students, after all—with the critical tools and proper questions with which to approach social media. While Fuchs devotes much of the book to discussing specific platforms (Google, Facebook, Twitter, WikiLeaks, Wikipedia), these case studies are used to establish a larger theoretical framework which can be applied to social media beyond these examples. Affirming the continued usefulness of Marxist and Frankfurt School critiques, Fuchs defines the aim of his text as being “to engage with the different forms of sociality on the internet in the context of society” (6) and emphasizes that the “critical” questions to be asked are those that “are concerned with questions of power” (7).

    Thus a critical analysis of social media demands a careful accounting of the power structures involved not just in specific platforms, but in the larger society as a whole. So though Fuchs regularly returns to the examples of the Arab Spring and the Occupy Movement, he emphasizes that the narratives that dub these "Twitter revolutions" often come from a rather non-critical and generally pro-capitalist perspective that fails to adequately embed uses of digital technology in their larger contexts.

    Social media is portrayed as an example, like other media, of "techno-social systems" (37) wherein the online platforms may receive the most attention but where the oft-ignored layer of material technologies is equally important. Social media, in Fuchs's estimation, developed and expanded with the growth of "Web 2.0" and functions as part of the rebranding effort that revitalized (made safe for investments) the internet after the initial dot-com bubble. As Fuchs puts it, "the talk about novelty was aimed at attracting novel capital investments" (33). What makes social media a topic of such interest—and invested with so much hope and dread—is the degree to which social media users are considered as active creators instead of simply consumers of this content (Fuchs follows much recent scholarship and industry marketing in using the term "prosumers" to describe this phenomenon; the term originates from the 1970s business-friendly futurology of Alvin Toffler's The Third Wave). Social media, in Fuchs's description, represents a shift in the way that value is generated through labor, and as a result an alteration in the way that large capitalist firms appropriate surplus value from workers. The social media user is not laboring in a factory, but with every tap of the button they are performing work from which value (and profit) is skimmed.

    Without disavowing the hope that social media (and by extension the internet) has liberating potential, Fuchs emphasizes that such hopes often function as a way of hiding profit motives and capitalist ideologies. It is not that social media cannot potentially lead to “participatory democracy” but that “participatory culture” does not necessarily have much to do with democracy. Indeed, as Fuchs humorously notes: “participatory culture is a rather harmless concept mainly created by white boys with toys who love their toys” (58). This “love their toys” sentiment is part of the ideology that undergirds much of the optimism around social media—which allows for complex political occurrences (such as the Arab Spring) to be reduced to events that can be credited to software platforms.

    What Fuchs demonstrates at multiple junctures is the importance of recognizing that the usage of a given communication tool by a social movement does not mean that this tool brought about the movement: intersecting social, political and economic factors are the causes of social movements. In seeking to provide a “critical introduction” to social media, Fuchs rejects arguments that he sees as not suitably critical (including those of Henry Jenkins and Manuel Castells), arguments that at best have been insufficient and at worst have been advertisements masquerading as scholarship.

    Though the time people spend on social media is often portrayed as “fun” or “creative,” Fuchs recasts these tasks as work in order to demonstrate how that time is exploited by the owners of social media platforms. By clicking on links, writing comments, performing web searches, sending tweets, uploading videos, and posting on Facebook, social media users are performing unpaid labor that generates a product (in the form of information about users) that can then be sold to advertisers and data aggregators; this sale generates profits for the platform owner which do not accrue back to the original user. Though social media users are granted “free” access to a service, it is their labor on that platform that makes the platform have any value—Facebook and Twitter would not have a commodity to sell to advertisers if they did not have millions of users working for them for free. As Fuchs describes it, “the outsourcing of work to consumers is a general tendency of contemporary capitalism” (111).

    screen shot of a Karl Marx Community Page on Facebook

    While miners of raw materials and workers in assembly plants are still brutally exploited—and this unseen exploitation forms a critical part of the economic base of computer technology—the exploitation of social media users is given a gloss of “fun” and “creativity.” Fuchs does not suggest that social media use is fully akin to working in a factory, but that users carry the factory with them at all times (a smart phone, for example) and are creating surplus value as long as they are interacting with social media. Instead of being a post-work utopia, Fuchs emphasizes that “the existence of the internet in its current dominant capitalist form is based on various forms of labour” (121) and the enrichment of internet firms is reliant upon the exploitation of those various forms of labor—central amongst these being the social media user.

    Fuchs considers five specific platforms in detail so as to illustrate not simply the current state of affairs but also to point towards possible alternatives. Fuchs analyzes Google, Facebook, Twitter, WikiLeaks and Wikipedia as case studies of trends to encourage and trends of which to take wary notice. In his analysis of the three corporate platforms (Google, Facebook and Twitter) Fuchs emphasizes the ways in which these social media companies (and the moguls who run them) have become wealthy and powerful by extracting value from the labor of users and by subjecting users to constant surveillance. The corporate platforms give Fuchs the opportunity to consider various social media issues in sharper relief: labor and monopolization in the case of Google, surveillance and privacy issues with Facebook, and the potential for an online public sphere with Twitter. Despite his criticisms, Fuchs does not dismiss the value and utility of what these platforms offer, as is captured in his claim that "Google is at the same time the best and the worst thing that has ever happened on the internet" (147). The corporate platforms' successes are owed at least partly to their delivering desirable functions to users. The corrective for which Fuchs argues is increased democratic control of these platforms—for the labor to be compensated and for privacy to pertain to individual humans instead of to businesses' proprietary methods of control. Indeed, one cannot get far with a "participatory culture" unless there is a similarly robust "participatory democracy," and part of Fuchs's goal is to show that these are not at all the same.

    WikiLeaks and Wikipedia both serve as real examples that demonstrate the potential of an "alternative" internet for Fuchs. Though these Wiki platforms are not ideal, they contain within themselves the seeds of their own adaptive development ("WikiLeaks is its own alternative", 232), and serve for Fuchs as proof that the internet can move in a direction akin to a "commons." As Fuchs puts it, "the primary political task for concerned citizens should therefore be to resist the commodification of everything and to strive for democratizing the economy and the internet" (248), a goal he sees as at least partly realized in Wikipedia.

    While the outlines of the internet’s future may seem to have been written already, Fuchs’s book is an argument in favor of the view that the code can still be altered. A different future relies upon confronting the reality of the online world as it currently is and recognizing that the battles waged for control of the internet are proxy battles in the conflict between capitalism and an alternative approach. In the conclusion of the book Fuchs eloquently condenses his view and the argument that follows from it in two simple sentences: “A just society is a classless society. A just internet is a classless internet” (257). It is a sentiment likely to spark an invigorating discussion, be it in a classroom, at a kitchen table, or in a café.

    * * *

    While Social Media: A Critical Introduction is clearly intended as a textbook (each chapter ends with a "recommended readings and exercises" section), it is written in an impassioned and engaging style that will appeal to anyone who would like to see a critical gaze turned towards social media. Fuchs structures his book so that his arguments will remain relevant even if some of the platforms about which he writes vanish. Even the chapters in which Fuchs focuses on a specific platform are filled with larger arguments that transcend that platform. Indeed, one of the primary strengths of Social Media is that Fuchs skillfully uses the familiar examples of social media platforms as a way of introducing the reader to complex theories and thinkers (from Marx to Habermas).

    Whereas Fuchs accuses some other scholars of subtly hiding their ideological agendas, no such argument can be made regarding Fuchs himself. Social Media is a Marxist critique of the major online platforms—not simply because Fuchs deploys Marx (and other Marxist theorists) to construct his arguments, but because of his assumption that the desirable alternative for the internet is part and parcel of a desirable alternative to capitalism. Such a sentiment can be found at several points throughout the book, but is made particularly evident by lines such as these from the book’s conclusion: “There seem to be only two options today: (a) continuance and intensification of the 200-year-old barbarity of capitalism or (b) socialism” (259)—it is a rather stark choice. It is precisely due to Fuchs’s willingness to stake out, and stick to, such political positions that this text is so effective.

    And yet, it is the very allegiance to such positions that also presents something of a problem. While much has been written of late—in the popular press as well as by scholars—regarding issues of privacy and surveillance, Fuchs's arguments about the need to consider users as exploited workers will likely strike many readers as new, and thus worthwhile in their novelty if nothing else. Granted, fully going along with Fuchs's critique requires readers to already be in agreement, or at least relatively sympathetic, with Fuchs's political and ethical positions. This is particularly true as Fuchs excels at making an argument about media and technology, but devotes significantly fewer pages to ethical argumentation.

    The lines (quoted earlier) "A just society is a classless society. A just internet is a classless internet" (257) serve as much as a provocation as a conclusion. For those who subscribe to a similar notion of "a just society," Fuchs's book will likely function as an important guide to thinking about the internet; however, to those whose vision of "a just society" is fundamentally different from his, Fuchs's book may be less than convincing. Social Media does not present a complete argument about how one defines a "just society." Indeed, the danger may be that Fuchs's statements in praise of a "classless society" may lead some to dismiss his arguments regarding the way in which the internet has replicated a "class society." Likewise, it is easy to imagine a retort being offered that the new platforms of "the sharing economy" represent the birth of this "classless society" (though it is equally easy to imagine Fuchs pointing out, as have other critics from the left, that the "sharing economy" is simply more advertising lingo being used to hide the same old capitalist relations). This represents something of a peculiar challenge when it comes to Social Media, as the political commitment of the book is simultaneously what makes it so effective and what threatens its political efficacy.

    Thus Social Media presents something of a conundrum: how effective is a critical introduction if its conclusion offers a heads-or-tails choice between the "barbarity of capitalism or…socialism"? Such a choice feels slightly as though Fuchs is begging the question. While it is curious that Fuchs does not draw upon critical theorists' writings about the culture industry, the main issues with Social Media seem to be reflections of this black-and-white choice. Thus it is something of a missed chance that Fuchs does not draw upon some of the more serious critics of technology (such as Ellul or Mumford), whose hard-edged skepticism would nevertheless likely not accept Fuchs's Marxist orientation. Such thinkers might provide a very different perspective on the choice between "capitalism" and "socialism," arguing that "technique" or "the megamachine" can function quite effectively in either. Though Fuchs draws heavily upon thinkers in the Marxist tradition, another set of insights and critiques might have been gained by bringing in other critics of technology (Hans Jonas, Peter Kropotkin, Albert Borgmann), especially as some of these thinkers warned that Marxism may overvalue the technological as much as capitalism does. This is not to argue in favor of any of these particular theorists, but to suggest that Fuchs's claims would have been strengthened by devoting more time to the views of those who were critical of technology, of capitalism, and of Marxism. Social Media does an excellent job of confronting the ideological forces on its right flank; it could have benefited from at least acknowledging the critics to its left.

    Two other areas that remain somewhat troubling are Fuchs's treatment of Wiki platforms and of the materiality of technology. The optimism with which Fuchs approaches WikiLeaks and Wikipedia is understandable given the dourness with which he approaches the corporate platforms, and yet his hopes for them seem somewhat exaggerated. Fuchs claims "Wikipedians are prototypical contemporary communists" (243), partially to suggest that many people are already engaged in commons-based online activities, yet it is an argument that he simultaneously undermines by admitting (importantly) that Wikipedia's editor base is hardly representative of all of the platform's users (it's back to the "white boys with toys who love their toys"); some have also alleged that putatively structureless models of organization like Wikipedia's actually encourage oligarchical forms of order. And that is to say nothing of the role that editing "bots" play on the platform, or of the degree to which Wikipedia is reliant upon corporate platforms (like Google) for promotion. Similarly, without ignoring its value, the example of WikiLeaks seems odd at a moment when the organization seems primarily engaged in a rearguard self-defense, while the leaks that have generated the most interest of late have been made to journalists at traditional news sources (Edward Snowden's leaks to Glenn Greenwald, who was writing for The Guardian when the leaks began).

    The further challenge—and this is one that Fuchs is not alone in contending with—is the trouble posed by the materiality of technology. An important aspect of Social Media is that Fuchs considers the often-unseen exploitation and repression upon which the internet relies: miners, laborers who build devices, those who recycle or live among toxic e-waste. Yet these workers seem to disappear from the arguments in the later part of the book, which in turn raises the following question: even if every social media platform were to be transformed into a non-profit commons-based platform that resists surveillance, manipulation, and the exploitation of its users, is such a platform genuinely just if to use it one must rely on devices whose minerals were mined in warzones, assembled in sweatshops, and which will eventually go to an early grave in a toxic dump? What good is a "classless (digital) society" without a "classless world"? Perhaps the question of a "capitalist internet" is itself a distraction from the fact that the "capitalist internet" is what one gets from capitalist technology. Granted, given Fuchs's larger argument it may be fair to infer that he would portray "capitalist technology" as part of the problem. Yet, if the statement "a just society is a classless society" is to be genuinely meaningful, then this must extend not just to those who use a social media platform but to all of those involved, from the miner to the manufacturer to the programmer to the user to the recycler. To pose the matter as a question: can there be participatory (digital) democracy that relies on serious exploitation of labor and resources?

    Social Media: A Critical Introduction provides exactly what its title promises—a critical introduction. Fuchs has constructed an engaging and interesting text that shows the continuing validity of older theories and skillfully demonstrates the way in which the seeming newness of the internet is itself simply a new face on an old system. While Fuchs has constructed an argument that resolutely holds its position, it is argued from a stance that one does not encounter often enough in debates around social media, and it will provide readers with a range of new questions with which to wrestle.

    It remains unclear in what ways social media will develop in the future, but Christian Fuchs’s book will be an important tool for interpreting these changes—even if what is in store is more “barbarity.”
    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, alternative forms of technology, and libraries as models of resistance. Using the moniker “The Luddbrarian” Loeb writes at the blog librarianshipwreck. He previously reviewed The People’s Platform by Astra Taylor for boundary2.org.
    Back to the essay