boundary 2

  • Richard Hill — Multistakeholder Internet Governance Still Doesn’t Live Up to Its PR (Review of Palladino and Santaniello, Legitimacy, Power, and Inequalities in the Multistakeholder Internet Governance)

    a review of Nicola Palladino and Mauro Santaniello, Legitimacy, Power, and Inequalities in the Multistakeholder Internet Governance: Analyzing IANA Transition (Palgrave Macmillan, 2020)

    by Richard Hill

    ~

    While multistakeholder processes have long existed (see the Annex of this submission to an ITU group), they have recently been promoted as a better alternative to traditional governance mechanisms, in particular at the international level; and Internet governance has been put forward as an example of how multistakeholder processes work well, and better than traditional governmental processes. Thus it is very appropriate that a detailed analysis be made of a recent, highly visible, allegedly multistakeholder process: the process by which the US government relinquished its formal control over the administration of Internet names and addresses. That process was labelled the “IANA transition.”

    The authors are researchers at, respectively, the School of Law and Government, Dublin City University; and the Internet & Communication Policy Center, Department of Political and Social Studies, University of Salerno, Italy. They have taken part in several national and international research projects on Internet governance, Internet policy, and digital constitutionalism processes. They have methodically examined various aspects of the IANA (Internet Assigned Numbers Authority) transition, and collected and analysed an impressive body of data regarding who actually participated in, and influenced, the transition process. Their research confirms what others have stated: the process was dominated by insiders with vested interests, the outcome did not resolve long-standing political issues, and the process cannot by any means be seen as an example of an ideal multistakeholder process, despite claims to the contrary by the architects of the IANA transition.

    As the authors put the matter: “For those who believe that the IANA is a business concerning exclusively or primarily ICANN [Internet Corporation for Assigned Names and Numbers], the IETF [Internet Engineering Task Force], the NRO [Numbering Resource Organization], and their respective communities, the IANA transition process could be considered inclusive and fair enough, and its outcome effectively transferring the stewardship over IANA functions to the global stakeholder’s community of reference. For those who believe that the IANA stakeholders extend far beyond the organizations mentioned above, the assessment can only have a negative result” (146). Because “in the end, rather than transferring the stewardship of IANA functions to a new multistakeholder body that controls the IANA operator (ICANN), the transition process allowed the ICANN multistakeholder community to perform the oversight role that once belonged to the NTIA [the US government]” (146). Indeed “in the end, the novel governance arrangements strengthened the position of the registries and the technical community” (148). And the US government could still exercise ultimate control, because “ICANN, the PTI [Post-Transition IANA], and most of the root server organizations remain on US territory, and therefore under US jurisdiction” (149).

    That is, the transition failed to address the key political issue: “the IANA functions are at the heart of the DNS [Domain Name System] and the Internet as we know it. Thus, their governance and performance affect a vast range of actors [other than the technical and business communities involved in the operation of the DNS] that should be considered legitimate stakeholders” (147). Instead, it was one more example of “the rhetorical use of the multistakeholder discourse. In particular, … through a neoliberal discourse, the key organizations already involved in the DNS regime were able to use the ambiguity of the concept of a ‘global multistakeholder community’ as a strategic power resource.” The process thus failed to ensure fully that discussions “take place through an open process with the participation of all stakeholders extending beyond the ICANN community.” While the call for participation in the process was formally open, “its addressees were already identified as specific organizations. It is worth noting that these organizations did not involve external actors in the set-up phase. Rather, they only allowed other interested parties to take part in the discussion according to their rules and with minor participatory rights [speaking, but non-voting, observers]” (148).

    Thus, the authors’ “analysis suggests that the transition did not result in, nor did it lead to, a higher form of multistakeholderism filling the gap between reality and the ideal-type of what multistakeholderism ought to be, according to normative standards of legitimacy. Nor was it able to fix the well-known limitations in inclusiveness, fairness of the decision-making process, and accountability of the entire DNS regime. … Instead, the transition seems to have solidified previous dominant positions and ratified the ownership of an essential public function by a private corporation, led by interwoven economic and technical interests” (149). In particular, “the transition process showed the irrelevance of civil society, little and badly represented in the stakeholder structure before and after the transition” (150). And “multistakeholderism [in this case] seems to have resulted in misleading rhetoric legitimizing power asymmetries embedded within the institutional design of DNS management, rather than in a new governance model capable of ensuring the meaningful participation of all the interested parties.”

    In summary, the IANA transition is one more example of the failure of multistakeholder processes to achieve their desired goal. As the authors correctly note: “Initiatives supposed to be multistakeholder have often been criticized for not complying with their premises, resulting in ‘de-politicization mechanisms that limit political expression and struggle’” (153). Indeed, “While multistakeholderism is used as a rhetoric to solidify and legitimize power positions within some policy-making arena, without any mechanisms giving up power to weaker stakeholders and without making concrete efforts to include different discourses, it will continue to produce ambiguous compromises without decisions, or make decisions affected by a poor degree of pluralism” (153). As others have stated, “‘multistakeholderism reinforces existing power dynamics that have been ‘baked in’ to the model from the beginning. It privileges north-western governments, particularly the US, as well as the US private sector.’ Similarly, … multistakeholderism [can be defined] as a discursive tool employed to create consensus around the hegemony of a power élite” (12). As the authors starkly put the matter, “multistakeholder discourse could result in misleading rhetoric that solidifies power asymmetries and masks domination, manipulation, and hegemonic practices” (26). In particular because “election and engagement procedures often tend to favor an already like-minded set of collective and individual actors even if they belong to different stakeholder categories” (30).

    The above conclusions are supported by detailed, well-referenced descriptions and analyses. Chapters One and Two explain the basic context of the IANA transition, Internet governance, and their relation to multistakeholder processes. Chapter One “points out how multistakeholderism is a fuzzy concept that has led to ambiguous practices and disappointing results. Further, it highlights the discursive and legitimizing nature of multistakeholderism, which can serve both as a performing narrative capable of democratizing the Internet governance domain, as well as a misleading rhetoric solidifying the dominant position of the most powerful actors in different Internet policy-making arenas” (1). It traces the history of multistakeholder governance in the Internet context, which started in 2003 (however, a broader historical context would have been useful; see the Annex of this submission to an ITU group). It discusses the conflict between developed and developing countries regarding the management and administration of domain names and addresses that dominated the discussions at the World Summit on the Information Society (WSIS) (Mueller’s Networks and States gives a more detailed account, explaining how development issues – which were supposed to be the focus of the WSIS – got pushed aside, thus resulting in the focus on Internet governance). As the authors correctly state, “the outcomes of the WSIS left the tensions surrounding Internet governance unresolved, giving rise to contestation in subsequent years and to the cyclical recurrence of political conflicts challenging the consensus around the multistakeholder model” (5). The IANA transition was seen as a way of resolving these tensions, but it relied “on the conflation of the multistakeholder approach with the privatization of Internet governance” (8).

    As the authors posit (citing the well-known scholar Hoffmann), “multistakeholderism is a narrative based on three main promises: the promise of achieving global representation on an issue putting together all the affected parties; the promise of overcoming the traditional democratic deficit at the transnational level, ‘establishing communities of interest as a digitally enabled equivalent to territorial constituencies’; and the promise of higher and enforced outcomes since incorporating global views on the matter through a consensual approach should ensure more complete solutions and their smooth implementation” (10).

    Chapter Three provides a thorough introduction to the management of Internet domain names and addresses and of the issues related to it and to the IANA function, in particular the role of the US government and of US academic and business organizations; the seminal work of the Internet Ad Hoc Committee (IAHC); the creation and evolution of ICANN; and various criticisms of ICANN, in particular regarding its accountability. (The chapter inexplicably fails to mention the key role of Mockapetris in the creation of the DNS.)

    Chapter Four describes the institutional setup of the IANA transition, and the constraints unilaterally imposed by the US government (see also 104) and the various parties that dominate discussions of the issues involved. As the authors note, the call for the creation of the key group went out “without having before voted on the proposed scheme [of the group], neither within the ICANN community nor outside through a further round of public comments” (67). The structure of that group heavily influenced the discussions and the outcome.

    Chapter Five evaluates the IANA transition in terms of the first of three types of legitimacy: input legitimacy, that is, whether all affected parties could meaningfully participate in the process (the other two types of legitimacy are discussed in subsequent chapters, see below). By analysing in detail the profiles and affiliations of the participants with decision-making power, the authors find that “a vast majority (56) of the people who have taken part in the drafting of the IANA transition proposal are bearers of technical and operative interests” (87); “Regarding nationality, Western countries appear to be over-represented within the drafting and decisional organism involved in the IANA transition process. In particular, US citizens constitute the most remarkable group, occupying 20 seats over 90 available” (89); and “IANA transition voting members experienced multiple and trans-sectoral affiliations, blurring the boundaries among stakeholder categories” (151). In summary, “the results of this stakeholder analysis seem to indicate that the adopted categorization and appointment procedures have reproduced within the IANA transition process well-known power relationships and imbalances already existing in the DNS management, overrepresenting Western, technical, and business interests while marginalizing developing countries and civil society participation” (90).

    Chapter Six evaluates the transition with respect to process legitimacy: whether all participants could meaningfully affect the outcome. As the authors correctly note, “Stakeholders not belonging to the organizations at the core of the operational communities were called to join the process according to rules and procedures that they had not contributed to creating, and with minor participatory rights” (107). The decision-making process was complex and undermined the inputs from weaker parties; thus funded, dedicated participants were more influential. Further, key participants were concerned about how the US government would view the outcome, and whether it would approve it (116). And discussions appear to have been restricted to a neo-liberal and technical framework (120, 121). As the authors state: “Ultimately, this narrow technical frame prevented the acknowledgment of the public good nature of the IANA functions, and, even more, of their essence as public policy issues” (121). Further, “most members and participants at the CWG-Stewardship had been socialized to the ICANN system, belonging to one of its structures or attending its meetings” and “the long-standing neoliberal plan of the US government and the NTIA to ‘privatize’ the DNS placed the IANA transition within a precise system of definitions, concepts, references, and assumptions that constrained the development of alternative policy discourses and limited the political action of sovereignist and constitutional coalitions” (122).

    Thus, it is not surprising that the authors find that “a single discourse shaped the deliberation. These results contradict the assumptions at the basis of the multistakeholder model of governance, which is supposed to reach a higher and more complete understanding of a particular matter through deliberation among different categories of actors, with different backgrounds, views, and perspectives. Instead, the set of IANA transition voting members in many regards resembled what has been defined as a ‘club governance’ model, which refers to an ‘elite community where the members are motivated by peer recognition and a common goal in line with values, they consider honourable’” (151).

    Chapter Seven evaluates the transition with respect to output legitimacy: whether the result achieved its goal of transferring oversight of the IANA function to a global multistakeholder community. As the authors state: “the institutional effectiveness of the IANA transition cannot be evaluated as satisfying from a normative point of view in terms of inclusiveness, balanced representation, and accountability. As a consequence, the ICANN board remains the expression of interwoven business and technical interests and is unlikely to be truly constrained by an independent entity” (135). Further, as shown in detail, “the political problems connected to the IANA functions have been left unresolved, … it did not take a long time before they re-emerged” (153).

    Indeed, “IANA was, first of all, a political matter. Indeed, the transition was settled as a consequence of a political fact – the widespread loss of trust in the USA as the caretaker of the Internet after the Snowden disclosures. Further, the IANA transition process aimed to achieve eminently political goals, such as establishing a novel governance setting and strengthening the DNS’s accountability and legitimacy” (152). However, as the authors explain in detail, the IANA transition was turned into a technical discussion, and “The problem here is that governance settings, such as those described as club governance, base their legitimacy from professional expertise and reputation. They are well-suited to performing some form of ‘technocratic’ governance, addressing an issue with a problem-solving approach based on an already given understanding of the nature of the problem and of the goals to be reached. Sharing a set of overlapping and compatible views is the cue that puts together these networks of experts. Nevertheless, they are ill-suited for tackling political problems, which, by definition, deal with pluralism” (152).

    Chapter Seven could have benefitted from a discussion of ICANN’s new Independent Review Process, and the length of time it has taken to put into place the process to name the panellists.

    Chapter Eight, already summarized above, presents overall conclusions.

    In summary, this is a timely and important book that provides objective data and analyses of a particular process that has been put forward as a model for multistakeholder governance, which itself has been put forth as a better alternative to conventional governance. While there is no doubt that ICANN, and the IANA function, are performing their intended functions, the book shows that the IANA transition was not a model multistakeholder process: on the contrary, it exhibited many of the well-known flaws of multistakeholder processes. Thus it should not be used as a model for future governance.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2o Review Digital Studies magazine.


  • Artificial Intelligence as Alien Intelligence

    By Dale Carrico
    ~

    Science fiction is a genre of literature in which artifacts and techniques humans devise as exemplary expressions of our intelligence result in problems that perplex our intelligence or even bring it into existential crisis. It is scarcely surprising that a genre so preoccupied with the status and scope of intelligence would provide endless variations on the conceits of either the construction of artificial intelligences or contact with alien intelligences.

    Of course, both the making of artificial intelligence and making contact with alien intelligence are organized efforts to which many humans are actually devoted, and not simply imaginative sites in which writers spin their allegories and exhibit their symptoms. It is interesting that after generations of failure the practical efforts to construct artificial intelligence or contact alien intelligence have often shunted their adherents to the margins of scientific consensus and invested these efforts with the coloration of scientific subcultures. While computer science and the search for extraterrestrial intelligence both remain legitimate fields of research, both AI and aliens also attract subcultural enthusiasms and resonate with cultic theology; each attracts its consumer fandoms and public Cons, and each has its True Believers and even its UFO cults and Robot cults at the extremities.

    Champions of artificial intelligence in particular have coped in many ways with the serial failure of their project to achieve its desired end (which is not to deny that the project has borne fruit) whatever the confidence with which generation after generation of these champions have insisted that desired end is near. Some have turned to more modest computational ambitions, making useful software or mischievous algorithms in which sad vestiges of the older dreams can still be seen to cling. Some are simply stubborn dead-enders for Good Old Fashioned AI’s expected eventual and even imminent vindication, all appearances to the contrary notwithstanding. And still others have doubled down, distracting attention from the failures and problems bedeviling AI discourse simply by raising its pitch and stakes, no longer promising that artificial intelligence is around the corner but warning that artificial super-intelligence is coming soon to end human history.

    Another strategy for coping with the failure of artificial intelligence on its conventional terms has assumed a higher profile among its champions lately, drawing support for the real plausibility of one science-fictional conceit — construction of artificial intelligence — by appealing to another science-fictional conceit, contact with alien intelligence. This rhetorical gambit has often been conjoined to the compensation of failed AI with its hyperbolic amplification into super-AI which I have already mentioned, and it is in that context that I have written about it before myself. But in a piece published a few days ago in The New York Times, “Outing A.I.: Beyond the Turing Test,” Benjamin Bratton, a professor of visual arts at U.C. San Diego and Director of a design think-tank, has elaborated a comparatively sophisticated case for treating artificial intelligence as alien intelligence with which we can productively grapple. Near the conclusion of his piece Bratton declares that “Musk, Gates and Hawking made headlines by speaking to the dangers that A.I. may pose. Their points are important, but I fear were largely misunderstood by many readers.” Of course these figures made their headlines by making the arguments about super-intelligence I have already rejected, and mentioning them seems to indicate Bratton’s sympathy with their gambit and even suggests that his argument aims to help us to understand them better on their own terms. Nevertheless, I take Bratton’s argument seriously not because of but in spite of this connection. Ultimately, Bratton makes a case for understanding AI as alien that does not depend on the deranging hyperbole and marketing of robocalypse or robo-rapture for its force.

    In the piece, Bratton claims “Our popular conception of artificial intelligence is distorted by an anthropocentric fallacy.” The point is, of course, well taken, and the litany he rehearses to illustrate it is enormously familiar by now as he proceeds to survey popular images from Kubrick’s HAL to Jonze’s Her and to document public deliberation about the significance of computation articulated through such imagery as the “rise of the machines” in the Terminator franchise or the need for Asimov’s famous fictional “Three Laws of Robotics.” It is easy — and may nonetheless be quite important — to agree with Bratton’s observation that our computational/media devices lack cruel intentions and are not susceptible to Asimovian consciences, and hence thinking about the threats and promises and meanings of these devices through such frames and figures is not particularly helpful to us even though we habitually recur to them by now. As I say, it would be easy and important to agree with such a claim, but Bratton’s proposal is in fact a somewhat different one:

    [A] mature A.I. is not necessarily a humanlike intelligence, or one that is at our disposal. If we look for A.I. in the wrong ways, it may emerge in forms that are needlessly difficult to recognize, amplifying its risks and retarding its benefits. This is not just a concern for the future. A.I. is already out of the lab and deep into the fabric of things. “Soft A.I.,” such as Apple’s Siri and Amazon recommendation engines, along with infrastructural A.I., such as high-speed algorithmic trading, smart vehicles and industrial robotics, are increasingly a part of everyday life.

    Here the serial failure of the program of artificial intelligence is redeemed simply by declaring victory. Bratton demonstrates that crying uncle does not preclude one from still crying wolf. It’s not that Siri is some sickly premonition of the AI-daydream still endlessly deferred, but that it represents the real rise of what robot cultist Hans Moravec once promised would be our “mind children” but here and now as elfin aliens with an intelligence unto themselves. It’s not that calling a dumb car a “smart” car is simply a hilarious bit of obvious marketing hyperbole, but represents the recognition of a new order of intelligent machines among us. Rather than criticize the way we may be “amplifying its risks and retarding its benefits” by reading computation through the inapt lens of intelligence at all, he proposes that we should resist holding machine intelligence to the standards that have hitherto defined it for fear of making its recognition “too difficult.”

    The kernel of legitimacy in Bratton’s inquiry is its recognition that “intelligence is notoriously difficult to define and human intelligence simply can’t exhaust the possibilities.” To deny these modest reminders is to indulge in what he calls “the pretentious folklore” of anthropocentrism. I agree that anthropocentrism in our attributions of intelligence has facilitated great violence and exploitation in the world, denying the dignity and standing of Cetaceans and Great Apes, but has also facilitated racist, sexist, xenophobic travesties by denigrating humans as beastly and unintelligent objects at the disposal of “intelligent” masters. “Some philosophers write about the possible ethical ‘rights’ of A.I. as sentient entities, but,” Bratton is quick to insist, “that’s not my point here.” Given his insistence that the “advent of robust inhuman A.I.” will force a “reality-based” “disenchantment” to “abolish the false centrality and absolute specialness of human thought and species-being” which he blames in his concluding paragraph with providing “theological and legislative comfort to chattel slavery” it is not entirely clear to me that emancipating artificial aliens is not finally among the stakes that move his argument whatever his protestations to the contrary. But one can forgive him for not dwelling on such concerns: the denial of an intelligence and sensitivity provoking responsiveness and demanding responsibilities in us all to women, people of color, foreigners, children, the different, the suffering, nonhuman animals compels defensive and evasive circumlocutions that are simply not needed to deny intelligence and standing to an abacus or a desk lamp. It is one thing to warn of the anthropocentric fallacy but another to indulge in the pathetic fallacy.

    Bratton insists to the contrary that his primary concern is that anthropocentrism skews our assessment of real risks and benefits. “Unfortunately, the popular conception of A.I., at least as depicted in countless movies, games and books, still seems to assume that humanlike characteristics (anger, jealousy, confusion, avarice, pride, desire, not to mention cold alienation) are the most important ones to be on the lookout for.” And of course he is right. The champions of AI have been more than complicit in this popular conception, eager to attract attention and funds for their project among technoscientific illiterates drawn to such dramatic narratives. But we are distracted from the real risks of computation so long as we expect risks to arise from a machinic malevolence that has never been on offer nor even in the offing. Writes Bratton: “Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant. Worse than being seen as an enemy is not being seen at all.”

    But surely the inevitable question posed by Bratton’s disenchanting exposé at this point should be: Why, once we have set aside the pretentious folklore of machines with diabolical malevolence, do we not set aside as no less pretentiously folkloric the attribution of diabolical indifference to machines? Why, once we have set aside the delusive confusion of machine behavior with (actual or eventual) human intelligence, do we not set aside as no less delusive the confusion of machine behavior with intelligence altogether? There is no question that were a gigantic bulldozer with an incapacitated driver to swerve from a construction site onto a crowded city thoroughfare, this would represent a considerable threat, but however tempting it might be in the fraught moment or reflective aftermath poetically to invest that bulldozer with either agency or intellect, it is clear that nothing would be gained in the practical comprehension of the threat it poses by so doing. It is no more helpful now in an epoch of Greenhouse storms than it was for pre-scientific storytellers to invest thunder and whirlwinds with intelligence. Although Bratton makes great play over the need to overcome folkloric anthropocentrism in our figuration of and deliberation over computation, mystifying agencies and mythical personages linger on in his accounting however he insists on the alienness of “their” intelligence.

    Bratton warns us about the “infrastructural A.I.” of high-speed financial trading algorithms, Google and Amazon search algorithms, “smart” vehicles (and no doubt weaponized drones and autonomous weapons systems would count among these), and corporate-military profiling programs that oppress us with surveillance and harass us with targeted ads. I share all of these concerns, of course, but personally insist that our critical engagement with infrastructural coding is profoundly undermined when it is invested with insinuations of autonomous intelligence. In “Art in the Age of Mechanical Reproducibility,” Walter Benjamin pointed out that when philosophers talk about the historical force of art they do so with the prejudices of philosophers: they tend to write about those narrative and visual forms of art that might seem argumentative in allegorical and iconic forms that appear analogous to the concentrated modes of thought demanded by philosophy itself. Benjamin proposed that perhaps the more diffuse and distracted ways we are shaped in our assumptions and aspirations by the durable affordances and constraints of the made world of architecture and agriculture might turn out to drive history as much or even more than the pet artforms of philosophers do. Lawrence Lessig made much the same point when he declared at the turn of the millennium that “Code Is Law.”

    It is well known that special interests with rich patrons shape the legislative process and sometimes even explicitly craft legislation word for word in ways that benefit them to the cost and risk of majorities. It is hard to see how our assessment of this ongoing crime and danger would be helped and not hindered by pretending legislation is an autonomous force exhibiting an alien intelligence, rather than a constellation of practices, norms, laws, institutions, ritual and material artifice, the legacy of the historical play of intelligent actors and the site for the ongoing contention of intelligent actors here and now. To figure legislation as a beast or alien with a will of its own would amount to a fetishistic displacement of intelligence away from the actual actors actually responsible for the forms that legislation actually takes. It is easy to see why such a displacement is attractive: it profitably abets the abuses of majorities by minorities while it absolves majorities from conscious complicity in the terms of their own exploitation by laws made, after all, in our names. But while these consoling fantasies have an obvious allure this hardly justifies our endorsement of them.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that the collapse of global finance in 2008 represented the working of inscrutable artificial intelligences facilitating rapid transactions and supporting novel financial instruments of what was called by Long Boom digerati the “new economy.” I wrote:

    It is not computers and programs and autonomous techno-agents who are the protagonists of the still unfolding crime of predatory plutocratic wealth-concentration and anti-democratizing austerity. The villains of this bloodsoaked epic are the bankers and auditors and captured-regulators and neoliberal ministers who employed these programs and instruments for parochial gain and who then exonerated and rationalized and still enable their crimes. Our financial markets are not so complex we no longer understand them. In fact everybody knows exactly what is going on. Everybody understands everything. Fraudsters [are] engaged in very conventional, very recognizable, very straightforward but unprecedentedly massive acts of fraud and theft under the cover of lies.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that our discomfiture in the setting of ubiquitous algorithmic mediation results from an autonomous force to which human intentions are secondary considerations. I wrote:

    [W]hat imaginary scene is being conjured up in this exculpatory rhetoric in which inadvertent cruelty is ‘coming from code’ as opposed to coming from actual persons? Aren’t coders actual persons, for example? … [O]f course I know what [is] mean[t by the insistence…] that none of this was ‘a deliberate assault.’ But it occurs to me that it requires the least imaginable measure of thought on the part of those actually responsible for this code to recognize that the cruelty of [one user’s] confrontation with their algorithm was the inevitable at least occasional result for no small number of the human beings who use Facebook and who live lives that attest to suffering, defeat, humiliation, and loss as well as to parties and promotions and vacations… What if the conspicuousness of [this] experience of algorithmic cruelty indicates less an exceptional circumstance than the clarifying exposure of a more general failure, a more ubiquitous cruelty? … We all joke about the ridiculous substitutions performed by autocorrect functions, or the laughable recommendations that follow from the odd purchase of a book from Amazon or an outing from Groupon. We should joke, but don’t, when people treat a word cloud as an analysis of a speech or an essay. We don’t joke so much when a credit score substitutes for the judgment whether a citizen deserves the chance to become a homeowner or start a small business, or when a Big Data profile substitutes for the judgment whether a citizen should become a heat signature for a drone committing extrajudicial murder in all of our names. [An] experience of algorithmic cruelty [may be] extraordinary, but that does not mean it cannot also be a window onto an experience of algorithmic cruelty that is ordinary. The question whether we might still ‘opt out’ from the ordinary cruelty of algorithmic mediation is not a design question at all, but an urgent political one.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that so-called Killer Robots are a threat that must be engaged by resisting or banning “them” in their alterity rather than by assigning moral and criminal responsibility on those who code, manufacture, fund, and deploy them. I wrote:

    Well-meaning opponents of war atrocities and engines of war would do well to think how tech companies stand to benefit from military contracts for ‘smarter’ software and bleeding-edge gizmos when terrorized and technoscientifically illiterate majorities and public officials take SillyCon Valley’s warnings seriously about our ‘complacency’ in the face of truly autonomous weapons and artificial super-intelligence that do not exist. It is crucial that necessary regulation and even banning of dangerous ‘autonomous weapons’ proceeds in a way that does not abet the mis-attribution of agency, and hence accountability, to devices. Every ‘autonomous’ weapons system expresses and mediates decisions by responsible humans usually all too eager to disavow the blood on their hands. Every legitimate fear of ‘killer robots’ is best addressed by making their coders, designers, manufacturers, officials, and operators accountable for criminal and unethical tools and uses of tools… There simply is no such thing as a smart bomb. Every bomb is stupid. There is no such thing as an autonomous weapon. Every weapon is deployed. The only killer robots that actually exist are human beings waging and profiting from war.

    “Arguably,” argues Bratton, “the Anthropocene itself is due less to technology run amok than to the humanist legacy that understands the world as having been given for our needs and created in our image. We hear this in the words of thought leaders who evangelize the superiority of a world where machines are subservient to the needs and wishes of humanity… This is the sentiment — this philosophy of technology exactly — that is the basic algorithm of the Anthropocenic predicament, and consenting to it would also foreclose adequate encounters with A.I.” The Anthropocene in this formulation names the emergence of environmental or planetary consciousness, an emergence sometimes coupled to the global circulation of the image of the fragility and interdependence of the whole earth as seen by humans from outer space. It is the recognition that the world in which we evolved to flourish might be impacted by our collective actions in ways that threaten us all. Notice, by the way, that multiculture and historical struggle are figured as just another “algorithm” here.

    I do not agree that planetary catastrophe inevitably followed from the conception of the earth as a gift besetting us to sustain us, indeed this premise understood in terms of stewardship or commonwealth would go far in correcting and preventing such careless destruction in my opinion. It is the false and facile (indeed infantile) conception of a finite world somehow equal to infinite human desires that has landed us and keeps us delusive ignoramuses lodged in this genocidal and suicidal predicament. Certainly I agree with Bratton that it would be wrong to attribute the waste and pollution and depletion of our common resources by extractive-industrial-consumer societies indifferent to ecosystemic limits to “technology run amok.” The problem of so saying is not that to do so disrespects “technology” — as presumably in his view no longer treating machines as properly “subservient to the needs and wishes of humanity” would more wholesomely respect “technology,” whatever that is supposed to mean — since of course technology does not exist in this general or abstract way to be respected or disrespected.

    The reality at hand is that humans are running amok in ways that are facilitated and mediated by certain technologies. What is demanded in this moment by our predicament is the clear-eyed assessment of the long-term costs, risks, and benefits of technoscientific interventions into finite ecosystems to the actual diversity of their stakeholders and the distribution of these costs, risks, and benefits in an equitable way. Quite a lot of unsustainable extractive and industrial production as well as mass consumption and waste would be rendered unprofitable and unappealing were its costs and risks widely recognized and equitably distributed. Such an understanding suggests that what is wanted is to insist on the culpability and situation of actually intelligent human actors, mediated and facilitated as they are in enormously complicated and demanding ways by technique and artifice. The last thing we need to do is invest technology-in-general or environmental-forces with alien intelligence or agency apart from ourselves.

    I am beginning to wonder whether the unavoidable and in many ways humbling recognition (unavoidable not least because of environmental catastrophe and global neoliberal precarization) that human agency emerges out of enormously complex and dynamic ensembles of interdependent/prostheticized actors gives rise to compensatory investments of some artifacts — especially digital networks, weapons of mass destruction, pandemic diseases, environmental forces — with the sovereign aspect of agency we no longer believe in for ourselves? It is strangely consoling to pretend our technologies in some fancied monolithic construal represent the rise of “alien intelligences,” even threatening ones, other than and apart from ourselves, not least because our own intelligence is an alienated one and prostheticized through and through. Consider the indispensability of pedagogical techniques of rote memorization, the metaphorization and narrativization of rhetoric in songs and stories and craft, the technique of the memory palace, the technologies of writing and reading, the articulation of metabolism and duration by timepieces, the shaping of both the body and its bearing by habit and by athletic training, the lifelong interplay of infrastructure and consciousness: all human intellect is already technique. All culture is prosthetic and all prostheses are culture.

    Bratton wants to narrate as a kind of progressive enlightenment the mystification he recommends that would invest computation with alien intelligence and agency while at once divesting intelligent human actors, coders, funders, users of computation of responsibility for the violations and abuses of other humans enabled and mediated by that computation. This investment with intelligence and divestment of responsibility he likens to the Copernican Revolution in which humans sustained the momentary humiliation of realizing that they were not the center of the universe but received in exchange the eventual compensation of incredible powers of prediction and control. One might wonder whether the exchange of the faith that humanity was the apple of God’s eye for a new technoscientific faith in which we aspired toward godlike powers ourselves was really so much a humiliation as the exchange of one megalomania for another. But what I want to recall by way of conclusion instead is that the trope of a Copernican humiliation of the intelligent human subject is already quite a familiar one:

    In his Introductory Lectures on Psychoanalysis Sigmund Freud notoriously proposed that

    In the course of centuries the naive self-love of men has had to submit to two major blows at the hands of science. The first was when they learnt that our earth was not the center of the universe but only a tiny fragment of a cosmic system of scarcely imaginable vastness. This is associated in our minds with the name of Copernicus… The second blow fell when biological research destroyed man’s supposedly privileged place in creation and proved his descent from the animal kingdom and his ineradicable animal nature. This revaluation has been accomplished in our own days by Darwin… though not without the most violent contemporary opposition. But human megalomania will have suffered its third and most wounding blow from the psychological research of the present time which seeks to prove to the ego that it is not even master in its own house, but must content itself with scanty information of what is going on unconsciously in the mind.

    However we may feel about psychoanalysis as a pseudo-scientific enterprise that did more therapeutic harm than good, Freud’s works considered instead as contributions to moral philosophy and cultural theory have few modern equals. The idea that human consciousness is split from the beginning as the very condition of its constitution, the creative if self-destructive result of an impulse of rational self-preservation beset by the overabundant irrationality of humanity and history, imposed a modesty incomparably more demanding than Bratton’s wan proposal in the same name. Indeed, to the extent that the irrational drives of the dynamic unconscious are often figured as a brute machinic automatism, one is tempted to suggest that Bratton’s modest proposal of alien artifactual intelligence is a fetishistic disavowal of the greater modesty demanded by the alienating recognition of the stratification of human intelligence by unconscious forces (and his moniker a symptomatic citation). What is striking about the language of psychoanalysis is the way it has been taken up to provide resources for imaginative empathy across the gulf of differences: whether in the extraordinary work of recent generations of feminist, queer, and postcolonial scholars re-orienting the project of the conspicuously sexist, heterosexist, cissexist, racist, imperialist, bourgeois thinker who was Freud to emancipatory ends, or in the stunning leaps in which Freud identified with neurotic others through psychoanalytic reading, going so far as to find in the paranoid system-building of the psychotic Dr. Schreber an exemplar of human science and civilization and a mirror in which he could see reflected both himself and psychoanalysis itself. Freud’s Copernican humiliation opened up new possibilities of responsiveness in difference out of which could be built urgently necessary responsibilities otherwise. 
    I worry that Bratton’s Copernican modesty opens up new occasions for techno-fetishistic fables of history and disavowals of responsibility for its actual human protagonists.
    _____

    Dale Carrico is a member of the visiting faculty at the San Francisco Art Institute as well as a lecturer in the Department of Rhetoric at the University of California at Berkeley from which he received his PhD in 2005. His work focuses on the politics of science and technology, especially peer-to-peer formations and global development discourse and is informed by a commitment to democratic socialism (or social democracy, if that freaks you out less), environmental justice critique, and queer theory. He is a persistent critic of futurological discourses, especially on his Amor Mundi blog, on which an earlier version of this post first appeared.

    Back to the essay

  • Something About the Digital

    Something About the Digital

    By Alexander R. Galloway
    ~

    (This catalog essay was written in 2011 for the exhibition “Chaos as Usual,” curated by Hanne Mugaas at the Bergen Kunsthall in Norway. Artists in the exhibition included Philip Kwame Apagya, Ann Craven, Liz Deschenes, Thomas Julier [in collaboration with Cédric Eisenring and Kaspar Mueller], Olia Lialina and Dragan Espenschied, Takeshi Murata, Seth Price, and Antek Walczak.)

    There is something about the digital. Most people aren’t quite sure what it is. Or what they feel about it. But something.

    In 2001 Lev Manovich said it was a language. For Steven Shaviro, the issue is being connected. Others talk about “cyber” this and “cyber” that. Is the Internet about the search (John Battelle)? Or is it rather, even more primordially, about the information (James Gleick)? Whatever it is, something is afoot.

    What is this something? Given the times in which we live, it is ironic that this term is so rarely defined and even more rarely defined correctly. But the definition is simple: the digital means the one divides into two.

    Digital doesn’t mean machine. It doesn’t mean virtual reality. It doesn’t even mean the computer – there are analog computers after all, like grandfather clocks or slide rules. Digital means the digits: the fingers and toes. And since most of us have a discrete number of fingers and toes, the digital has come to mean, by extension, any mode of representation rooted in individually separate and distinct units. So the natural numbers (1, 2, 3, …) are aptly labeled “digital” because they are separate and distinct, but the arc of a bird in flight is not because it is smooth and continuous. A reel of celluloid film is correctly called “digital” because it contains distinct breaks between each frame, but the photographic frames themselves are not because they record continuously variable chromatic intensities.
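    The distinction drawn here between the continuous and the discrete can be made concrete with a toy sketch in Python (an illustration of the definition, not anything from the essay itself): sampling the smooth arc of a sine curve at fixed intervals, and snapping each sample to one of a fixed set of levels, turns a continuous quantity into individually separate and distinct units.

```python
import math

def sample(f, start, stop, steps):
    """Reduce a continuous function to a finite list of discrete samples."""
    width = (stop - start) / steps
    return [f(start + i * width) for i in range(steps)]

def quantize(x, levels):
    """Snap a value in [-1, 1] to one of a fixed number of distinct levels."""
    return round((x + 1) / 2 * (levels - 1))

# The "analog" arc of a sine wave over [0, pi)...
arc = sample(math.sin, 0.0, math.pi, 8)

# ...becomes "digital": eight separate, countable integers.
digital = [quantize(x, 16) for x in arc]
print(digital)
```

    The arc itself is lost in the process; what remains is a list of distinct units, which is the sense of “digital” the essay is after.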

    We must stop believing the myth, then, about the digital future versus the analog past. For the digital died its first death in the continuous calculus of Newton and Leibniz, and the curvilinear revolution of the Baroque that came with it. And the digital has suffered a thousand blows since, from the swirling vortexes of nineteenth-century thermodynamics, to the chaos theory of recent decades. The switch from analog computing to digital computing in the middle twentieth century is but a single battle in the multi-millennial skirmish within western culture between the unary and the binary, proportion and distinction, curves and jumps, integration and division – in short, over when and how the one divides into two.

    What would it mean to say that a work of art divides into two? Or to put it another way, what would art look like if it began to meditate on the one dividing into two? I think this is the only way we can truly begin to think about “digital art.” And because of this we shall leave Photoshop, and iMovie, and the Internet and all the digital tools behind us, because interrogating them will not nearly begin to address these questions. Instead look to Ann Craven’s paintings. Or look to the delightful conversation sparked here between Philip Kwame Apagya and Liz Deschenes. Or look to the work of Thomas Julier, even to a piece of his not included in the show, “Architecture Reflecting in Architecture” (2010, made with Cedric Eisenring), which depicts a rectilinear cityscape reflected inside the mirror skins of skyscrapers, just like Saul Bass’s famous title sequence in North By Northwest (1959).

    Liz Deschenes, “Green Screen #4” (2001)

    All of these works deal with the question of twoness. But it is twoness only in a very particular sense. This is not the twoness of the doppelganger of the romantic period, or the twoness of the “split mind” of the schizophrenic, and neither is it the twoness of the self/other distinction that so forcefully animated culture and philosophy during the twentieth century, particularly in cultural anthropology and then later in poststructuralism. Rather we see here a twoness of the material, a digitization at the level of the aesthetic regime itself.

    Consider the call and response heard across the works featured here by Apagya and Deschenes. At the most superficial level, one might observe that these are works about superimposition, about compositing. Apagya’s photographs exploit one of the oldest and most useful tricks of picture making: superimpose one layer on top of another layer in order to produce a picture. Painters do this all the time of course, and very early on it became a mainstay of photographic technique (even if it often remained relegated to mere “trick” photography), evident in photomontage, spirit photography, and even the side-by-side compositing techniques of the carte de visite popularized by André-Adolphe-Eugène Disdéri in the 1850s. Recall too that the cinema has made productive use of superimposition, adopting the technique with great facility from the theater and its painted scrims and moving backdrops. (Perhaps the best illustration of this comes at the end of A Night at the Opera [1935], when Harpo Marx goes on a lunatic rampage through the flyloft during the opera’s performance, raising and lowering painted backdrops to great comic effect.) So the more “modern” cinematic techniques of, first, rear screen projection, and then later chromakey (known commonly as the “green screen” or “blue screen” effect), are but a reiteration of the much longer legacy of compositing in image making.
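    The chromakey technique invoked above is simple enough to sketch in a few lines of Python (a generic illustration of the compositing idea, not a description of any artist’s actual process): wherever a foreground pixel is close enough to the key green, the background layer shows through.

```python
# Minimal chromakey compositing: pixels are (r, g, b) tuples in 0-255.
KEY = (0, 255, 0)  # the "green screen" color to be made transparent

def is_key(pixel, tolerance=40):
    """A pixel counts as key green if every channel is near pure green."""
    return all(abs(c - k) <= tolerance for c, k in zip(pixel, KEY))

def composite(foreground, background):
    """Superimpose foreground over background, letting the key color through."""
    return [bg if is_key(fg) else fg
            for fg, bg in zip(foreground, background)]

# A one-row "image": a figure (red pixel) posed before a green screen.
fg = [(0, 255, 0), (200, 30, 30), (0, 250, 10)]
bg = [(10, 10, 80), (10, 10, 80), (10, 10, 80)]
print(composite(fg, bg))  # → [(10, 10, 80), (200, 30, 30), (10, 10, 80)]
```

    The same logic, layer over layer, underlies rear projection, blue screen, and the webcam effects mentioned below: two incompatible image sources fused into one picture.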

    Deschenes’ “Green Screen #4” points to this broad aesthetic history, as it empties out the content of the image, forcing us to acknowledge the suppressed color itself – in this case green, but any color will work. Hence Deschenes gives us nothing but a pure background, a pure something.

    Allowed to curve gracefully off the wall onto the floor, the green color field resembles the “sweep wall” used commonly in portraiture or fashion photography whenever an artist wishes to erase the lines and shadows of the studio environment. “Green Screen #4” is thus the antithesis of what has remained for many years the signal art work about video chromakey, Peter Campus’ “Three Transitions” (1973). Whereas Campus attempted to draw attention to the visual and spatial paradoxes made possible by chromakey, and even in so doing was forced to hide the effect inside the jittery gaps between images, Deschenes by contrast feels no such anxiety, presenting us with the medium itself, minus any “content” necessary to fuel it, minus the powerful mise en abyme of the Campus video, and so too minus Campus’ mirthless autobiographical staging. If Campus ultimately resolves the relationship between images through a version of montage, Deschenes offers something more like a “divorced digitality” in which no two images are brought into relation at all, only the minimal substrate remains, without input or output.

    The sweep wall is evident too in Apagya’s images, only of a different sort, as the artifice of the various backgrounds – in a nod not so much to fantasy as to kitsch – both fuses with and separates from the foreground subject. Yet what might ultimately unite the works by Apagya and Deschenes is not so much the compositing technique, but a more general reference, albeit oblique but nevertheless crucial, to the fact that such techniques are today entirely quotidian, entirely usual. These are everyday folk techniques through and through. One needs only a web cam and simple software to perform chromakey compositing on a computer, just as one might go to the county fair and have one’s portrait superimposed on the body of a cartoon character.

    What I’m trying to stress here is that there is nothing particularly “technological” about digitality. All that is required is a division from one to two – and by extension from two to three and beyond to the multiple. This is why I see layering as so important, for it spotlights an internal separation within the image. Apagya’s settings are digital, therefore, simply by virtue of the fact that he addresses our eye toward two incompatible aesthetic zones existing within the image: the artifice of a painted backdrop, and the pose of a person in a portrait.

    Certainly the digital computer is “digital” by virtue of being binary, which is to say by virtue of encoding and processing numbers at the lowest levels using base-two mathematics. But that is only the most prosaic and obvious exhibit of its digitality. For the computer is “digital” too in its atomization of the universe, into, for example, a million Facebook profiles, all equally separate and discrete. Or likewise “digital” too in the computer interface itself which splits things irretrievably into cursor and content, window and file, or even, as we see commonly in video games, into heads-up-display and playable world. The one divides into two.

    So when clusters of repetition appear across Ann Craven’s paintings, or the iterative layers of the “copy” of the “reconstruction” in the video here by Thomas Julier and Cédric Eisenring, or the accumulations of images that proliferate in Olia Lialina and Dragan Espenschied’s “Comparative History of Classic Animated GIFs and Glitter Graphics” [2007] (a small snapshot of what they have assembled in their spectacular book from 2009 titled Digital Folklore), or elsewhere in works like Oliver Laric’s clipart videos (“787 Cliparts” [2006] and “2000 Cliparts” [2010]), we should not simply recall the famous meditations on copies and repetitions, from Walter Benjamin in 1936 to Gilles Deleuze in 1968, but also a larger backdrop that evokes the very cleavages emanating from western metaphysics itself from Plato onward. For this same metaphysics of division is always already a digital metaphysics as it forever differentiates between subject and object, Being and being, essence and instance, or original and repetition. It shouldn’t come as a surprise that we see here such vivid aesthetic meditations on that same cleavage, whether or not a computer was involved.

    Another perspective on the same question would be to think about appropriation. There is a common way of talking about Internet art that goes roughly as follows: the beginning of net art in the middle to late 1990s was mostly “modernist” in that it tended to reflect back on the possibilities of the new medium, building an aesthetic from the material affordances of code, screen, browser, and jpeg, just as modernists in painting or literature built their own aesthetic style from a reflection on the specific affordances of line, color, tone, or timbre; whereas the second phase of net art, coinciding with “Web 2.0” technologies like blogging and video sharing sites, is altogether more “postmodern” in that it tends to co-opt existing material into recombinant appropriations and remixes. If something like the “WebStalker” web browser or the Jodi.org homepage are emblematic of the first period, then John Michael Boling’s “Guitar Solo Threeway,” Brody Condon’s “Without Sun,” or the Nasty Nets web surfing club, now sadly defunct, are emblematic of the second period.

    I’m not entirely unsatisfied by such a periodization, even if it tends to confuse as many things as it clarifies – not entirely unsatisfied because it indicates that appropriation too is a technique of digitality. As Martin Heidegger signals, by way of his notoriously enigmatic concept Ereignis, western thought and culture was always a process in which a proper relationship of belonging is established in a world, and so too appropriation establishes new relationships of belonging between objects and their contexts, between artists and materials, and between viewers and works of art. (Such is the definition of appropriation after all: to establish a belonging.) This is what I mean when I say that appropriation is a technique of digitality: it calls out a distinction in the object from “where it was prior” to “where it is now,” simply by removing that object from one context of belonging and separating it out into another. That these two contexts are merely different – that something has changed – is evidence enough of the digitality of appropriation. Even when the act of appropriation does not reduplicate the object or rely on multiple sources, as with the artistic ready-made, it still inaugurates a “twoness” in the appropriated object, an asterisk appended to the art work denoting that something is different.

    Takeshi Murata, “Cyborg” (2011)

    Perhaps this is why Takeshi Murata continues his exploration of the multiplicities at the core of digital aesthetics by returning to that age old format, the still life. Is not the still life itself a kind of appropriation, in that it brings together various objects into a relationship of belonging: fig and fowl in the Dutch masters, or here the various detritus of contemporary cyber culture, from cult films to iPhones?

    Because appropriation brings things together it must grapple with a fundamental question. Whatever is brought together must form a relation. These various things must sit side-by-side with each other. Hence one might speak of any grouping of objects in terms of their “parallel” nature, that is to say, in terms of the way in which they maintain their multiple identities in parallel.

    But let us dwell for a moment longer on these agglomerations of things, and in particular their “parallel” composition. By parallel I mean the way in which digital media tend to segregate and divide art into multiple, separate channels. These parallel channels may be quite manifest, as in the separate video feeds that make up the aforementioned “Guitar Solo Threeway,” or they may issue from the lowest levels of the medium, as when video compression codecs divide the moving image into small blocks of pixels that move and morph semi-autonomously within the frame. In fact I have found it useful to speak of this in terms of the “parallel image” in order to differentiate today’s media making from that of a century ago, which Friedrich Kittler and others have chosen to label “serial” after the serial sequences of the film strip, or the rat-ta-tat-tat of a typewriter.

    Thus films like Tatjana Marusic’s “The Memory of a Landscape” (2004) or Takeshi Murata’s “Monster Movie” (2005) are genuinely digital films, for they show parallelity in inscription. Each individual block in the video compression scheme has its own autonomy and is able to write to the screen in parallel with all the other blocks. These are quite literally, then, “multichannel” videos – we might even take a cue from online gaming circles and label them “massively multichannel” videos. They are multichannel not because they require multiple monitors, but because each individual block or “channel” within the image acts as an individual micro video feed. Each color block is its own channel. Thus, the video compression scheme illustrates, through metonymy, how pixel images work in general, and, as I suggest, it also illustrates the larger currents of digitality, for it shows that these images, in order to create “an” image must first proliferate the division of sub-images, which themselves ultimately coalesce into something resembling a whole. In other words, in order to create a “one” they must first bifurcate the single image source into two or more separate images.
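    The block structure described here can be sketched in plain Python (a toy frame of numbers rather than any real codec): the frame is cut into fixed-size blocks, each of which can then be read or rewritten independently of the others, like a separate channel.

```python
def to_blocks(frame, size):
    """Divide a frame (2D list of pixels) into independent size x size blocks."""
    h, w = len(frame), len(frame[0])
    return [[[row[x:x + size] for row in frame[y:y + size]]
             for x in range(0, w, size)]
            for y in range(0, h, size)]

# A 4x4 "frame" divided into four 2x2 blocks, each its own channel.
frame = [[1, 1, 2, 2],
         [1, 1, 2, 2],
         [3, 3, 4, 4],
         [3, 3, 4, 4]]
blocks = to_blocks(frame, 2)
print(blocks[0][1])  # the top-right block, updatable on its own → [[2, 2], [2, 2]]
```

    Real compression schemes do far more (motion estimation, transforms, quantization), but the division itself is the point: one image is first made many before it can appear as one.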

    The digital image is thus a cellular and discrete image, consisting of separate channels multiplexed in tandem or triplicate or, greater, into nine, twelve, twenty-four, one hundred, or indeed into a massively parallel image of a virtually infinite visuality.

    For me this generates a more appealing explanation for why art and culture have, over the last several decades, developed a growing anxiety over copies, repetitions, simulations, appropriations, reenactments – you name it. It is common to attribute such anxiety to a generalized disenchantment permeating modern life: our culture has lost its aura and can no longer discern an original from a copy due to endless proliferations of simulation. Such an assessment is only partially correct. I say only partially because I am skeptical of the romantic nostalgia that often fuels such pronouncements. For who can demonstrate with certainty that the past carried with it a greater sense of aesthetic integrity, a greater unity in art? Yet the assessment begins to adopt a modicum of sense if we consider it from a different point of view, from the perspective of a generalized digitality. For if we define the digital as “the one dividing into two,” then it would be fitting to witness works of art that proliferate these same dualities and multiplicities. In other words, even if there was a “pure” aesthetic origin it was a digital origin to begin with. And thus one needn’t fret over it having infected our so-called contemporary sensibilities.

    Instead it is important not to be blinded by the technology, but rather to determine that, within a generalized digitality, there must be some kind of differential at play. There must be something different, and without such a differential it is impossible to say that something is something (rather than something else, or indeed rather than nothing). The one must divide into something else. Nothing less and nothing more is required, only a generic difference. And this is our first insight into the “something” of the digital.

    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2004), Gaming: Essays on Algorithmic Culture (University of Minnesota, 2006), The Interface Effect (Polity, 2012), and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here in 2014. Galloway has recently been writing brief notes on media and digital culture and theory at his blog, on which this post first appeared.

    Back to the essay

  • Network Pessimism

    Network Pessimism

    By Alexander R. Galloway
    ~

    I’ve been thinking a lot about pessimism recently. Eugene Thacker has been deep in this material for some time already. In fact he has a new, lengthy manuscript on pessimism called Infinite Resignation, which is a bit of a departure from his other books in terms of tone and structure. I’ve read it and it’s excellent. Definitely “the worst” he’s ever written! Following the style of other treatises from the history of philosophical pessimism–Leopardi, Cioran, Schopenhauer, Kierkegaard, and others–the bulk of the book is written in short aphorisms. It’s very poetic language, and some sections are driven by his own memories and meditations, all in an attempt to plumb the deepest, darkest corners of the worst the universe has to offer.

    Meanwhile, the worst can’t stay hidden. Pessimism has made it to prime time, to NPR, and even right-wing media. Despite all this attention, Eugene seems to have little interest in showing his manuscript to publishers. A true pessimist! Not to worry, I’m sure the book will see the light of day eventually. Or should I say dead of night? When it does, the book is sure to sadden, discourage, and generally worsen the lives of Thacker fans everywhere.

    Interestingly pessimism also appears in a number of other authors and fields. I’m thinking, for instance, of critical race theory and the concept of Afro-pessimism. The work of Fred Moten and Frank B. Wilderson, III is particularly interesting in that regard. Likewise queer theory has often wrestled with pessimism, be it the “no future” debates around reproductive futurity, or what Anna Conlan has simply labeled “homo-pessimism,” that is, the way in which the “persistent association of homosexuality with death and oppression contributes to a negative stereotype of LGBTQ lives as unhappy and unhealthy.”[1]

    In his review of my new book, Andrew Culp made reference to how some of this material has influenced me. I’ll be posting more on Moten and these other themes in the future, but let me here describe, in very general terms, how the concept of pessimism might apply to contemporary digital media.

    *

    A previous post was devoted to the reticular fallacy, defined as the false assumption that the erosion of hierarchical organization leads to an erosion of organization as such. Here I’d like to address the related question of reticular pessimism or, more simply, network pessimism.

    Network pessimism relies on two basic assumptions: (1) “everything is a network”; (2) “the best response to networks is more networks.”

    Who says everything is a network? Everyone, it seems. In philosophy, Bruno Latour: ontology is a network. In literary studies, Franco Moretti: Hamlet is a network. In the military, Donald Rumsfeld: the battlefield is a network. (But so too our enemies are networks: the terror network.) Art, architecture, managerial literature, computer science, neuroscience, and many other fields–all have shifted prominently in recent years toward a network model. Most important, however, is the contemporary economy and the mode of production. Today’s most advanced companies are essentially network companies. Google monetizes the shape of networks (in part via clustering algorithms). Facebook has rewritten subjectivity and social interaction along the lines of canalized and discretized network services. The list goes on and on. Thus I characterize the first assumption — “everything is a network” — as a kind of network fundamentalism. It claims that whatever exists in the world appears naturally in the form of a system, an ecology, an assemblage, in short, as a network.

    Ladies and gentlemen, behold the good news: postmodernism is definitively over! We have a new grand récit. As metanarrative, the network will guide us into a new Dark Age.

    If the first assumption expresses a positive dogma or creed, the second is more negative or nihilistic. The second assumption — that the best response to networks is more networks — is also evident in all manner of social and political life today. Eugene and I described this phenomenon at greater length in The Exploit, but consider a few different examples from contemporary debates… In military theory: network-centric warfare is the best response to terror networks. In Deleuzian philosophy: the rhizome is the best response to schizophrenic multiplicity. In autonomist Marxism: the multitude is the best response to empire. In the environmental movement: ecologies and systems are the best response to the systemic colonization of nature. In computer science: distributed architectures are the best response to bottlenecks in connectivity. In economics: heterogeneous “economies of scope” are the best response to the distributed nature of the “long tail.”

    To be sure, there are many sites today where networks still confront power centers. The point is not to deny the continuing existence of massified, centralized sovereignty. But at the same time it’s important to contextualize such confrontations within a larger ideological structure, one that inoculates the network form and recasts it as the exclusive site of liberation, deviation, political maturation, complex thinking, and indeed the very living of life itself.

    Why label this a pessimism? For the same reasons that queer theory and critical race theory are grappling with pessimism: Is alterity a death sentence? Is this as good as it gets? Is this all there is? Can we imagine a parallel universe different from this one? (Although the pro-pessimism camp would likely state it in the reverse: We must destabilize and annihilate all normative descriptions of the “good.” This world isn’t good, and hooray for that!)

    So what’s the problem? Why should we be concerned about network pessimism? Let me state clearly so there’s no misunderstanding: pessimism isn’t the problem here. Likewise, networks are not the problem. (Let no one label me “anti-network” or “anti-pessimism” — in fact I’m not even sure what either of those positions would mean.) The issue, as I see it, is that network pessimism deploys and sustains a specific dogma, confining both networks and pessimism to a single, narrow ideological position. It’s this narrow-mindedness that should be questioned.

    Specifically I can see three basic problems with network pessimism: the problem of presentism, the problem of ideology, and the problem of the event.

    The problem of presentism refers to the way in which networks and network thinking are, by design, allergic to historicization. This exhibits itself in a number of different ways. Networks arrive on the scene at the proverbial “end of history” (and they do so precisely because they help end this history). Ecological and systems-oriented thinking, while admittedly always temporal by nature, gained popularity as a kind of solution to the problems of diachrony. Space and landscape take the place of time and history. As Fredric Jameson has noted, the “spatial turn” of postmodernity goes hand in hand with a denigration of the “temporal moment” of previous intellectual movements.

    Fritz Kahn, “Der Mensch als Industriepalast (Man as Industrial Palace)” (Stuttgart, 1926). Image source: NIH

    From Hegel’s history to Luhmann’s systems. From Einstein’s general relativity to Riemann’s complex surfaces. From phenomenology to assemblage theory. From the “time image” of cinema to the “database image” of the internet. From the old mantra always historicize to the new mantra always connect.

    During the age of clockwork, the universe was thought to be a huge mechanism, with the heavens rotating according to the music of the spheres. When the steam engine was the source of newfound power, the world suddenly became a dynamo of untold thermodynamic force. After full-fledged industrialization, the body became a factory. Technologies and infrastructures are seductive metaphors. So it’s no surprise (and no coincidence) that today, in the age of the network, a new template imprints itself on everything in sight. In other words, the assumption “everything is a network” gradually falls apart into a kind of tautology of presentism. “Everything right now is a network…because everything right now has been already defined as a network.”

    This leads to the problem of ideology. Again we’re faced with an existential challenge, because network technologies were largely invented as a non-ideological or extra-ideological structure. When writing Protocol I interviewed some of the computer scientists responsible for the basic internet protocols and most of them reported that they “have no ideology” when designing networks, that they are merely interested in “code that works” and “systems that are efficient and robust.” In sociology and philosophy of science, figures like Bruno Latour routinely describe their work as “post-critical,” merely focused on the direct mechanisms of network organization. Hence ideology as a problem to be forgotten or subsumed: networks are specifically conceived and designed as those things that are both non-ideological in their conception (we just want to “get things done”) and post-ideological in their architecture (in that they acknowledge and co-opt the very terms of previous ideological debates, things like heterogeneity, difference, agency, and subject formation).

    The problem of the event indicates a crisis for the very concept of events themselves. Here the work of Alain Badiou is invaluable. Network architectures are the perfect instantiation of what Badiou derisively labels “democratic materialism,” that is, a world in which there are “only bodies and languages.” In Badiou’s terms, if networks are the natural state of the situation and there is no way to deviate from nature, then there is no event, and hence no possibility for truth. Networks appear, then, as the consummate “being without event.”

    What could be worse? If networks are designed to accommodate massive levels of contingency — as with the famous Robustness Principle — then they are also exceptionally adept at warding off “uncontrollable” change wherever it might arise. If everything is a network, then there’s no escape, there’s no possibility for the event.

    Jameson writes as much in The Seeds of Time when he says that it is easier to imagine the end of the earth and the end of nature than it is to imagine the ends of capitalism. Network pessimism, in other words, is really a kind of network defeatism in that it makes networks the alpha and omega of our world. It’s easier to imagine the end of that world than it is to discard the network metaphor and imagine a kind of non-world in which networks are no longer dominant.

    In sum, we shouldn’t give in to network pessimism. We shouldn’t subscribe to the strong claim that everything is a network. (Nor should we subscribe to the softer claim, that networks are merely the most common, popular, or natural architecture for today’s world.) Further, we shouldn’t think that networks are the best response to networks. Instead we must ask the hard questions. What is the political fate of networks? Did heterogeneity and systematicity survive the Twentieth Century? If so, at what cost? What would a non-net look like? And does thinking have a future without the network as guide?

    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2006), Gaming: Essays in Algorithmic Culture (University of Minnesota, 2006), The Interface Effect (Polity, 2012), and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here in 2014. Galloway has recently been writing brief notes on media and digital culture and theory at his blog, on which this post first appeared.

    Back to the essay
    _____

    Notes

    [1] Anna Conlan, “Representing Possibility: Mourning, Memorial, and Queer Museology,” in Gender, Sexuality and Museums, ed. Amy K. Levin (London: Routledge, 2010), 253-263.

  • The Reticular Fallacy

    The Reticular Fallacy

    By Alexander R. Galloway
    ~

    We live in an age of heterogeneous anarchism. Contingency is king. Fluidity and flux win over solidity and stasis. Becoming has replaced being. Rhizomes are better than trees. To be political today, one must laud horizontality. Anti-essentialism and anti-foundationalism are the order of the day. Call it “vulgar ’68-ism.” The principles of social upheaval, so associated with the new social movements in and around 1968, have succeeded in becoming the very bedrock of society at the new millennium.

    But there’s a flaw in this narrative, or at least a part of the story that strategically remains untold. The “reticular fallacy” can be broken down into two key assumptions. The first is an assumption about the nature of sovereignty and power. The second is an assumption about history and historical change. Consider them both in turn.

    (1) First, under the reticular fallacy, sovereignty and power are defined in terms of verticality, centralization, essence, foundation, or rigid creeds of whatever kind (viz. dogma, be it sacred or secular). Thus the sovereign is the one who is centralized, who stands at the top of a vertical order of command, who rests on an essentialist ideology in order to retain command, who asserts, dogmatically, unchangeable facts about his own essence and the essence of nature. This is the model of kings and queens, but also egos and individuals. It is what Barthes means by author in his influential essay “Death of the Author,” or Foucault in his “What is an Author?” This is the model of the Prince, so often invoked in political theory, or the Father invoked in psycho-analytic theory. In Derrida, the model appears as logos, that is, the special way or order of word, speech, and reason. Likewise, arkhe: a term that means both beginning and command. The arkhe is the thing that begins, and in so doing issues an order or command to guide whatever issues from such a beginning. Or as Rancière so succinctly put it in his Hatred of Democracy, the arkhe is both “commandment and commencement.” These are some of the many aspects of sovereignty and power as defined in terms of verticality, centralization, essence, and foundation.

    (2) The second assumption of the reticular fallacy is that, given the elimination of such dogmatic verticality, there will follow an elimination of sovereignty as such. In other words, if the aforementioned sovereign power should crumble or fall, for whatever reason, the very nature of command and organization will also vanish. Under this second assumption, the structure of sovereignty and the structure of organization become coterminous, superimposed in such a way that the shape of organization assumes the identical shape of sovereignty. Sovereign power is vertical, hence organization is vertical; sovereign power is centralized, hence organization is centralized; sovereign power is essentialist, hence organization, and so on. Here we see the claims of, let’s call it, “naïve” anarchism (the non-arkhe, or non-foundation), which assumes that repressive force lies in the hands of the bosses, the rulers, or the hierarchy per se, and thus after the elimination of such hierarchy, life will revert to a more direct form of social interaction. (I say this not to smear anarchism in general, and will often wish to defend a form of anarcho-syndicalism.) At the same time, consider the case of bourgeois liberalism, which asserts the rule of law and constitutional right as a way to mitigate the excesses of both royal fiat and popular caprice.

    Reticular connective tissue. Image source: imgkid.com

    We name this the “reticular” fallacy because, during the late Twentieth Century and accelerating at the turn of the millennium with new media technologies, the chief agent driving the kind of historical change described in the above two assumptions was the network or rhizome, the structure of horizontal distribution described so well in Deleuze and Guattari. The change is evident in many different corners of society and culture. Consider mass media: the uni-directional broadcast media of the 1920s or ’30s gradually gave way to multi-directional distributed media of the 1990s. Or consider the mode of production, and the shift from a Fordist model rooted in massification, centralization, and standardization, to a post-Fordist model reliant more on horizontality, distribution, and heterogeneous customization. Consider even the changes in theories of the subject, shifting as they have from a more essentialist model of the integral ego, however fraught by the volatility of the unconscious, to an anti-essentialist model of the distributed subject, be it postmodernism’s “schizophrenic” subject or the kind of networked brain described by today’s most advanced medical researchers.

    Why is this a fallacy? What is wrong about the above scenario? The problem isn’t so much with the historical narrative. The problem lies in an unwillingness to derive an alternative form of sovereignty appropriate for the new rhizomatic societies. Opponents of the reticular fallacy claim, in other words, that horizontality, distributed networks, anti-essentialism, etc., have their own forms of organization and control, and indeed should be analyzed accordingly. In the past I’ve used the concept of “protocol” to describe such a scenario as it exists in digital media infrastructure. Others have used different concepts to describe it in different contexts. On the whole, though, opponents of the reticular fallacy have not effectively made their case, myself included. The notion that rhizomatic structures are corrosive of power and sovereignty is still the dominant narrative today, evident across both popular and academic discourses. From talk of the “Twitter revolution” during the Arab Spring, to the ideologies of “disruption” and “flexibility” common in corporate management speak, to the putative egalitarianism of blog-based journalism, to the growing popularity of the Deleuzian and Latourian schools in philosophy and theory: all of these reveal the contemporary assumption that networks are somehow different from sovereignty, organization, and control.

    To summarize, the reticular fallacy refers to the following argument: since power and organization are defined in terms of verticality, centralization, essence, and foundation, the elimination of such things will prompt a general mollification if not elimination of power and organization as such. Such an argument is false because it doesn’t take into account the fact that power and organization may inhabit any number of structural forms. Centralized verticality is only one form of organization. The distributed network is simply a different form of organization, one with its own special brand of management and control.

    Consider the kind of methods and concepts still popular in critical theory today: contingency, heterogeneity, anti-essentialism, anti-foundationalism, anarchism, chaos, plasticity, flux, fluidity, horizontality, flexibility. Such concepts are often praised and deployed in theories of the subject, analyses of society and culture, even descriptions of ontology and metaphysics. The reticular fallacy does not invalidate such concepts. But it does put them in question. We cannot assume that such concepts are merely descriptive or neutrally empirical. Given the way in which horizontality, flexibility, and contingency are sewn into the mode of production, such “descriptive” claims are at best mirrors of the economic infrastructure and at worst ideologically suspect. At the same time, we cannot simply assume that such concepts are, by nature, politically or ethically desirable in themselves. Rather, we ought to reverse the line of inquiry. The many qualities of rhizomatic systems should be understood not as the pure and innocent laws of a newer and more just society, but as the basic tendencies and conventional rules of protocological control.


    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2006), Gaming: Essays in Algorithmic Culture (University of Minnesota, 2006), The Interface Effect (Polity, 2012), and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here earlier in 2014. Galloway has recently been writing brief notes on media and digital culture and theory at his blog, on which this post first appeared.

    Back to the essay

  • The People’s Platform by Astra Taylor

    The People’s Platform by Astra Taylor


    Or is it?: Astra Taylor’s The People’s Platform

    Review by Zachary Loeb

    ~

    Imagine not using the Internet for twenty-four hours.

    Really: no Internet from dawn to dawn.

    Take a moment to think through the wide range of devices you would have to turn off and services you would have to avoid to succeed in such a challenge. While a single day without going online may not represent too outlandish an ordeal, such an endeavor would still require some social and economic gymnastics. From the way we communicate with friends to the way we order food to the way we turn in assignments for school or complete tasks in our jobs – our lives have become thoroughly entangled with the Internet. Whether its power and control are overt or subtle, the Internet has come to wield an impressive amount of influence over our lives.

    All of which should serve to raise a discomforting question – so, who is in control of the Internet? Is the Internet a fantastically democratic space that puts the power back in the hands of people? Is the Internet a sly mechanism for vesting more power in the hands of the already powerful, whilst distracting people with a steady stream of kitschy content and discounted consumerism? Or, is the Internet a space relying on levels of oft-unseen material infrastructures with a range of positive and negative potentialities? These are the questions that Astra Taylor attempts to untangle in her book The People’s Platform: Taking Back Power and Culture in the Digital Age (Metropolitan Books, 2014). It is the rare example of a book where the title itself forms a thesis statement of sorts: the Internet was and can be a platform for the people but this potential has been perverted, and thus there needs to be a “taking back” of power (and culture).

    At the outset Taylor locates her critique in the space between the fawning of the “techno-optimists” and the grousing of the “techno-skeptics.” Far from trying to assume a “neutral” stance, Taylor couches her discussion of the “techno” by stepping back to consider the social, political, and economic forces that shape the “techno” reality that inspires optimism and skepticism. Taylor, therefore, does not build her argument upon a discussion of the Internet as such but builds her argument around a discussion of the Internet as it is and as it could be. Unfortunately the “as it currently is” of this “new media” evinces that “Corporate power and the quest for profit are as fundamental to new media as old” (8).

    Thus Taylor sets up the conundrum of the Internet – it is at once a media platform with a great deal of democratic potential, and yet this potential has been continually appropriated for bureaucratic, technocratic, and indeed plutocratic purposes.

    Over the course of The People’s Platform Taylor moves from one aspect of the Internet (and its related material infrastructures) to another – touching upon a range of issues from the Internet’s history, to copyright and the way it has undermined the ability of “cultural creators” to earn a living, to the way the Internet persuades and controls, across the issues of journalism and e-waste, to the ways in which the Internet can replicate the misogyny and racism of the offline world.

    With her background as a documentary filmmaker (she directed the film The Examined Life [which is excellent]) Taylor is skilled in cutting deftly from one topic to the next, though this particular experience also gives her cause to dwell at length upon the matter of how culture is created and supported in the digital age. Indeed as a maker of independent films Taylor is particularly attuned to the challenges of making culturally valuable content in a time when free copies spread rapidly on-line. Here too Taylor demonstrates the link to larger economic forces – there are still highly successful “stars” and occasional stories of “from nowhere” success, but the result is largely that those attempting to eke out a nominal subsistence find it increasingly challenging to do so.

    As the Internet becomes the principal means of dissemination of material, “cultural creators” find themselves bound to a system wherein the ultimate remuneration rarely accrues back to them. Likewise the rash of profit-driven mergers and shifting revenue streams has resulted in a steady erosion of the journalistic field. It is not – as Taylor argues – that there is a lack of committed “cultural creators” and journalists working today, it is that they are finding it increasingly difficult to sustain their efforts. The Internet, as Taylor describes it, is certainly making many people enormously wealthy, but those made wealthy are more likely to be platform owners (think Google or Facebook) than those who fill those platforms with the informational content that makes them valuable.

    Though the Internet may have its roots in massive public investment and though the value of the Internet is a result of the labor of Internet users (example: Facebook makes money by selling advertisements based on the work you put in on your profile), the Internet as it is now is often less of an alternative to society than it is a replication. The biases of the offline world are replicated in the digital realm, as Taylor puts it:

    “While the Internet offers marginalized groups powerful and potentially world-changing opportunities to meet and act together, new technologies also magnify inequality, reinforcing elements of the old order. Networks do not eradicate power: they distribute it in different ways, shuffling hierarchies and producing new mechanisms of exclusion.” (108)

    Thus, the Internet – often under the guise of promoting anonymity – can be a site for an explosion of misogyny, racism, classism, and an elitism blossoming from a “more-technologically-skilled-than-thou” position. There are certainly many “marginalized groups” and individuals trying to use the Internet to battle their historical silencing, but for every social justice minded video there is a comment section seething with the grunts of trolls. Meanwhile behind this all stand the same wealthy corporate interests that enjoyed privileged positions before the rise of the Internet. These corporate forces can wield the power they gain from the Internet to steer and persuade Internet users in such a way that the “curated experience” of the Internet is increasingly another way of saying, “what a major corporation thinks you (should) want.”


    Breaking through the ethereal airs of the Internet, Taylor also grounds her argument in the material realities of the digital realm. While it is true that more and more people are increasingly online, Taylor emphasizes that there are still many without access and that the high-speed access enjoyed by some is not had by one and all. Furthermore, all of this access, all of these fanciful devices, all of these democratic dreams are reliant upon a physical infrastructure shot through with dangerous mining conditions, wretched laboring facilities, and toxic dumps where discarded devices eventually go to decay. Those who are able to enjoy the Internet as a positive feature in their day-to-day life are rarely the same people who worked in the mines, the assembly plants, or who will have to live on the land that has been blighted by e-waste.

    While Taylor refuses to ignore the many downsides associated with the Internet age she remains fixed on its positive potential. The book concludes without offering a simplistic list of solutions but nevertheless ends with a sense that those who care about the Internet’s non-corporate potential need to work to build a “sustainable digital future” (183). Though there are certainly powerful interests profiting from the current state of the Internet the fact remains that (in a historical sense) the Internet is rather young, and there is still time to challenge the shape it is taking. Considering what needs to be done, Taylor notes: “The solutions we need require collective, political action.” (218)

    It is a suggestion that carries a sentiment that people can band together to reassert control over the online commons that are steadily being enclosed by corporate interests. By considering the Internet as a public utility (a point being discussed at the moment with regard to Net Neutrality) and by focusing on democratic values instead of financial values – it may be possible for people to reverse (or at least slow) the corporate wave which is washing over the Internet.

    After all, the Internet is the result of massive public investment, so why is it that it has been delivered into corporate hands? Ultimately, Taylor concludes (in a chapter titled “In Defense of the Commons: A Manifesto for Sustainable Culture”) that if people want the Internet to be a “people’s platform” they will have to organize and fight for it (“collective, political”). In a time when the Internet is an important feature of society, it makes a difference if the Internet is an open “people’s platform” or a highly (if subtly) controlled corporate theme park. “The People’s Platform” requires people who care to raise their voices…such as the people who have read Astra Taylor’s book, perhaps.

    * * * * *

    With The People’s Platform Astra Taylor has made an effective and interesting contribution to the discussion around the nature of the Internet and its future. By emphasizing a political and economic critique she is able to pull the Internet away from a utopian fantasy in order to analyze it in terms of the competing forces that have shaped (and continue to shape) it. The perspective that Taylor brings, as a documentary filmmaker, allows her to drop the journalistic façade of objectivity in order to genuinely and forcefully engage with issues pertaining to the compensation of cultural creators in the age of digital dissemination. The sections Taylor writes on the misogyny one encounters online and on e-waste make this book particularly noteworthy. Though each chapter of The People’s Platform could likely be extended into an entire book, it is in their interconnections that Taylor is able to demonstrate the layers of issues that are making such a mess of the Internet today. For the problem facing the online realm is not just corporate control – it is a slew of issues that need to be recognized in total (and in their interconnected nature) if any type of response is to be mounted.

    Though The People’s Platform is ostensibly about a conflict regarding the future of the Internet, the book is itself a site of conflicting sentiments. Though Taylor – at the outset – aims to avoid aligning herself with the “cheerleaders of progress” or “the prophets of doom” (4) the book that emerges is one that is in the stands of the “cheerleaders of progress” (even if with slight misgivings about being in those stands). The book’s title suggests that even with all of the problems associated with the Internet it still represents something promising, something worth fighting to “take back.” It is a point that is particularly troublesome to consider after Taylor’s description of labor conditions and e-waste. For one of the main questions that emerges towards the end of Taylor’s book – though it is not one she directly poses – makes problematic the book’s title, that question being: which “people” are being described in “the people’s platform?”


    It may be tempting to answer such a question with a simplistic “well, all of the people” yet such a response is inadequate in light of the way that Taylor’s book clearly discusses the layers of control and dominance one finds surrounding the Internet. Can the Internet be “the people’s platform” for writers, journalists, documentary filmmakers, and activists with access to digital tools? Sure. But what of those described in the e-waste chapter – people living in oppressive conditions and toiling in factories where building digital devices puts them at risk of cancer or disassembling such devices poisons them and their families? Those people count as well, but those upon whom “the people’s platform” is built seem to be crushed beneath it, not able to get on top of it – to stand on “the people’s platform” is to stand on the hunched shoulders of others. It is true that Taylor takes this into account in emphasizing that something needs to be done to recognize and rectify this matter – but insofar as the material tools “the people” use to reach the Internet are built upon the repression and oppression of other people, it sours the very notion of the Internet as “the people’s platform.”

    This in turn raises another question: what would a genuine “people’s platform” look like? In the conclusion to the book Taylor attempts to answer this question by arguing for political action and increased democratic control over the Internet; however, one can easily imagine classifying the Internet as a “public utility” without doing anything to change the laboring conditions of those who build devices. Indeed, the darkly amusing element of The People’s Platform is that Taylor answers this question brilliantly on the second page of her book and then spends the following two hundred and thirty pages ignoring this answer.

    Taylor begins The People’s Platform with an anecdote about her youth in the pre-Internet (or pre-high speed Internet) era, wherein she recalls working on a small personally assembled magazine (a “zine”) which she would then have printed and distribute to friends and a variety of local shops. Looking back upon her time making zines, Taylor writes:

    “Today any kid with a smartphone and a message has the potential to reach more people with the push of a button than I did during two years of self-publishing.” (2)

    These lines from Taylor come only a sentence after she considers how her access to easy photocopying (for her zine) made it easier for her than it had been for earlier would-be publishers. Indeed, Taylor recalls:

    “a veteran political organizer told me how he and his friends had to sell blood in order to raise the funds to buy a mimeograph machine so they could make a newsletter in the early sixties.” (2)

    There are a few subtle moments in the above lines (from the second page of Taylor’s book) that say far more about a “people’s platform” than they let on. It is true that a smartphone gives a person “the potential to reach more people” but as the rest of Taylor’s book makes clear – it is not necessarily the case that people really do “reach more people” online. There are certainly wild success stories, but for “any kid” their reach with their smartphone may not be much greater than the number of people reachable with a photocopied zine. Furthermore, the zine audience might have been more engaged and receptive than the idle scanner of Tweets or Facebook updates – the smartphone may deliver more potential but actually achieve less.

    Nevertheless, the key aspect is Taylor’s comment about the “veteran political organizer” – this organizer (“and his friends”) were able to “buy a mimeograph machine so they could make a newsletter.” Is this different from buying a laptop computer, Internet access, and a domain name? Actually? Yes. Yes, it is. For once those newsletter makers bought the mimeograph machine they were in control of it – they did not need to worry about its Terms of Service changing, about pop-up advertisements, about their movements being tracked through the device, about the NSA having installed a convenient backdoor – and frankly there’s a good chance that the mimeograph machine they purchased had a much longer life than any laptop they would purchase today. Again – they bought and were able to control the means for disseminating their message; one cannot truly buy all of the means necessary for disseminating an online message (when one includes cables, ISPs, etc…).

    The contrast between the mimeograph machine and the Internet raises the question of which types of technologies represent genuine people’s platforms and which merely offer potential “people’s platforms” (note the quotation marks). This is not to say that mimeograph machines are perfect (after all somebody did build that machine) but when considering technology in a democratic sense it is important to puzzle over whether or not (to borrow Lewis Mumford’s terminology) the tool itself is “authoritarian” or “democratic.” The way the Internet appears in Taylor’s book – with its massive infrastructure, propensity for centralized control, and material reality built upon toxic materials – should at the very least make one question to what extent the Internet is genuinely a democratic “people’s” tool. Or, whether it is simply such a tool for those who are able to enjoy the bulk of the benefits and a minimum of the downsides. Taylor clearly does not want to be accused of being a “prophet of doom” – or of being a prophet for profit – but the sad result is that she jumps over the genuine people’s platform she describes on the second page in favor of building an argument for a platform that, by book’s end, seems to hardly be one for “the people” in any but a narrow sense of “the people.”

    The People’s Platform: Taking Back Power and Culture in the Digital Age is a well written, solidly researched, and effectively argued book that raises many valuable questions. The book offers no simplistic panaceas but instead forces the reader to think through the issues – oftentimes by forcing them to confront uncomfortable facts about digital technologies (such as e-waste). As Taylor uncovers and discusses issue after bias after challenge regarding the Internet, the question that haunts her text is whether or not the platform she is describing – the Internet – is really worthy of being called “The People’s Platform.” And if so, to which “people” does this apply?

    The People’s Platform is well worth reading – but it is not the end of the conversation. It is the beginning of the conversation.

    And it is a conversation that is desperately needed.

    __

    The People’s Platform: Taking Back Power and Culture in the Digital Age
    by Astra Taylor
    Metropolitan Books, 2014

    __

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, alternative forms of technology, and libraries as models of resistance. Using the moniker “The Luddbrarian” Loeb writes at the blog librarianshipwreck, which is where this review originally appeared.