Category: The b2o Review

The b2o Review is a non-peer-reviewed publication, published and edited by the boundary 2 editorial collective and specific topic editors, featuring book reviews, interventions, videos, and collaborative projects.

    Towards a Bright Mountain: Laudato Si' as Critique of Technology

    by Zachary Loeb

    ~

    “We hate the people who make us form the connections we do not want to form.” – Simone Weil

    1. Repairing Our Common Home

    When confronted with the unsettling reality of the world it is easy to feel overwhelmed and insignificant. This feeling of powerlessness may give rise to a temptation to retreat – or simply to shrug – and though people may suspect that they bear some responsibility for the state of affairs in which they are embroiled, the scale of the problems makes individuals doubtful that they can make a difference. In this context, the refrain “well, it could always be worse” becomes a sort of inured coping strategy, though this dark prophecy has a tendency to prove itself true week after week and year after year. Just saying that things could be worse than they presently are does nothing to prevent things from deteriorating further. It can be rather liberating to decide that one is powerless, to conclude that one’s actions do not truly matter, to imagine that one will be long dead by the time the bill comes due – for taking such positions enables one to avoid doing something difficult: changing.

    A change is coming. Indeed, the change is already here. The question is whether people are willing to consciously change to meet this challenge or if they will only change when they truly have no other option.

    The matter of change is at the core of Pope Francis’s recent encyclical Laudato Si’ (“Praise be to You”). Much of the discussion around Laudato Si’ has characterized the document as being narrowly focused on climate change and the environment. Though Laudato Si’ has much to say about the environment, and the threat climate change poses, it is rather reductive to cast Laudato Si’ as “the Pope’s encyclical about the environment.” Granted, that many are describing the encyclical in such terms is understandable, as framing it in that manner makes it appear quaint – and may lead many to conclude that they do not need to spend the time reading through the encyclical’s 245 sections (roughly 200 pages). True, Pope Francis is interested in climate change, but in the encyclical he proves far more interested in the shifts in the social, economic, and political climate that have allowed climate change to advance. The importance of Laudato Si’ is precisely that it is less about climate change than it is about the need for humanity to change, as Pope Francis writes:

    “we cannot adequately combat environmental degradation unless we attend to causes related to human and social degradation.” (Francis, no. 48)

    And though the encyclical is filled with numerous pithy aphorisms it is a text that is worth engaging in its entirety.

    Lest there be any doubt, Laudato Si’ is a difficult text to read. Not because it is written in archaic prose, or because it assumes the reader is learned in theology, but because it is discomforting. Laudato Si’ does not tell the reader that they are responsible for the world; instead, it reminds them that they have always been responsible for the world, and then points to some of the reasons why this obligation may have been forgotten. The encyclical calls on those with their heads in the clouds (or head in “the cloud”) to see that they are trampling the poor and the planet underfoot. Pope Francis has the audacity to suggest, despite what the magazine covers and advertisements tell us, that there is no easy solution, and that if we are honest with ourselves we are not fulfilled by consumerism. What Laudato Si’ represents is an unabashed ethical assault on high-tech/high-consumption life in affluent nations. Yet it is not an angry diatribe. Insofar as the encyclical represents a hammer, it is not a blunt instrument with which one bludgeons foes into submission, but a useful tool one might take up to pull out the rusted old nails in order to build again, as Pope Francis writes:

    “Humanity still has the ability to work together in building our common home.” (Francis, no. 13)

    Laudato Si’ is a work of intense, even radical, social criticism in the fine raiment of a papal encyclical. The text contains an impassioned critique of technology, an ethically rooted castigation of capitalism, a defense of the environment that emphasizes that humans are part of that same environment, and a demand that people accept responsibility. There is much in Laudato Si’ that those well versed in activism, organizing, environmentalism, critical theory, the critique of technology, radical political economy (and so forth) will find familiar – and it is a document that those bearing an interest in the aforementioned areas would do well to consider. While the encyclical (it was written by the Pope, after all) contains numerous references to Jesus, God, the Church, and the saints – it is clear that Pope Francis intends the document for a wide (not exclusively Catholic, or even Christian) readership. Indeed, those versed in other religious traditions will likely find much in the encyclical that echoes their own beliefs – and the same can likely be said of those interested in ethics with or without the presence of God. While many sections of Laudato Si’ speak to the religious obligation of believers, Pope Francis makes a point of being inclusive to those of different faiths (and no faith) – an inclusion which speaks to his recognition that the problems facing humanity can only be solved by all of humanity. After all:

    “we need only take a frank look at the facts to see that our common home is falling into serious disrepair.” (Francis, no. 61)

    The term “common home” refers to the planet and all those – regardless of their faith – who dwell there.

    Nevertheless, there are several sections in Laudato Si’ that will serve to remind the reader that Pope Francis is the male head of a patriarchal organization. Pope Francis stands firm in his commitment to the poor, and makes numerous comments about the rights of indigenous communities – but he has little to say about women. While women certainly number amongst the poor and indigenous, Laudato Si’ does not devote attention to the ways in which the theologies and ideologies of dominance that have wreaked havoc on the planet have also oppressed women. It is perhaps unsurprising that the only woman Laudato Si’ focuses on at any length is Mary, and that throughout the encyclical Pope Francis continually feminizes nature whilst referring to God with terms such as “Father.” The importance of equality is a theme which is revisited numerous times in Laudato Si’ and though Pope Francis addresses his readers as “sisters and brothers” it is worth wondering whether or not this entails true equality between all people – regardless of gender. It is vital to recognize this shortcoming of Laudato Si’ – as it is a flaw that undermines much of the ethical heft of the argument.

    In the encyclical Pope Francis laments the lack of concern being shown to those – who are largely poor – already struggling against the rising tide of climate change, noting:

    “Our lack of response to these tragedies involving our brothers and sisters points to the loss of that sense of responsibility to our fellow men and women upon which all civil society is founded.” (Francis, no. 25)

    Yet it is worth pushing on this “sense of responsibility to our fellow men and women” – and doing so involves a recognition that too often throughout history (and still today) “civil society” has been founded on an emphasis on “fellow men” and not necessarily upon women. In considering responsibilities towards other people Simone Weil wrote:

    “The object of any obligation, in the realm of human affairs, is always the human being as such. There exists an obligation towards every human being for the sole reason that he or she is a human being, without any other condition requiring to be fulfilled, and even without any recognition of such obligation on the part of the individual concerned.” (Weil, 5 – The Need for Roots)

    To recognize that the obligation is due to “the human being as such” – which seems to be something Pope Francis is claiming – necessitates acknowledging that “the human being” is still often defined as male. And this is a bias that can easily be replicated, even in encyclicals that tout the importance of equality.

    There are aspects of Laudato Si’ that will give readers cause to furrow their brows; however, it would be unfortunate if the shortcomings of the encyclical led people to dismiss it completely. After all, Laudato Si’ is not a document that one reads, it is a text with which one wrestles. And, as befits a piece written by a former nightclub bouncer, Laudato Si’ proves to be a challenging and scrappy combatant. Granted, the easiest way to emerge victorious from a bout is to refuse to engage in it in the first place – which is the tactic that many seem to be taking towards Laudato Si’. Yet it should be noted that those whose responses are variations of “the Pope should stick to religion” are largely revealing that they have not seriously engaged with the encyclical. Laudato Si’ does not claim to be a scientific document, but instead recognizes – in understated terms – that:

    “A very solid scientific consensus indicates that we are presently witnessing a disturbing warming of the climate system.” (Francis, no. 23)

    And that,

    “Climate change is a global problem with grave implications: environmental, social, economic, political and for the distribution of goods. It represents one of the principal challenges facing humanity in our day. Its worst impact will probably be felt by developing countries in the coming decades.” (Francis, no. 25)

    However, when those who make a habit of paying no heed to scientists themselves make derisive comments that the Pope is not a scientist, they are primarily delivering a television-news-bite-ready quip which ignores that the climate Pope Francis is mainly concerned with is today’s social, economic, and political climate.

    As has been previously noted, Laudato Si’ is as much a work of stinging social criticism as it is a theological document. It is a text which benefits from the particular analysis of people – be they workers, theologians, activists, or scholars, and the list could go on – with knowledge in the particular fields the encyclical touches upon. And yet, one of the most striking aspects of the encyclical – that which poses a particular challenge to the status quo – is the way in which the document engages with technology.

    For, it may well be that Laudato Si’ will change the tone of current discussions around technology and its role in our lives.

    At least one might hope that it will do so.

    Image source: Photo of Pope Francis, Christoph Wagener via Wikipedia, with further modifications by the author of this piece.

    2. Meet the New Gods, Not the Same as the Old God

    Perhaps being a person of faith makes it easier to recognize the faith of others. Or, put another way, perhaps belief in God makes one attuned to the appearance of new gods. While some studies have shown that in recent years the number of individuals who do not adhere to a particular religious doctrine has risen, Laudato Si’ suggests – though not specifically in these terms – that people may have simply turned to new religions. In the book To Have or To Be?, Erich Fromm uses the term “religion” not to:

    “refer to a system that has necessarily to do with a concept of God or with idols or even to a system perceived as religion, but to any group-shared system of thought and action that offers the individual a frame of orientation and an object of devotion.” (Fromm, 135 – italics in original)

    Though the author of Laudato Si’, obviously, subscribes to a belief system that has a heck-of-a-lot to do “with a concept of God” – the main position of the encyclical is staked out in opposition to the rise of a “group-shared system of thought” which has come to offer many people both “a frame of orientation and an object of devotion.” Pope Francis warns his readers against giving fealty and adoration to false gods – gods which are as appealing to atheists as they are to old-time believers. And while Laudato Si’ is not a document that seeks (not significantly, at least) to draw people into the Catholic church, it is a document that warns people against the religion of technology. After all, we cannot return to the Garden of Eden by biting into an Apple product.

    It is worth recognizing that there are many reasons why the religion of technology so easily wins converts. The world is a mess and the news reports are filled with a steady flow of horrors – the dangers of environmental degradation seem to grow starker by the day, as scientists issue increasingly dire predictions that we may have already passed the point at which we needed to act. Yet one of the few areas that continually operates as a site of unbounded optimism is the stream of missives fired off by the technology sector and its boosters. Wearable technology, self-driving cars, the Internet of Things, delivery drones, artificial intelligence, virtual reality – technology provides a vision of the future that is not fixated on rising sea levels and extinction. Indeed, against the backdrop of extinction some even predict that through the power of techno-science humans may not be far off from being able to bring back species that had previously gone extinct.

    Technology has become a site of millions of minor miracles that have drawn legions of adherents to the technological god and its sainted corporations – and while technology has been a force present with humans for nearly as long as there have been humans, technology today seems increasingly to be presented in a way that encourages people to bask in its uncanny glow. Contemporary technology – especially of the Internet connected variety – promises individuals that they will never be alone, that they will never be bored, that they will never get lost, and that they will never have a question for which they cannot execute a web search and find an answer. If older religions spoke of a god who was always watching, and always with the believer, then the smart phone replicates and reifies these beliefs – for it is always watching, and it is always with the believer. To return to Fromm’s description of religion it should be fairly apparent that technology today provides people with “a frame of orientation and an object of devotion.” It is thus not simply that technology comes to be presented as a solution to present problems, but that technology comes to be presented as a form of salvation from all problems. Why pray if “there’s an app for that”?

    In Laudato Si’, Pope Francis warns against this new religion by observing:

    “Life gradually becomes a surrender to situations conditioned by technology, itself viewed as the principal key to the meaning of existence.” (Francis, no. 110)

    Granted, the question should be asked: what is “the meaning of existence” supplied by contemporary technology? The various denominations of the religion of technology are skilled at offering appealing answers to this question filled with carefully tested slogans about making the world “more open and connected.” What the religion of technology continually offers is not so much a way of being in the world as a way of escaping from the world. Without mincing words, the world described in Laudato Si’ is rather distressing: it is a world of vast economic inequality, rising sea levels, misery, existential uncertainty, mountains of filth discarded by affluent nations (including e-waste), and the prospects are grim. By comparison the religion of technology provides a shiny vision of the future, with the promise of escape from earthly concerns through virtual reality, delivery on demand, and the truly transcendent dream of becoming one with machines. The religion of technology is not concerned with the next life, or with the lives of future generations, it is about constructing a new Eden in the now, for those who can afford the right toys. Even if constructing this heaven consigns much of the world’s population to hell. People may not be bending their necks in prayer, but they’re certainly bending their necks to glance at their smart phones. As David Noble wrote:

    “A thousand years in the making, the religion of technology has become the common enchantment, not only of the designers of technology but also of those caught up in, and undone by, their godly designs. The expectation of ultimate salvation through technology, whatever the immediate human and social costs, has become the unspoken orthodoxy, reinforced by a market-induced enthusiasm for novelty and sanctioned by millenarian yearnings for new beginnings. This popular faith, subliminally indulged and intensified by corporate, government, and media pitchmen, inspires an awed deference to the practitioners and their promises of deliverance while diverting attention from more urgent concerns.” (Noble, 207)

    Against this religious embrace of technology, and the elevation of its evangels, Laudato Si’ puts forth a reminder that one can, and should, appreciate the tools which have been invented – but one should not worship them. To return to Erich Fromm:

    “The question is not one of religion or not? but of which kind of religion? – whether it is one that furthers human development, the unfolding of specifically human powers, or one that paralyzes human growth…our religious character may be considered an aspect of our character structure, for we are what we are devoted to, and what we are devoted to is what motivates our conduct. Often, however, individuals are not even aware of the real objects of their personal devotion and mistake their ‘official’ beliefs for their real, though secret religion.” (Fromm, 135-136)

    It is evident that Pope Francis considers the worship of technology to be a significant barrier to further “human development” as it “paralyzes human growth.” Technology is not the only false religion against which the encyclical warns – the cult of self worship, unbridled capitalism, the glorification of violence, and the revival tent of consumerism are all considered as false faiths. They draw adherents in by proffering salvation and prescribing a simple course of action – but instead of allowing their faithful true transcendence they instead warp their followers into sycophants.

    Yet the particularly nefarious aspect of the religion of technology, in line with the quotation from Fromm, is the way in which it is a faith to which many subscribe without their necessarily being aware of it. This is particularly significant in the way that it links to the encyclical’s larger concern with the environment and with the poor. Those in affluent nations who enjoy the pleasures of high-tech lifestyles – the faithful in the religion of technology – are largely spared the serious downsides of high-technology. Sure, individuals may complain of aching necks, sore thumbs, difficulty sleeping, and a creeping sense of dissatisfaction – but such issues do not tell of the true cost of technology. What often goes unseen by those enjoying their smart phones are the exploitative regimes of mineral extraction, the harsh labor conditions where devices are assembled, and the toxic wreckage of e-waste dumps. Furthermore, insofar as high-tech devices (and the cloud) require large amounts of energy it is worth considering the degree to which high-tech lifestyles contribute to the voracious energy consumption that helps drive climate change. Granted, those who suffer from these technological downsides are generally not the people enjoying the technological devices.

    And though Laudato Si’ may have a particular view of salvation – one need not subscribe to that religion to recognize that the religion of technology is not the faith of the solution.

    But the faith of the problem.

    3. Laudato Si’ as Critique of Technology

    Relatively early in the encyclical, Pope Francis decries how, against the background of “media and the digital world”:

    “the great sages of the past run the risk of going unheard amid the noise and distractions of an information overload.” (Francis, no. 47)

    Reading through Laudato Si’ it becomes fairly apparent who Pope Francis considers many of these “great sages” to be. For the most part Pope Francis cites the encyclicals of his predecessors, declarations from Bishops’ conferences, the bible, and theologians who are safely ensconced in the Church’s wheelhouse. While such citations certainly help to establish that the ideas being put forth in Laudato Si’ have been circulating in the Catholic Church for some time – Pope Francis’s invocation of “great sages of the past…going unheard” raises a larger question. How much of the encyclical is truly new and how much is a reiteration of older ideas that have gone “unheard”? In fairness, the social critique being advanced by Laudato Si’ may strike many people as novel – particularly in terms of its ethically combative willingness to take on technology – but it may be that the significant thing about Laudato Si’ is not that the message is new, but that the messenger is new. Without wanting to decry or denigrate Laudato Si’ it is worth noting that much of the argument being presented in the document could previously be found in works by thinkers associated with the critique of technology, notably Lewis Mumford and Jacques Ellul. Indeed, the following statement, from Lewis Mumford’s Art and Technics, could have appeared in Laudato Si’ without seeming out of place:

    “We overvalue the technical instrument: the machine has become our main source of magic, and it has given us a false sense of possessing godlike powers. An age that has devaluated all its symbols has turned the machine itself into a universal symbol: a god to be worshiped.” (Mumford, 138 – Art and Technics)

    The critique of technology does not represent a cohesive school of thought – rather it is a tendency within several fields (history and philosophy of technology, STS, media ecology, critical theory) that places particular emphasis on the negative impacts of technology. What many of these thinkers emphasized was the way in which the choice of certain technologies over others winds up having profound impacts upon the shape of a society. Thus, within the critique of technology, it is not a matter of anything so ridiculously reductive as “technology is bad” but of considering what alternative forms technology could take: “democratic technics” (Mumford), “convivial tools” (Illich), “appropriate technology” (Schumacher), “liberatory technology” (Bookchin), and so forth. Yet what is particularly important is the fact that the serious critique of technology was directly tied to a critique of the broader society. And thus, Mumford also wrote extensively about urban planning, architecture and cities – while Ellul wrote as much (perhaps more) about theological issues (Ellul was a devout individual who described himself as a Christian anarchist).

    With the rise of ever more powerful and potentially catastrophic technological systems, many thinkers associated with the critique of technology began issuing dire warnings about the techno-science wrought danger in which humanity had placed itself. With the appearance of the atomic bomb, humanity had invented the way to potentially bring an end to the whole of the human project. Galled by the way in which technology seemed to be drawing ever more power to itself, Ellul warned of the ascendance of “technique” while Mumford cautioned of the emergence of “the megamachine” with such terms being used to denote not simply technology and machinery but the fusion of techno-science with social, economic and political power – though Pope Francis seems to prefer to use the term “technological paradigm” or “technocratic paradigm” instead of “megamachine.” When Pope Francis writes:

    “The technological paradigm has become so dominant that it would be difficult to do without its resources and even more difficult to utilize them without being dominated by their internal logic.” (Francis, no. 108)

    Or:

    “the new power structures based on the techno-economic paradigm may overwhelm not only our politics but also freedom and justice.” (Francis, no. 53)

    Or:

    “The alliance between the economy and technology ends up sidelining anything unrelated to its immediate interests.” (Francis, no. 54)

    These are comments that are squarely in line with Ellul’s comment that:

    “Technical civilization means that our civilization is constructed by technique (makes a part of civilization only that which belongs to technique), for technique (in that everything in this civilization must serve a technical end), and is exclusively technique (in that it excludes whatever is not technique or reduces it to technical forms).” (Ellul, 128 – italics in original)

    A particular sign of the growing dominance of technology, and the techno-utopian thinking that everywhere evangelizes for technology, is the belief that to every problem there is a technological solution. Such wishful thinking about technology as the universal panacea was a tendency highly criticized by thinkers like Mumford and Ellul. Pope Francis chastises the prevalence of this belief at several points, writing:

    “Obstructionist attitudes, even on the part of believers, can range from denial of the problem to indifference, nonchalant resignation or blind confidence in technical solutions.” (Francis, no. 14)

    And the encyclical returns to this, decrying:

    “Technology, which, linked to business interests, is presented as the only way of solving these problems,” (Francis, no. 20)

    There is more than a passing similarity between the above two quotations from Pope Francis’s 2015 encyclical and the following quotation from Lewis Mumford’s book Technics and Civilization (first published in 1934):

    “But the belief that the social dilemmas created by the machine can be solved merely by inventing more machines is today a sign of half-baked thinking which verges close to quackery.” (Mumford, 367)

    At the very least this juxtaposition should help establish that there is nothing new about those in power proclaiming that technology will solve everything, but just the same there is nothing particularly new about forcefully criticizing this unblinking faith in technological solutions. If one wanted to do so it would not be an overly difficult task to comb through Laudato Si’ – particularly “Chapter Three: The Human Roots of the Ecological Crisis” – and find a couple of paragraphs by Mumford, Ellul or another prominent critic of technology in which precisely the same thing is being said. After all, if one were to try to capture the essence of the critique of technology in two sentences, one could do significantly worse than the following lines from Laudato Si’:

    “We have to accept that technological products are not neutral, for they create a framework which ends up conditioning lifestyles and shaping social possibilities along the lines dictated by the interests of certain powerful groups. Decisions which may seem purely instrumental are in reality decisions about the kind of society we want to build.” (Francis, no. 107)

    Granted, the line “technological products are not neutral” may have come as something of a disquieting statement to some readers of Laudato Si’ even if it has long been understood by historians of technology. Nevertheless, it is the emphasis placed on the matter of “the kind of society we want to build” that is of particular importance. For the encyclical does not simply lament the state of the technological world, it advances an alternative vision of technology – one which recognizes the tremendous potential of technological advances but sees how this potential goes unfulfilled. Laudato Si’ is a document which is skeptical of the belief that smart phones have made people happier, and it is a text which shows a clear unwillingness to believe that large tech companies are driven by much other than their own interests. The encyclical bears the mark of a writer who believes in a powerful God and that deity’s prophets, but has little time for would-be all-powerful corporations and their lust for profits. One of the themes that ran continuously throughout Lewis Mumford’s work was his belief that the “good life” had been overshadowed by the pursuit of the “goods life” – and a similar theme runs through Laudato Si’ wherein the analysis of climate change, the environment, and what is owed to the poor, is couched in a call to reinvigorate the “good life” while recognizing that the “goods life” is a farce. Despite the power of the “technological paradigm,” Pope Francis remains hopeful regarding the power of people, writing:

    “We have the freedom needed to limit and direct technology; we can put it at the service of another type of progress, one which is healthier, more human, more social, more integral. Liberation from the dominant technocratic paradigm does in fact happen sometimes, for example, when cooperatives of small producers adopt less polluting methods of production, and opt for a non-consumerist model of life, recreation and community. Or when technology is directed primarily to resolving people’s concrete problems, truly helping them live with more dignity and less suffering.” (Francis, no. 112)

    In the above quotation, what Pope Francis is arguing for is the need for, to use Mumford’s terminology, “democratic technics” to replace “authoritarian technics.” Or, to use Ivan Illich’s terms (and Illich was himself a Catholic priest) the emergence of a “convivial society” centered around “convivial tools.” Granted, as is perhaps not particularly surprising for a call to action, Pope Francis tends to be rather optimistic about the prospects individuals have for limiting and directing technology. For, one of the great fears shared amongst numerous critics of technology was the belief that the concentration of power in “technique” or “the megamachine” or the “technological paradigm” gradually eliminated the freedom to limit or direct it. That potential alternatives emerged was clear, but such paths were quickly incorporated back into the “technological paradigm.” As Ellul observed:

    “To be in technical equilibrium, man cannot live by any but the technical reality, and he cannot escape from the social aspect of things which technique designs for him. And the more his needs are accounted for, the more he is integrated into the technical matrix.” (Ellul, 224)

    In other words, “technique” gradually eliminates the alternatives to itself. To live in a society shaped by such forces requires an individual to submit to those forces as well. What Laudato Si’ almost desperately seeks to claim, to the contrary, is that it is not too late, that people still have the ability “to limit and direct technology” provided they tear themselves away from their high-tech hallucinations. And this earnest belief is the hopeful core of the encyclical.

    Ethically impassioned books and articles decrying what a high consumption lifestyle wreaks upon the planet, and exhorting people to think of those who do not share in the thrill of technological decadence, are not difficult to come by. And thus, the most radical and most striking aspect of Laudato Si’ may be the sections devoted to technology. For what the encyclical does so impressively is expressly link environmental destruction and the neglect of the poor with the religious allegiance to high-tech devices. Numerous books and articles appear on a regular basis lamenting the current state of the technological world – and yet too often the authors of such texts seem terrified of being labeled “anti-technology.” Therefore, the authors tie themselves in knots trying to stake out a position that is not evangelizing for technology, but at the same time they refuse to become heretics to the religion of technology – and as a result they easily become the permitted voices of dissent who only seem to empower the evangels as they conduct the debate on the terms of technological society. They try to reform the religion of technology instead of recognizing that it is a faith premised upon worshiping a false god. After all, one is permitted to say that Google is getting too big, that the Apple Watch is unnecessary, and that the Internet should be called “the surveillance mall” – but to say:

    “There is a growing awareness that scientific and technological progress cannot be equated with the progress of humanity and history, a growing sense that the way to a better future lies elsewhere.” (Francis, no. 113)

    Well…one rarely hears such arguments today, precisely because the dominant ideology of our day places ample faith in equating “scientific and technological progress” with progress as such. Granted, that was the type of argument being made by the likes of Mumford and Ellul – though the present predicament makes it woefully evident that too few heeded their warnings. Indeed, a leitmotif that can be detected amongst the works of many critics of technology is a desire to be proved wrong, as Mumford wrote:

    “I would die happy if I knew that on my tombstone could be written these words, ‘This man was an absolute fool. None of the disastrous things that he reluctantly predicted ever came to pass!’ Yes: then I could die happy.” (Mumford, 528 – My Works and Days)

    Yet to read over Mumford’s predictions in the present day is to understand why those words are not carved into his tombstone – for Mumford was not an “absolute fool,” he was acutely prescient. Though, alas, the likes of Mumford and Ellul too easily number amongst the ranks of “the great sages of the past” who, in Pope Francis’s words, “run the risk of going unheard amid the noise and distractions of an information overload.”

    Despite the issues that various individuals will certainly have with Laudato Si’ – ranging from its stance towards women to its religious tonality – the element that is likely to disquiet the largest group is its serious critique of technology. Thus, it is somewhat amusing to consider the number of articles that have been penned about the encyclical which focus on the warnings about climate change but say little about Pope Francis’s comments about the danger of the “technological paradigm.” For the encyclical commits a profound act of heresy against the contemporary religion of technology – it dares to suggest that we have fallen for the PR spin about the devices in our pockets, it asks us to consider if these devices are truly filling an existential void or if they are simply distracting us from having to think about this absence, and the encyclical reminds us that we need not be passive consumers of technology. These arguments about technology are not new, and it is not new to make them in ethically rich or religiously loaded language; however, these are arguments which are verboten in contemporary discourse about technology. Alas, those who make such claims are regularly derided as “Luddites” or “NIMBYs” and banished to the fringes. And yet the historic Luddites were simply workers who felt they had the freedom “to limit and direct technology,” and as anybody who knows about e-waste can attest when people in affluent nations say “Not In My Back Yard” the toxic refuse simply winds up in somebody else’s back yard. Pope Francis writes that today:

    “It has become countercultural to choose a lifestyle whose goals are even partly independent of technology, of its costs and its power to globalize and make us all the same.” (Francis, no. 108)

    And yet, what Laudato Si’ may represent is an important turning point in discussions around technology, and a vital opportunity for a serious critique of technology to reemerge. For what Laudato Si’ does is advocate for a new cultural paradigm based upon harnessing technology as a tool instead of as an absolute. Furthermore, the inclusion of such a serious critique of technology in a widely discussed (and hopefully widely read) encyclical represents a point at which rigorously critiquing technology may be able to become less “countercultural.” Laudato Si’ is a profoundly pro-culture document insofar as it seeks to preserve human culture from being destroyed by the greed that is ruining the planet. It is a rare text that has the audacity to state: “you do not need that, and your desire for it is bad for you and bad for the planet.”

    Laudato Si’ is a piece of fierce social criticism, and like numerous works from the critique of technology, it is a text that recognizes that one cannot truly claim to critique a society without being willing to turn an equally critical gaze towards the way that society creates and uses technology. The critique of technology is not new, but it has been sorely underrepresented in contemporary thinking around technology. It has been cast as the province of outdated doom mongers, but as Pope Francis demonstrates, the critique of technology remains as vital and timely as ever.

    Too often of late, discussions about technology are conducted through rose-colored glasses, or worse, virtual reality headsets – Laudato Si’ dares to actually look at technology.

    And to demand that others do the same.

    4. The Bright Mountain

    The end of the world is easy.

    All it requires of us is that we do nothing, and what can be simpler than doing nothing? Besides, popular culture has made us quite comfortable with the imagery of dystopian states and collapsing cities. And yet the question to ask of every piece of dystopian fiction is “what did the world that paved the way for this terrible one look like?” To which the follow-up question should be: “did it look just like ours?” And to this, yet another follow-up question needs to be asked: “why didn’t people do something?” In a book bearing the uplifting title The Collapse of Western Civilization, Naomi Oreskes and Erik Conway analyze present inaction as if from the future, and write:

    “the people of Western civilization knew what was happening to them but were unable to stop it. Indeed, the most startling aspect of this story is just how much these people knew, and how unable they were to act upon what they knew.” (Oreskes and Conway, 1-2)

    This speaks to the fatalistic belief that despite what we know, things are not going to change, or that if change comes it will already be too late. One of the most interesting texts to emerge in recent years in the context of continually ignored environmental warnings is a slim volume titled Uncivilisation: The Dark Mountain Manifesto. It is the foundational text of a group of writers, artists, activists, and others that dares to take seriously the notion that we are not going to change in time. As the manifesto’s authors write:

    “Secretly, we all think we are doomed: even the politicians think this; even the environmentalists. Some of us deal with it by going shopping. Some deal with it by hoping it is true. Some give up in despair. Some work frantically to try and fend off the coming storm.” (Hine and Kingsnorth, 9)

    But the point is that change is coming – whether we believe it or not, and whether we want it or not. But what is one to do? The desire to retreat from the cacophony of modern society is nothing new, and it can easily sow the fields in which reactionary ideologies grow. Particularly problematic is that the rejection of the modern world often entails a sleight of hand whereby those in affluent nations are able to shirk their responsibility to the world’s poor even as they walk somberly, flagellating themselves into the foothills. Apocalyptic romanticism, whether it be of the accelerationist or primitivist variety, paints an evocative image of the world of today collapsing so that a new world can emerge – but what Laudato Si’ counters with is a morally impassioned cry to think of the billions of people who will suffer and die. Think of those for whom fleeing to the foothills is not an option. We do not need to take up residence in the woods like latter-day hermetic acolytes of Francis of Assisi; rather, we need to take that spirit and live it wherever we find ourselves.

    True, the easy retort to the claim “secretly, we all think we are doomed” is to reply “I do not think we are doomed, secretly or openly” – but to read climatologists’ predictions and then to watch politicians grouse, whilst mining companies seek to extract even more fossil fuels, is to hear that “secret” voice grow louder. People have always been predicting the end of the world, and here we still are, which leads many to simply shrug off dire concerns. Furthermore, many worry that putting too much emphasis on woebegone premonitions overwhelms people and leaves them unable and unwilling to act. Perhaps this is why Al Gore’s film An Inconvenient Truth concludes not by telling people they must be willing to fundamentally alter their high-tech/high-consumption lifestyles but instead simply tells them to recycle. In Laudato Si’ Pope Francis writes:

    “Doomsday predictions can no longer be met with irony or disdain. We may well be leaving to coming generations debris, desolation and filth.” (Francis, no. 161)

    Those lines, particularly the first of the two, should be the twenty-first century replacement for “Keep Calm and Carry On.” For what Laudato Si’ makes clear is that now is not the time to “Keep Calm” but to get very busy, and it is a text that knows that if we “Carry On” then we are skipping aimlessly towards the cliff’s edge. And yet one of the elements of the encyclical that needs to be highlighted is that it is a document that does not look hopefully towards a coming apocalypse. In the encyclical, environmental collapse is not seen as evidence that biblical preconditions for Armageddon are being fulfilled. The sorry state of the planet is not the result of God’s plan but is instead the result of humanity’s inability to plan. The problem is not evil, for as Simone Weil wrote:

    “It is not good which evil violates, for good is inviolate: only a degraded good can be violated.” (Weil, 70 – Gravity and Grace)

    It is that the good of which people are capable is rarely the good which people achieve. Even as possible tools for building the good life – such as technology – are degraded and mistaken for the good life. And thus the good is wasted, though it has not been destroyed.

    Throughout Laudato Si’, Pope Francis praises the merits of an ascetic life. And though the encyclical features numerous references to Saint Francis of Assisi, the argument is not that we must all abandon our homes to seek out new sanctuary in nature; instead, the need is to learn from the sense of love and wonder with which Saint Francis approached nature. Complete withdrawal is not an option; to do so would be to shirk our responsibility – we live in this world and we bear responsibility for it and for other people. In the encyclical’s estimation, those living in affluent nations cannot seek to quietly slip from the scene, nor can they claim they are doing enough by bringing their own bags to the grocery store. Rather, responsibility entails recognizing that the lifestyles of affluent nations have helped sow misery in many parts of the world – it is unethical for us to try to save our own cities without realizing the part we have played in ruining the cities of others.

    Pope Francis writes – and here an entire section shall be quoted:

    “Many things have to change course, but it is we human beings above all who need to change. We lack an awareness of our common origin, of our mutual belonging, and of a future to be shared with everyone. This basic awareness would enable the development of new conviction, attitudes and forms of life. A great cultural, spiritual and educational challenge stands before us, and it will demand that we set out on the long path of renewal.” (Francis, no. 202)

    Laudato Si’ does not suggest that we can escape from our problems, that we can withdraw, or that we can “keep calm and carry on.” And though the encyclical is not a manifesto, if it were one it could well be called “The Bright Mountain Manifesto.” For what Laudato Si’ reminds its readers time and time again is that even though we face great challenges it remains within our power to address them, though we must act soon and decisively if we are to effect a change. We do not need to wander towards a mystery-shrouded mountain in the distance, but work to make the peaks near us glisten – it is not a matter of retreating from the world but of rebuilding it in a way that provides for all. Nobody needs to go hungry, our cities can be beautiful, our lifestyles can be fulfilling, our tools can be made to serve us as opposed to our being made to serve tools, people can recognize the immense debt they owe to each other – and working together we can make this a better world.

    Doing so will be difficult. It will require significant changes.

    But Laudato Si’ is a document that believes people can still accomplish this.

    In the end, Laudato Si’ is less about having faith in God than it is about having faith in people.

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck, on which an earlier version of this post first appeared. He is a frequent contributor to The b2 Review Digital Studies section.

    _____

    Works Cited

    Pope Francis. Encyclical Letter Laudato Si’ of the Holy Father Francis on Care For Our Common Home. Vatican Press, 2015. [Note: the numbers in all citations from this document refer to the section number, not the page number]

    Ellul, Jacques. The Technological Society. Vintage Books, 1964.

    Fromm, Erich. To Have or To Be? Harper & Row, Publishers, 1976.

    Hine, Dougald and Kingsnorth, Paul. Uncivilisation: The Dark Mountain Manifesto. The Dark Mountain Project, 2013.

    Mumford, Lewis. My Works and Days: A Personal Chronicle. Harcourt, Brace, Jovanovich, 1979.

    Mumford, Lewis. Art and Technics. Columbia University Press, 2000.

    Mumford, Lewis. Technics and Civilization. University of Chicago Press, 2010.

    Noble, David. The Religion of Technology. Penguin, 1999.

    Oreskes, Naomi and Conway, Erik M. The Collapse of Western Civilization: A View from the Future. Columbia University Press, 2014.

    Weil, Simone. The Need for Roots. Routledge Classics, 2002.

    Weil, Simone. Gravity and Grace. Routledge Classics, 2002. (the quote at the beginning of this piece is found on page 139 of this book)

  • Hubris and Heteronomy: A Review of Lessons in Secular Criticism

    Hubris and Heteronomy: A Review of Lessons in Secular Criticism

    A Review of Stathis Gourgouris’s Lessons in Secular Criticism

    by Jason Stevens

    In Spring 2013, boundary 2 published a special issue, Antinomies of the Postsecular, which assessed the so-called “turn to religion” in the humanities and social sciences. Under the movement term “postsecularism,” this academic turn to religion commends itself as a necessary response to “the return of religion” as a social and political force in contemporary life. Whether such a “return” has actually occurred and what is at stake in making this assertion was the subject of b2’s special issue. The goal of the contributors to Antinomies of the Postsecular, as editor Aamir Mufti explained in his introduction, was to expose “the internal conceptual incoherence” of postsecularism as “an emergent orthodoxy” and to question the “political affiliations” of secularism’s critics, as revealed “by their treatment of modern religiosity” (3, 4).

    One major source of incoherence is postsecularism’s account of secularization as a closed process with an expiration date for religion. By this misreading of Weber, to cite one of postsecularism’s bugbears, secularization is an abject failure. The global persistence of religion, which b2’s issue acknowledges as a neutral historical fact, is mistakenly interpreted as a resurgence or revival pressing up from cultures in resistance to secularism. The latter is conceived as an anti-popular and imperialistic instrument of domination, having its sources in European Enlightenment and the hubris of Western reason. Genealogical critiques of Enlightenment/secularism add to the irony of secularization’s alleged failure by detecting ghosted forms of “the sacred,” having Christian derivation, within reason’s self-understanding and within the liberal political imaginaries that rationalism underwrites. Reason is indebted to that which it disavows and tries to sequester; it was doomed to misprize the intimate entanglement of religion with culture and politics. Reacting to this misbegotten rule of reason, postsecularists resort to “culturalism”: shielding religion from external judgment by defending it as an expression of profoundly rooted local sensibilities (or, in more Foucauldian language, “practices” and “discourses”) buffering subjectivities against modernizing deracination and disciplinary schemes. Post-secularism’s attack on the ways that the Western liberal states have inscribed and bounded religion(s) thus frames these problems, which are certainly worthy of address, such that the secular is undermined as a source of analytical questioning while religion is insulated as a source of identity, filiation, and empowerment. 

    Whatever the merits of this understanding of “the return to religion” – b2’s contributors found few – the conclusions that it reaches align post-secularism with some strange bedfellows: the anti-secular positions of religious fundamentalisms and conservative political theologies as well as those of religiously inflected liberation movements. Post-secularists may not seek some of these political affiliations and may even find them undesirable in many particulars, but their reading of modernity’s ailments finds enemies in common. Proponents of skepticism and intellectual consensus, for example, can find themselves on the defensive because they have the effrontery to throw acids on a people’s traditional beliefs.

    Stathis Gourgouris, one of the scholars featured in Antinomies of the Postsecular, has elaborated his case against postsecularism in Lessons in Secular Criticism (Fordham, 2013), the first of a planned triptych that will include The Perils of the One and Nothing Sacred. Professor of Comparative Literature and Director of the Institute for the Comparative Literature and Society at Columbia University, Gourgouris comes to this project well-equipped by his previous works, the books Dream Nation: Enlightenment, Colonization, and the Institution of Modern Greece (Stanford 1996), Does Literature Think? Literature as Theory for an Antimythical Era (Stanford, 2003), Freud and Fundamentalism (Fordham 2010); his translations of sociologist Cornelius Castoriadis (who is an intellectual touchstone for Gourgouris in this book); his two essays for Antinomies of the Post-Secular, one of which, “Why I Am Not a Post-Secularist,” is reproduced here; and his heated debate with Saba Mahmood in The Immanent Frame, which was one of the highlights of the on-line journal’s 2008 exchange, “Is Critique Secular?”


    The “secular criticism” in Gourgouris’s title has its provenance in Edward Said’s The World, the Text, and the Critic (1983), and his set of “lessons” (in the post-structuralist sense of the leçon, or ceaselessly thinking reflexively) can be seen as extending b2’s ongoing task of theorizing what Said meant by his conjoining of “secular” and “critique.” For Gourgouris, the two are inextricable, the first in its worldly orientation making possible the articulation of the second. Secular criticism is “an experimental, often interrogative practice, alert to contingencies and skeptical toward whatever escapes the worldly”; particularly, it is skeptical toward any notion of “authority that is assumed to emerge from elsewhere,” toward any knowledge “presented as sovereign, unmarked by whatever social-historical institution actually possesses it” (13, 64, xiv). These knowledges include discourses of secularism that would make any legal-political boundary between religion and the state rest on a metaphysical distinction between the secular and the religious wrongly conceived as essences. This is Gourgouris’s key dialectical move: to preserve the secular as a practice and as “a space” that makes the practice possible, it must be defined over and beyond the limitations imposed on it by both academic post-secularism and secularism as an institutional power.

    Lessons is organized into six chapters, the first half breaking down flawed conceptions of the secular and the second half building Gourgouris’ case that secular criticism is necessary if we are to imagine more democratic societies than we presently know. Chapter One, “The Poiein of Secular Criticism,” disputes anthropologist Talal Asad’s effort to draw a lineage for the notion of critique that traces it to Platonic and Christian traditions. Asad discovers in critique a displaced religious attitude: a quest after and veneration of the Truth, abstracted from an image of God but still bearing the imprint of monotheism, for the Truth of the critic is unalterable, inalienable, and singular (8). In other words, Said’s fearless intellectual inherits a practice of thinking made possible by religious/mystical modes of contemplation and rigorous ascesis of the subjective. For Gourgouris, the irony Asad relishes in this situation is willfully produced by his genealogy, which does not so much trace continuities as force analogies between worldly criticism and a “theological desire.” For the analogy to function, it requires a representation of “secular” criticism (Asad would effectively put Said’s adjective in scare quotes) as the effort to clear man’s thinking for the revelation of “a hypergood.”

    Gourgouris instead sees critique as an activity like poiesis. Here Gourgouris is returning in capsule form to the theory of poetics that he develops at length in Does Literature Think? In that work, through meticulous close readings of Sophocles, Flaubert, Benjamin, Kafka, Celan, Genet, and DeLillo, Gourgouris models poiesis as a unique kind of cognition that requires the making of things not thought before, and that in making these things also unmakes what is given: “to form is to make form happen, to change form (including one’s own)” (11). Poiesis is a making of the new and unmaking of the known materials of society (discourses, images, narratives) that potentiates far-reaching self-alteration: “things that may indeed appear to be impossible in the present time . . . cannot be said to be generically impossible, impossible for all time” (26). As Gourgouris proceeds, poiesis is valuable because its most sophisticated artistic products dramatize what critique also endeavors to enact: autonomy (auto-nomos), understood here not as reason’s free submission to “the hypergood,” but as the questioning, historicizing, and pluralizing of the authorities (epistemological, political) to which self-altering subjects give only provisional consent.

    In trying to define secular criticism away from Said, Asad erroneously conceives it as a quest after a transcendental. The uncovered Truth, in this conceptualization, becomes a law given to the self from elsewhere, like a command from the almighty. In Gourgouris’s estimation, Asad makes the critic’s relation to reason heteronomous. Heteronomy, which receives greater elaboration in Chapter Four, is both a structure of decision and a state of alienation. In contrast to autonomy, heteronomy describes a structure in which “the law” (the reason for deciding) is given externally, from the other. For Gourgouris, all law is self-generated out of the social imaginaries of existing communities. Heteronomy therefore cannot exist except in a state where the law has been othered, occulted in a beyond that is made more real, more authoritative in being both beyond and more real, than the humble state in which men direct their own affairs. Whenever humans sever themselves from this worldly state of decision-making and institute an absolute other for sanctioning what they do, they have created a heteronomous structure. Under the self-alienated conditions of heteronomy, decisions take the form of a command/obedience structure, in which one listens rather than questions. Any transcendental is, intrinsically, something that commands, even though it is produced by the humans who obey it.

    Having countered Asad’s attempt to impose a heteronomous structure on critique, Gourgouris’s second chapter proceeds to ferret out the transcendental in secularism. By the latter, Gourgouris refers to an institutional term representing “a range of prospects in the exercise of power,” particularly as pertains to state mechanisms (28-29). A priori and dogmatic substantiations of secularism Gourgouris deems “metaphysical,” and this adjective functions similarly to “transcendental” in the book’s proliferative terminology. However, there is a subtle reason for the differentiation that proves important. The “metaphysical” ends up being the name for any non-theistic statement of transcendental first principles; it designates whatever is taken to be an incontestable foundation, without confounding the notional foundation with the sacred of theology or religion (29). A metaphysic and a divine law are each, in application, heteronomous, but the former is “a set of principles that posit themselves independently of historical reality” rather than something held sacred that eternal God has posited (30). It is crucial for Gourgouris to provide these dual definitions, for his opponents, Talal Asad and anthropologist Saba Mahmood, discern secularism’s metaphysical layers only to theologize them for the purpose of revealing modernity’s disavowed religious substrata: “It is one thing to speak of the metaphysics of secularism and another to equate secularism with religion” (34).

    An example of one of the “metaphysics” on which secularism rests would be the pre-social individual theorized by classical liberalism. It is the sanctity of this individual, god-like in his agency, his clarity, and his identity with himself, that secularism is often said to protect from religious intolerance. In contrast, Gourgouris sees this figure of bourgeois enlightenment caught up in the self-altering forces unleashed by a still ongoing process of secularization. The form of autonomy that secularization bares for view is thoroughly social in character (44). To be autonomous is not to give oneself the law, but, as citizens, to give the law to ourselves. It is as social members that we decide what the law is; to be autonomous is to not only to give ourselves the law, but also to recognize ourselves interrogating the law together. Catholic philosopher Charles Taylor, who emerges as another opponent in Gourgouris’ Chapter Two (titled “De-Transcendentalizing the Secular”), has also famously argued against the reified idea of the individual in classical liberalism. However, he believes that our modern social imaginaries have built such protective carapaces around the self that we have difficulty experiencing an outside to its liberal representation. In well-known formulations, he has described the modern self as too “buffered” against any motivations that can be confused with enchantment. As a result, modern man – for all his sense of self-mastery – is actually dispossessed, haunted by a God-reference that has been voided of transcendence, though modern man still needs the transformational openness that God once provided. In other words, Taylor does not theologize the secular, as do Asad and Mahmood, but he does see it as impoverished. 

    Gourgouris objects to “Taylor’s whole framework of valuation and determination,” and he pivots to Taylor’s A Secular Age (2007) for the purpose of redefining alterity without resort to a “heteronomous” position outside history (43). Taylor is wrong to say that moderns need transcendence in order to experience a liberating otherness. Recalling his theory of poiesis, Gourgouris argues that the otherness is something created by the self in its working upon the materials it finds within the world. This otherness is “immanent,” emerging from within autonomy, and involves no inrushing from a space beyond history: “The immanence of autonomy does not mean closure in a purely self-referential or self-sufficient signification . . . . Autonomy is nonsensical as a permanent state, as the property of a thing, which is why it has nothing to do with the imaginary of self-possession or the legacy of possessive individualism that is the crux of liberal law” (44). That Taylor cannot see the possibility of human satisfaction in autonomous self-alteration, whether achieved via politics, art, or eros, is a measure of his melancholic appraisal of the worldly: “Taylor cannot fathom that fullness, total plenitude and fulfillment, can be found in the finite and the fragile, in the ephemeral and the mortal, in the uncertain and the passing” (41). It is Gourgouris’ task, in his third chapter, “Why I Am Not a Post-Secularist,” to defend the sufficiency of the finite and the mortal to answer human striving and imagining.

    “I am not a post-secularist,” he states with bald conviction, “because I am an atheist.” This first line begins the most eloquent of all the book’s chapters. Within its concentrated length, Gourgouris not only provides a vigorous case for atheism against its cultured despisers, but also builds his case that only a secular space, oriented toward a future in which the distinction theism/atheism will no longer matter, can produce the conditions for radical democratic politics to thrive. Since the second point is one that Gourgouris will amplify in subsequent chapters, I will also defer addressing it here and focus for the time being on his case for atheism. In marked contrast to the New Atheists, Gourgouris does not bother with demolishing proofs of God or citing evidence pointing up the absurdity of biblical accounts of creation or belief in miracles. To quote Wallace Stevens, Gourgouris plainly looks out from a horizon in which the gods are “dispelled in mid-air and dissolve[d] like clouds,” and makes “no cry for their return.” God’s death is a Christian idea. Outside the Christian imaginary, where Gourgouris places himself, the de-sacralization of society inflicts no melancholia – no divine haunting, absence, or silence, none of the governing motifs of writings that have seen in modernity a state of ruination. At the same time, there is nothing heroic in Gourgouris’s atheism either, for the question of God’s existence is no great either-or in his thought. The question is “irrelevant” to the secular consciousness he wishes others to imagine with him: “It would mean to live not as if God does not exist but to live as if God does not matter” (69). 

    Rather than a ruined world doddering from shorn foundations, Gourgouris finds in a terrene of finite things, and ineluctable death, much cause for “wonder.” The word, connecting philosophy and myth in Greek, links aesthetic pleasure and speculation in Gourgouris’s usage; the experience of wonder felt in the human encounter with what is new and extraordinary discredits miracles, for it leads to questioning. Furthermore, it replaces the need for such beliefs with the pleasure taken in curiosity and in creative acts of pattern-making that give a feeling of intelligibility to reality. Reaching back to the Greeks as a touchstone, Gourgouris treats hubris as a passion imperceptibly sliding behind wonder that he condones in advance of its appearance as a specifiable motive. Hubris is conventionally the other to Truth, but Gourgouris prefers its risks to heteronomy (76). Still, there is a tragic element in Gourgouris’ account of a desacralized world. It stems not, as in pessimistic readings of Greek tragedy, from the defiance of a transcendental order. It is the “irredeemably sad” recognition that autonomy is possible only under conditions of impermanence. History is radically open-ended and shaped solely by human self-determination, and that very limitlessness is not circumscribed by death, but extended by it, for death denaturalizes all humanly constructed boundaries (106). The lucidity for which Gourgouris calls in these passages recalls Camus’s tragic humanism, except that Gourgouris never passes through despair.
    Atheism, then, is tragic autonomy, attuned to the wonder as well as the mutability of finite existence and undaunted by the Christian proposition of the death of God. While I agree with Gourgouris that Christianity makes God’s death central to salvation history, I do not believe that he accurately represents this event’s theological significance within orthodox belief. Moreover, I believe that he unnecessarily dualizes the Christian and Greek imaginaries.

    To take up the first objection, Gourgouris mistakenly summarizes dogma as such: “God dies so that he may be resurrected, simple as that. The instrumental outcome is all that matters (the abolition of sin happens with the Resurrection, not Crucifixion), and the reality of God’s death – God’s suicide, to be exact, vanishes behind the interminable ritual repetition of a mythical spectacle” (73). This misconstrues how atonement is supposed to be effectuated. Paul, Anselm, and Athanasius are touchstones here, but no systematic Christian theologian dissociates the Atonement from the Crucifixion or argues that redemption only becomes possible with the Resurrection. The Crucifixion always entails the Resurrection, and the Resurrection always implies the Crucifixion, and they always work together to accomplish salvation. Certainly in the doctrine of Atonement there are relative degrees of emphasis between the Western and Eastern Churches, and between Protestantism, Catholicism, and Eastern Orthodoxy. In Eastern Orthodoxy, there are many more icons of the Resurrection, as there is a greater stress on deification, or theosis, in the teachings of the Byzantine and Russian churches. It is interesting, further, to compare the iconographical emphasis of the Orthodox (focus on the risen and transfigured Christ, as in the Pantocrator icon) versus Catholics (focus on the suffering and broken Christ) versus Protestants (typically, an empty cross, which combines the meanings of both the former). Nonetheless, in each tradition, soteriology depends on the joint significance of the Crucifixion and the Resurrection: they work in tandem, never in isolation or separated by time. I have dwelt on this matter at some length not because it undermines Gourgouris’s case for atheism – it does not – but because he handles Christian thought somewhat ham-fistedly.
Occasionally, his animus is wittily abrasive, as in his hilariously irreverent description of Christ as a reanimated zombie; but he can ride roughshod over subtleties and sometimes make neglectful over-generalizations.

    This leads me to the second objection. Gourgouris opposes the Christian imaginary to the Greek in a manner that needlessly dualizes them and downplays the practice of religion among the ancient Greeks. Part of the problem here stems from Gourgouris’s tendency to celebrate what was thinkable in the Greek imaginary versus what is typical of the Christian imaginary. The “thinkable” here is an idea that I am interpolating from Castoriadis, whose own reflections on the ancient Greeks are clearly an influence on Gourgouris. Put baldly, the thinkable refers to what is possible to formulate and speak out of a social imaginary at a given point in its history. The thinkable need not be typical and, indeed, may be inassimilable to conventional, inherited thought. The Christian imaginary Gourgouris sees in broad strokes: the mystification of authority, the darkening of antiquity, the denial of death, heteronomous dogma. In the Greek imaginary, contrastingly, Gourgouris finds the capacity, not everywhere actualized but available, for wonder, lucidity, democracy, and autonomy. This sampling of the ancient Greeks accentuates their modernity, but it occludes quite a bit that would destabilize Gourgouris’s binary of enlightened Greek versus regressive Christian. As E. R. Dodds reminded us some time ago in his classic, The Greeks and the Irrational, religion was robust even in the age of democracy and the great tragedians. Beliefs persisted in daemons, magic, soothsaying, oracles, and mystery cults. Animals were still sacrificed to the gods regularly as part of the civic calendar in Athens, and citizens made use of sacred images in public places of worship. Festivals, prayers, and processions still took place. Despite secularization among the philosophers, new religions like Orphism and Pythagoreanism developed in the 4th century, and Socrates was executed, among other reasons, for impiety. 
Or does Gourgouris limit his version of the Greek imaginary to the elements of modernity in the Classical Age and the Ionian Enlightenment?

    The answer comes indirectly through Chapter Four, which connects poiesis and autonomy, themes of chapters 1-3, to ontology and politics, which will cascade into the book’s fifth and sixth chapters. The modernity of the Greek imaginary lies not in its rationalism, but in the polis and in the arts, where autonomy was a self-conscious project. The project did not require the disenchantment of myth, as superstition or error, so much as its appropriation for poetic self-creation, as Gourgouris makes clear in Does Literature Think? With threads to this earlier book, Chapters Four and Five of Lessons in Secular Criticism, “Confronting Heteronomy” and “The Void Occupied Unconcealed,” go to a fascinating place conceptually, a rethinking of idolatry that extends its domain to transcendence, even if Gourgouris gets the reader there by way of a disputable theory about the operation of myth on the Athenian stage. The claims that he makes for an expanded sense of idolatry, as distinguished from myth, prepare for the criticism that he mounts of socialist philosopher Claude Lefort’s famous essay on democracy, “The Persistence of the Theologico-Political?” (1980).

    In Gourgouris’s reading of classical Greek theater, myths were not only the narrative sources for tragedy, but also the stuff for mythographic reflection performed by the dramas. Myth, as he describes it in Does Literature Think? (2003), was a material means for Greek dramatists and their public audiences to reflect on the groundlessness of human creation (the making and unmaking of forms in history) where there is no divine anthropogony to teleologize nomos. In “Confronting Heteronomy,” he imports Castoriadis’s ontology to describe what both take to be the Greeks’ insight into the chaos of Being against which humans generate their societies and authorize them. Being was, is, and always will be disunited (105). Its differentiation “permeates all existence and thus precipitates the conditions for human beings to realize that (1) there is a necessity for nomos, for otherwise life is defeated by its own meaninglessness; and (2) this necessity does not confine humans to a de facto subjugation to nomos because it opens the way for them to create meaning and the frameworks of meaning” (106). Societies, however, always occlude the generative chaos against which humans give form to their lives. The sacred’s chief function, in fact, is to mask the chaos of Being. The sacred is fundamentally distinguished from mythic imagining as Gourgouris defines it in Does Literature Think? Whereas myth is metapoetic, the sacred is the ossification of myth and its fusion with religious authority. Whereas myth tarries fearlessly with non-being as it produces figures of self-othering, the sacred throws up idols. Gourgouris does not except iconoclastic monotheisms from the accusation of idolatry; the more transcendental the image of the divine, the more cunning an idol it is. A complete image ban still produces an idol because it transforms non-representability into a sign of a latent absolute. 
To conceptualize idolatry this way is to sap the power of both blasphemy and iconoclasm as these have been practiced in Islam, Judaism, and Christianity. Monotheistic religions authenticate themselves by producing counter-sacreds whose images they can then desacralize. Applying Gourgouris’s logic, they are deflecting from their own cores of idolatry: in the religion of the heretic, they show the chaos of Being in order to make necessary the transcendental structure that conceals it again.

    Nationalism and statism are also forms of idolatry that certify themselves with religious motifs and images. In turning to Lefort’s widely cited 1980 essay, Gourgouris intends to rescue its insights into the groundlessness of democracy while criticizing its pessimistic account of secularization. Gourgouris’s goal is to stave off post-secularist agendas that have seized on “The Persistence of the Theologico-Political?” – just as they have Nazi jurist Carl Schmitt’s Political Theology – to delineate a theological desire within democracy that yearns for the symbolic structure of Christianity. Lefort observes a rupture between democratic political imaginaries and those of pre-modern Europe. In the latter, the state was symbolized by the king, a God-man having two bodies, one earthly/mortal and the other supernatural/immortal. In this corporatist representation, the state was embodied as the sovereign One: an infallible, omnipotent unity transcending the political subjects who die for it. The theological analog of this symbolism, of course, was the Incarnation. Democracies cannot sustain the corporatist representation of the state since the dēmos – the multitude – is sovereign and the autonomous practice of democratic politics decenters power, institutionalizes conflict. In the revolutionary moment, the markers of unity and certainty in the old imaginary dissolve, leaving democracy poised generatively upon the void between the real and the timeless One, which is now seen for the phantasm that it always was. Gourgouris affirms Lefort’s central insight that democracy “is the historical regime whose radical characteristic is to stage its internal conflicts openly for itself” in a space of power that is denuded of “the symbolic constitution of authority because, quite literally, there is no body in power” (Lessons 132). 
However, he objects when Lefort tries to explain why post-revolutionary societies revert to some form of the pre-modern political imaginary, in which power is once again authenticated by its mediating relation, in the body of the One, to a ground externalized as something sacred or metaphysical. According to Lefort, the tendency within democracies to become fissiparous and the horror of the void itself bring about a crisis that partially re-sacralizes politics: “Lefort seems to entertain the idea of a sort of recurrent desecularization, a sort of reincarnation of the religious in the midst of the void” (137, 138). In the West, the form these representational metempsychoses take is derivative from the Christian Incarnation, since this is the exemplary model from the past. Gourgouris intervenes here to say that what Lefort describes is not the recovery of any specifically Christian content. It is simply a reversion to idolatry, the old desire to conceal the “condition of radical uncertainty” that is our human lot (140). In place of the idol of the One, he proposes a continual disruption of symbolic representation in favor of “the uninterrupted visibility of the dēmos,” revealed again and again in all of its “multiplicity” and “internal antagonism” (143).

    Refugees in front of the ruins of the temple of Theseus (1922)

    Gourgouris thus calls for a poetic intervention in the symbolic field that will alter inherited political imaginaries so that the dēmos can see and reflect on its self-constitutive role, its struggle internally to find a political ground for renewed consent to the law that it gives itself in an undetermined historical process. To construct and sustain the form of “governmentality” that Gourgouris here imagines would require not only novel institutions but also the reconfiguration of mass media technologies and an end to entrenched patterns of consumer addiction. He follows the articulation of this mammoth task with a sixth chapter, “Responding to the Deregulation of the Political,” that moves from the analysis of post-secularism to a meditation on the promise of the recent global assembly movements, such as Occupy, the Arab Spring in the Middle East, and the Indignant Citizens Movement in Spain and Greece. These groups, we are to understand, enact the politics of secular criticism through their withdrawal of consent to neo-liberal capital and their demand instead for direct democracy.
    Gourgouris’s hopeful speculations on the world movement for democracy return the text to his advocacy for “a politics of wonder,” a new politics combining skepticism and utopia for which atheism (as he defines it in Chapter 3) is best-fitted (Lessons 83). Crucially, Gourgouris’s atheism imagines its own obsolescence at a point beyond which the question of belief and quarrels over the secular versus the religious will have become irrelevant to the ways that people live with each other. In the meantime, however, it aims, in the mode of secular critique, to overthrow both the sacred of religion and dogmatic appeals to Reason in order to attack heteronomy in every guise. Only autonomy (as critique, poiesis, law-making, and self-instituting imaginary) can produce democracy as yet untried. Though Gourgouris, to his great credit, takes blinkered secularism as well as religion as threats to autonomy, I would like to turn, before closing, to his case that religion’s deference to divine power withers emancipatory politics.

    To review, Gourgouris argues that religion restricts decision-making to a command-obedience structure in which the believer defers to a heteronomous authority. This power might be embodied in a hieratic office or be disembodied, transcendent, and unrepresentable. Although Gourgouris tends to speak of religion categorically, he seems to object particularly to Abrahamic monotheisms, in which the language of sovereign God and redeemed subject, whether taken metaphorically or literally, implies a horizon of non-questioning and fealty to belief. (One wonders how successfully Gourgouris could apply the command-obedience model to polytheistic religions, like Shinto or Hinduism, non-theistic religions like Buddhism, or pantheistic ones like Taoism.) Gourgouris does not exempt liberation theologies from his criticism of the command-obedience structure even though they may be aligned with populist or anti-imperialist movements. In a tributary of his quarrel with Saba Mahmood in Chapter Two, for example, he states: “I would never doubt, for instance, the revolutionary inspiration that liberation theology once gave to certain oppressed societies . . . . But as I have said several times, this does not mean that, come postinsurgency time, the time of self-determination, a politics based on religious command can institute modes of social autonomy – at least in known history this has never happened” (49-50). In the last instance, the religious “command” prevents people from seeing that they alone give authorization to their self-determination. Gourgouris follows this characterization with an arresting statement: “This is not to say, I repeat, that emancipatory politics cannot emerge from within a religious language. But it is to say that if it does, it must place this very language in question; it must deauthorize this language as command” (50). 
This remark, suggesting how religious language might revise itself to become viable for Gourgouris’s politics, comes as a surprise given the force of his secular convictions, but it is worth following up.

    Let’s take for example James Cone’s God of the Oppressed, a classic of liberation theology. I do not intend it to be representative of its tradition, but illustrative of the incoherence that emerges when old language is unimaginatively combined with a revolutionary-reform message. Jostling with each other, we see the following formulations: “Divine freedom . . . . expresses God’s will to be in relation to creatures in the social context of their striving for the fulfillment of humanity” (175); “[H]uman beings are free only when that freedom is grounded in divine revelation” (182); “God is the sovereign ruler and nothing can thwart God’s will to liberate the oppressed” (196). On the one hand, Cone describes God entering history to strive alongside the poor and the disenfranchised in their struggle with entrenched, monopolized power and its ideology; God joins in all aspects of this conflict, which entails a prophetic critique of Christendom’s complicity in racism and social inequality. On the other hand, God is pictured as an omnipotent sovereign who controls providential history and on whom human freedom depends for its realization. Gourgouris might quarrel with both sides of Cone’s formulation, but he would most certainly object to the second, and rightly so given his premises. The self-interrogative act of self-determination is seemingly annulled by language that places sovereignty with God, here an absolute power that transcends the merely earthly powers of the oppressor. One could say apologetically that Cone is simply using inherited biblical language as inspired rhetoric to buttress an unswerving ethical commitment, but this rhetorical reading not only naturalizes what is supernatural in Cone’s text, it also preserves the objectionable notion that commitment (in this case, to justice) requires certainty of such sustained subjective intensity that, if necessary, it should be produced by belief in an unassailable authority. 
It is precisely the power to generate “subjective normative intensities,” or the Jamesian “will-to-believe,” that fashionably anti-liberal critics like Stanley Fish prize in religions and find lacking in “weak” or “indifferent” secularism. However, the religious command, in producing the strong, insistent form of belief that seems so attractive to those who see uncertainty as an impediment to commitment, can also become a mechanism for silencing internal dissent and steeling conviction in the urgency of the belief. Such a mindset one can hardly imagine coping with the social heterogeneity that any democratic politics worthy of the name must include in its reflection.

    I am not convinced, as Gourgouris seems to be, that monotheisms always produce the heteronomous subject that I have just described, but history indicates that such subjectivity is highly correlated with monotheism, especially when the religion – be it Christian, Jewish, or Muslim in identity – draws its impetus from the refusal of modernization. Taking seriously the impediments to autonomy that Gourgouris finds in the mindset fostered by (monotheistic) religious language, it is worth, for the sake of secular criticism, opening a conversation with theologies that have intentionally weakened the modeling of the divine and human relationship on sovereign-to-subject. There is the rich yet unfairly maligned tradition of theological modernism, which augmented certain trends in religious liberalism toward immanence. Contemporary with the end of the modernist movement, Dietrich Bonhoeffer, a pastor of the anti-Nazi Confessing Church in Germany, spent the months in prison prior to his execution writing about “a world come of age,” a world in which man had won “autonomy,” a world that did not need false religious obligations or inhibitions, that did not need a God conceived as the beyond of our cognitive faculties. The kind of command-obedience structure that Gourgouris calls heteronomous Bonhoeffer, in his prison letters, denounces as “phariseeism” and “religious methodism” (Letters 362). Cognizant also of the authoritarian impulses in his own religious tradition, Bonhoeffer feared the cultural temptation in the West to make a leap back toward “the heteronomy” of the Middle Ages (Letters 360). Rather than submit to this temptation himself, Bonhoeffer stresses in the letters not God as “sovereign” but God as “sufferer,” for only this God could enter a world that no longer had need of an omnipotent being that explains everything and wills everything (361). 
More recently, the varieties of “process theology,” “weak theology,” “secular theology,” and “a/theology” represented in the figures of David Ray Griffin, John B. Cobb, John D. Caputo, and Mark C. Taylor have worked in distinct ways to enlarge space for human agency and response while smashing as idols religious and metaphysical certitudes. Influenced by the ontology of Alfred North Whitehead, Griffin and Cobb deny divine perfection and truth, and emphasize God’s temporality as well as man’s. Bridging the post-liberal theologies of Bonhoeffer and Paul Tillich (who famously urged his contemporaries to be unafraid to let “the God of theism” disappear “into the anxiety of doubt”) and Derridean post-structuralism’s sensitivity to contingency and context, Caputo defines God not as a person but as an ever breaking “event” that awakens human desire for something namelessly undeconstructible and always yet to come; this event relativizes all the logics and structures of the world, including those of religion. Taylor describes a nearly featureless God that animates networks of creative processes in nature and culture, structuring and de-structuring them according to “no mind or Logos,” but coming restlessly to consciousness in humans; this idea of the divine cannot be the object of faith, the metaphysical foundation of decision, or the limit to human interpretation (After God 346). Like Caputo, Taylor wants to transform the language of religion and not only attach old language to democratic causes. I should mention that some of these thinkers begin from premises (man as homo religiosus, the death of god as ongoing event, the spiritual underpinnings of secularism) against which Gourgouris has compellingly raised his voice, but they have shown greater capacity for dialogue, self-criticism, and nimbleness of thought than culturalist proponents of the post-secular.

    Modish attention to demographic trends pointing up the statistical vitality of religion should not guarantee respect for belief or earn providential auguries of religion’s imperishability. One hundred years from now our world may be substantially more secular than it is now and atheism a preferential option for most of the population. Yet, in our contemporary conjuncture, it would be unnecessary and perhaps even detrimental to exclude from one’s theorization of a new democratic politics the religious liberals, humanists, progressives, and liberationists who could be its allies in the struggle against “the scorched earth policies of global financial capitalism” (xviii). Though with deep reservations, Gourgouris hints that it might be possible for people with a variety of religious as well as secular philosophical views to work toward common political goals and values so long as they avoid heteronomous formulations of belief. His book would have benefitted from taking into account already existing resources in theology for weakening the sovereign-to-subject language of traditional god talk. Notwithstanding this omission and some distortions in his dualizing of Greeks and Christians, he makes an essential intervention in the post-secularism debates by pointing out, through a range of deft responses to key texts, the laziness of intellectuals’ defenses of religious self-righteousness and declarations of secularization’s failure. More incisively still, he exposes the fallacy of conflating secular criticism with institutionalized secularism, and of tethering the latter to theology. Anyone seeking to comprehend the high stakes in the so-called “turn to religion” will find Lessons in Secular Criticism a most bracing read.

    ________

    Jason Stevens has taught at Harvard University and the University of Maryland, Baltimore County, and he has been a fellow of the National Humanities Center (Durham, NC). His work focuses on mid-late 20th century American literature and U. S. cultural and intellectual history, with emphases on the intersections of fiction, popular culture, religion, and ethnicity. His first book was God-Fearing and Free: A Spiritual History of America’s Cold War (Harvard University Press 2010). His writings have also appeared in boundary 2, American Literature, Literature/Film Quarterly, and The Immanent Frame. In 2014-2015, he is a fellow at the Center for the Humanities, University of Pittsburgh, where he has been completing a book project on American film noir and making preparations for the international conference, “Protestantism on Screen” (Wittenberg, June 2015), of which he is co-sponsor.

    _______


  • Dissecting the “Internet Freedom” Agenda

    Dissecting the “Internet Freedom” Agenda

    a review of Shawn M. Powers and Michael Jablonski, The Real Cyber War: The Political Economy of Internet Freedom (University of Illinois Press, 2015)
    by Richard Hill
    ~
    Disclosure: the author of this review is thanked in the Preface of the book under review.

    Both radical civil society organizations and mainstream defenders of the status quo agree that the free and open Internet is threatened: see for example the Delhi Declaration, Bob Hinden’s 2014 Year End Thoughts, and Kathy Brown’s March 2015 statement at a UNESCO conference. The threats include government censorship and mass surveillance, but also the failure of governments to control rampant industry concentration and commercial exploitation of personal data, which increasingly takes the form of providing “free” services in exchange for personal information that is resold at a profit, or used to provide targeted advertising, also at a profit.

    In Digital Disconnect, Robert McChesney has explained how the Internet, which was supposed to be a force for the improvement of human rights and living conditions, has been used to erode privacy and to increase the concentration of economic power, to the point where it is becoming a threat to democracy. In Digital Depression, Dan Schiller has documented how US policies regarding the Internet have favored its geo-economic and geo-political goals, in particular the interests of its large private companies that dominate the information and communications technology (ICT) sector worldwide.

    Shawn M. Powers and Michael Jablonski’s seminal new book The Real Cyber War takes us further down the road of understanding what went wrong, and what might be done to correct the situation. Powers, an assistant professor at Georgia State University, specializes in international political communication, with particular attention to the geopolitics of information and information technologies. Jablonski is an attorney and presidential fellow, also at Georgia State.

    There is a vast literature on internet governance (see for example the bibliography in Radu, Chenou, and Weber, eds., The Evolution of Global Internet Governance), but much of it is ideological and normative: the author espouses a certain point of view, explains why that point of view is good, and proposes actions that would lead to the author’s desired outcome (a good example is Milton Mueller’s well researched but utopian Networks and States). There is nothing wrong with that approach: on the contrary, such advocacy is necessary and welcome.

    But a more detached analytical approach is also needed, and Powers and Jablonski provide exactly that. Their objective is to help us understand (citing from p. 19 of the paperback edition) “why states pursue the policies they do”. The book “focuses centrally on understanding the numerous ways in which power and control are exerted in cyberspace” (p. 19).

    Starting from the rather obvious premise that states compete to shape international policies that favor their interests, and using the framework of political economy, the authors outline the geopolitical stakes and show how questions of power, and not human rights, are the real drivers of much of the debate about Internet governance. They show how the United States has deliberately used a human rights discourse to promote policies that further its geo-economic and geo-political interests. And how it has used subsidies and government contracts to help its private companies to acquire or maintain dominant positions in much of the ICT sector.

    Jacob Silverman has decried “the misguided belief that once power is arrogated away from doddering governmental institutions, it will somehow find itself in the hands of ordinary people”. Powers and Jablonski dissect the mechanisms by which vibrant government institutions deliberately transferred power to US corporations in order to further US geo-economic and geo-political goals.

    In particular, they show how a “freedom to connect” narrative is used by the USA to attempt to transform information and personal data into commercial commodities that should be subject to free trade. Yet all states (including the US) regulate, at least to some extent, the flow of information within and across their borders. If information is the “new oil” of our times, then it is not surprising that states wish to shape the production and flow of information in ways that favor their interests. Thus it is not surprising that states such as China, India, and Russia have started to assert sovereign rights to control some aspect of the production and flow of information within their borders, and that European Union courts have made decisions on the basis of European law that affect global information flows and access.

    As the authors put the matter (p. 6): “the [US] doctrine of internet freedom … is the realization of a broader [US] strategy promoting a particular conception of networked communication that depends on American companies …, supports Western norms …, and promotes Western products.” (I would personally say that it actually supports US norms and US products and services.) As the authors point out, one can ask (p. 11): “If states have a right to control the types of people allowed into their territory (immigration), and how its money is exchanged with foreign banks, then why don’t they have a right to control information flows from foreign actors?”

    To be sure, any such controls would have to comply with international human rights law. But the current US policies go much further, implying that those human rights laws must be implemented in accordance with the US interpretation, meaning few restrictions on freedom of speech, weak protection of privacy, and ever stricter protection for intellectual property. As Powers and Jablonski point out (p. 31), the US does not hesitate to promote restrictions on information flows when that promotes its goals.

    Again, the authors do not make value judgments: they explain in Chapter 1 how the US deliberately attempts to shape (to a large extent successfully) international policies, so that both actions and inactions serve its interests and those of the large corporations that increasingly influence US policies.

    The authors then explain how the US military-industrial complex has morphed into an information-industrial complex, with deleterious consequences for both industry and government, consequences such as “weakened oversight, accountability, and industry vitality and competitiveness” (p. 23) that create risks for society and democracy. As the authors say, the shift “from adversarial to cooperative and laissez-faire rule making is a keystone moment in the rise of the information-industrial complex” (p. 61).

    As a specific example, they focus on Google, showing how it (largely successfully) aims to control and dominate all aspects of the data market, from production, through extraction, refinement, infrastructure and demand. A chapter is devoted to the economics of internet connectivity, showing how US internet policy is basically about getting the largest number of people online, so that US companies can extract ever greater profits from the resulting data flows. They show how the network effects, economies of scale, and externalities that are fundamental features of the internet favor first-movers, which are mostly US companies.

    The remedy to such situations is well known: government intervention. Such intervention is widely accepted regarding air transport, road transport, pharmaceuticals, etc., and yet unthinkable for many regarding the internet. But why? As the authors put the matter (p. 24): “While heavy-handed government controls over the internet should be resisted, so should a system whereby internet connectivity requires the systematic transfer of wealth from the developing world to the developed.” Yet freedom of information is put forward to justify specific economic practices that would not be easy to justify otherwise; for example, “no government taxes companies for data extraction or for data imports/exports, both of which are heavily regulated aspects of markets exchanging other valuable commodities” (p. 97).

    The authors show in detail how the so-called internet multi-stakeholder model of governance is dominated by insiders and used “under the veil of consensus” (p. 136) to further US policies and corporations. A chapter is devoted to explaining how all states control, at least to some extent, information flows within their territories, and presents detailed studies of how four states (China, Egypt, Iran, and the USA) have addressed the challenges of maintaining political control while respecting (or not) freedom of speech. The authors then turn to the very current topic of mass surveillance, and its relation to anonymity, showing how, when the US presents the internet and “freedom to connect” as analogous to public speech and town halls, it is deliberately arguing against anonymity and against privacy – and this of course in order to avoid restrictions on its mass surveillance activities.

    Thus the authors posit that there are tensions between the US call for “internet freedom” and other states’ calls for “information sovereignty”, and analyze the 2012 World Conference on International Telecommunications from that point of view.

    Not surprisingly, the authors conclude that international cooperation, recognizing the legitimate aspirations of all the world’s peoples, is the only proper way forward. As the authors put the matter (p. 206): “Activists and defenders of the original vision of the Web as a ‘fair and humane’ cyber-civilization need to avoid lofty ‘internet freedom’ declarations and instead champion specific reforms required to protect the values and practices they hold dear.” And it is with that in mind, as a counterweight to US and US-based corporate power, that a group of civil society organizations have launched the Internet Social Forum.

    Anybody who is seriously interested in the evolution of internet governance and its impact on society and democracy will enjoy reading this well researched book and its clear exposition of key facts. One can only hope that the Council of Europe will heed Powers and Jablonski’s advice and avoid adopting more resolutions such as the recent recommendation to member states by the EU Committee of Ministers, which merely pander to the US discourse and US power that Powers and Jablonski describe so aptly. And one can fondly hope that this book will help to inspire a change in course that will restore the internet to what it might become (and what many thought it was supposed to be): an engine for democracy and social and economic progress, justice, and equity.
    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2 Review Digital Studies magazine.

    Back to the essay

  • Men (Still) Explain Technology to Me: Gender and Education Technology

    Men (Still) Explain Technology to Me: Gender and Education Technology

    By Audrey Watters
    ~

    Late last year, I gave a similarly titled talk—“Men Explain Technology to Me”—at the University of Mary Washington. (I should note here that the slides for that talk were based on a couple of blog posts by Mallory Ortberg that I found particularly funny, “Women Listening to Men in Art History” and “Western Art History: 500 Years of Women Ignoring Men.” I wanted to do something similar with my slides today: find historical photos of men explaining computers to women. Mostly I found pictures of men or women working separately, working in isolation. Mostly pictures of men and computers.)

    Men Explain Technology

    So that University of Mary Washington talk: It was the last talk I delivered in 2014, and I did so with a sigh of relief, but also more than a twinge of frightened nausea—nausea that wasn’t nerves from speaking in public. I’d had more than a year full of public speaking under my belt—exhausting enough as I always try to write new talks for each event, but a year that had been complicated, quite frighteningly, in part by an ongoing campaign of harassment against women on the Internet, particularly those who worked in video game development.

    Known as “GamerGate,” this campaign had reached a crescendo of sorts in the lead-up to my talk at UMW, some of its hate aimed at me because I’d written about the subject, demanding that those in ed-tech pay attention and speak out. So no surprise, all this colored how I shaped that talk about gender and education technology, because, of course, my gender shapes how I experience working in and working with education technology. As I discussed then at the University of Mary Washington, I have been on the receiving end of threats and harassment for stories I’ve written about ed-tech—almost all the women I know who have a significant online profile have in some form or another experienced something similar. According to a Pew Research survey last year, one in five Internet users reports being harassed online. But GamerGate felt—feels—particularly unhinged. The death threats to Anita Sarkeesian, Zoe Quinn, Brianna Wu, and others were—are—particularly real.

    I don’t really want to rehash all of that here today, particularly my experiences being on the receiving end of the harassment; I really don’t. You can read a copy of that talk from last November on my website. I will say this: GamerGate supporters continue to argue that their efforts are really about “ethics in journalism” not about misogyny, but it’s quite apparent that they have sought to terrorize feminists and chase women game developers out of the industry. Insisting that video games and video game culture retain a certain puerile machismo, GamerGate supporters often chastise those who seek to change the content of video games, change the culture to reflect the actual demographics of video game players. After all, a recent industry survey found women 18 and older represent a significantly greater portion of the game-playing population (36%) than boys age 18 or younger (17%). Just over half of all gamers are men (52%); that means just under half are women. Yet those who want video games to reflect these demographics are dismissed by GamerGate as “social justice warriors.” Dismissed. Harassed. Shouted down. Chased out.

    And yes, more mildly perhaps, there’s the verb that grew out of Rebecca Solnit’s wonderful essay “Men Explain Things to Me” and that inspired the title of this talk: mansplained.

    Solnit first wrote that essay back in 2008 to describe her experiences as an author—and as such, an expert on certain subjects—whereby men would presume she was in need of their enlightenment and information—in her words “in some sort of obscene impregnation metaphor, an empty vessel to be filled with their wisdom and knowledge.” She related several incidents in which men explained to her topics on which she’d published books. She knew things, but the presumption was that she was uninformed. Since her essay was first published the term “mansplaining” has become quite ubiquitous, used to describe the particular online version of this—of men explaining things to women.

    I experience this a lot. And while the threats and harassment in my case are rare but debilitating, the mansplaining is more insidious. It is overpowering in a different way. “Mansplaining” is a micro-aggression, a practice of undermining women’s intelligence, their contributions, their voice, their experiences, their knowledge, their expertise; and frankly once these pile up, these mansplaining micro-aggressions, they undermine women’s feelings of self-worth. Women begin to doubt what they know, doubt what they’ve experienced. And then, in turn, women decide not to say anything, not to speak.

    I speak from experience. On Twitter, I have almost 28,000 followers, most of whom follow me, I’d wager, because from time to time I say smart things about education technology. Yet regularly, men—strangers, typically, but not always—jump into my “@-mentions” to explain education technology to me. To explain open source licenses or open data or open education or MOOCs to me. Men explain learning management systems to me. Men explain the history of education technology to me. Men explain privacy and education data to me. Men explain venture capital funding of education startups to me. Men explain the business of education technology to me. Men explain blogging and journalism and writing to me. Men explain online harassment to me.

    The problem isn’t just that men explain technology to me. It isn’t just that a handful of men explain technology to the rest of us. It’s that this explanation tends to foreclose questions we might have about the shape of things. We can’t ask because if we show the slightest intellectual vulnerability, our questions—we ourselves—lose a sort of validity.

    Yet we are living in a moment, I would contend, when we must ask better questions of technology. We neglect to do so at our own peril.

    Last year when I gave my talk on gender and education technology, I was particularly frustrated by the mansplaining to be sure, but I was also frustrated that those of us who work in the field had remained silent about GamerGate, and more broadly about all sorts of issues relating to equity and social justice. Of course, I do know firsthand that it can be difficult if not dangerous to speak out, to talk critically and write critically about GamerGate, for example. But refusing to look at some of the most egregious acts often means ignoring some of the more subtle ways in which marginalized voices are made to feel uncomfortable, unwelcome online. Because GamerGate is really just one manifestation of deeper issues—structural issues—with society, culture, technology. It’s wrong to focus on just a few individual bad actors or on a terrible Twitter hashtag and ignore the systemic problems. We must consider who else is being chased out and silenced, not simply from the video game industry but from the technology industry and a technological world writ large.

    I know I have to come right out and say it, because very few people in education technology will: there is a problem with computers. Culturally. Ideologically. There’s a problem with the internet. Largely designed by men from the developed world, it is built for men of the developed world. Men of science. Men of industry. Military men. Venture capitalists. Despite all the hype and hope about revolution and access and opportunity that these new technologies will provide us, they do not negate hierarchy, history, privilege, power. They reflect them. They channel them. They concentrate them, in new ways and in old.

    I want us to consider these bodies, their ideologies and how all of this shapes not only how we experience technology but how it gets designed and developed as well.

    There’s that very famous New Yorker cartoon: “On the internet, nobody knows you’re a dog.” The cartoon was first published in 1993, and it demonstrates this sense that we have long had that the Internet offers privacy and anonymity, that we can experiment with identities online in ways that are severed from our bodies, from our material selves and that, potentially at least, the internet can allow online participation for those denied it offline.

    Perhaps, yes.

    But sometimes when folks on the internet discover “you’re a dog,” they do everything in their power to put you back in your place, to remind you of your body. To punish you for being there. To hurt you. To threaten you. To destroy you. Online and offline.

    Neither the internet nor computer technology writ large are places where we can escape the materiality of our physical worlds—bodies, institutions, systems—as much as that New Yorker cartoon joked that we might. In fact, I want to argue quite the opposite: that computer and Internet technologies actually re-inscribe our material bodies, the power and the ideology of gender and race and sexual identity and national identity. They purport to be ideology-free and identity-less, but they are not. If identity is unmarked it’s because there’s a presumption of maleness, whiteness, and perhaps even a certain California-ness. As my friend Tressie McMillan Cottom writes, in ed-tech we’re all supposed to be “roaming autodidacts”: happy with school, happy with learning, happy and capable and motivated and well-networked, with functioning computers and WiFi that works.

    By and large, all of this reflects who is driving the conversation about, if not the development of, these technologies. Who is seen as building technologies. Who some think should build them; who some think have always built them.

    And that right there is already a process of erasure, a different sort of mansplaining one might say.

    Last year, when Walter Isaacson was doing the publicity circuit for his latest book, The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution (2014), he’d often relate how his teenage daughter had written an essay about Ada Lovelace, a figure whom Isaacson admitted that he’d never heard of before. Sure, he’d written biographies of Steve Jobs and Albert Einstein and Benjamin Franklin and other important male figures in science and technology, but the name and the contributions of this woman were entirely unknown to him. Ada Lovelace, daughter of Lord Byron and the woman whose notes on Charles Babbage’s proto-computer the Analytical Engine are now recognized as making her the world’s first computer programmer. Ada Lovelace, the author of the world’s first computer algorithm. Ada Lovelace, the person at the very beginning of the field of computer science.

    Ada Lovelace
    Augusta Ada King, Countess of Lovelace, now popularly known as Ada Lovelace, in a painting by Alfred Edward Chalon (image source: Wikipedia)

    “Ada Lovelace defined the digital age,” Isaacson said in an interview with The New York Times. “Yet she, along with all these other women, was ignored or forgotten.” (Actually, the world has been celebrating Ada Lovelace Day since 2009.)

    Isaacson’s book describes Lovelace like this: “Ada was never the great mathematician that her canonizers claim…” and “Ada believed she possessed special, even supernatural abilities, what she called ‘an intuitive perception of hidden things.’ Her exalted view of her talents led her to pursue aspirations that were unusual for an aristocratic woman and mother in the early Victorian age.” The implication: she was a bit of an interloper.

    A few other women populate Isaacson’s The Innovators: Grace Hopper, who invented the first computer compiler and who developed the programming language COBOL. Isaacson describes her as “spunky,” not an adjective that I imagine would be applied to a male engineer. He also talks about the six women who helped program the ENIAC computer, the first electronic general-purpose computer. Their names, because we need to say these things out loud more often: Jean Jennings, Marilyn Wescoff, Ruth Lichterman, Betty Snyder, Frances Bilas, Kay McNulty. (I say that having visited Bletchley Park where civilian women’s involvement has been erased, as they were forbidden, thanks to classified government secrets, from talking about their involvement in the cryptography and computing efforts there).

    In the end, it’s hard to read Isaacson’s book without coming away thinking that, other than a few notable exceptions, the history of computing is the history of men, white men. The book mentions educator Seymour Papert in passing, for example, but assigns the development of Logo, a programming language for children, to him alone. No mention of the others involved: Daniel Bobrow, Wally Feurzeig, and Cynthia Solomon.

    Even a book that purports to reintroduce the contributions of those forgotten “innovators,” that says it wants to complicate the story of a few male inventors of technology by looking at collaborators and groups, still in the end tells a story that ignores if not undermines women. Men explain the history of computing, if you will. As such it tells a story too that depicts and reflects a culture that doesn’t simply forget but systematically alienates women. Women are a rediscovery project, always having to be reintroduced, found, rescued. There’s been very little reflection upon that fact—in Isaacson’s book or in the tech industry writ large.

    This matters not just for the history of technology but for technology today. And it matters for ed-tech as well. (Unless otherwise noted, the following data comes from diversity self-reports issued by the companies in 2014.)

    • Currently, fewer than 20% of computer science degrees in the US are awarded to women. (I don’t know if it’s different in the UK.) It’s a number that’s actually fallen over the past few decades from a high in 1983 of 37%. Computer science is the only field in science, engineering, and mathematics in which the number of women receiving bachelor’s degrees has fallen in recent years. And when it comes to the employment not just the education of women in the tech sector, the statistics are not much better. (source: NPR)
    • 70% of Google employees are male. 61% are white and 30% Asian. Of Google’s “technical” employees, 83% are male. 60% of those are white and 34% are Asian.
    • 70% of Apple employees are male. 55% are white and 15% are Asian. 80% of Apple’s “technical” employees are male.
    • 69% of Facebook employees are male. 57% are white and 34% are Asian. 85% of Facebook’s “technical” employees are male.
    • 70% of Twitter employees are male. 59% are white and 29% are Asian. 90% of Twitter’s “technical” employees are male.
    • Only 2.7% of startups that received venture capital funding between 2011 and 2013 had women CEOs, according to one survey.
    • And of course, Silicon Valley was recently embroiled in the middle of a sexual discrimination trial involving the storied VC firm Kleiner Perkins Caufield & Byers, filed by former executive Ellen Pao, who claimed that men at the firm were paid more and promoted more easily than women. Welcome neither as investors nor entrepreneurs nor engineers, it’s hardly a surprise that, as The Los Angeles Times recently reported, women are leaving the tech industry “in droves.”

    This doesn’t just matter because computer science leads to “good jobs” or that tech startups lead to “good money.” It matters because the tech sector has an increasingly powerful reach in how we live and work and communicate and learn. It matters ideologically. If the tech sector drives out women, if it excludes people of color, that matters for jobs, sure. But it matters in terms of the projects undertaken, the problems tackled, the “solutions” designed and developed.

    So it’s probably worth asking what the demographics look like for education technology companies. What percentage of those building ed-tech software are men, for example? What percentage are white? What percentage of ed-tech startup engineers are men? Across the field, what percentage of education technologists—instructional designers, campus IT, sysadmins, CTOs, CIOs—are men? What percentage of “education technology leaders” are men? What percentage of education technology consultants? What percentage of those on the education technology speaking circuit? What percentage of those developing not just implementing these tools?

    And how do these bodies shape what gets built? How do they shape how the “problem” of education gets “fixed”? How do privileges, ideologies, expectations, values get hard-coded into ed-tech? I’d argue that they do in ways that are both subtle and overt.

    That word “privilege,” for example, has an interesting dual meaning. We use it to refer to the advantages that are afforded to some people and not to others: male privilege, white privilege. But when it comes to tech, we make that advantage explicit. We actually embed that status into the software’s processes. “Privileges” in tech refer to whoever has the ability to use or control certain features of a piece of software. Administrator privileges. Teacher privileges. (Students rarely have privileges in ed-tech. Food for thought.)
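    The point about privileges being embedded in software can be made concrete with a minimal sketch. The roles and actions below are hypothetical, not drawn from any real LMS, but the pattern of explicit role checks is how such privileges are typically hard-coded:

    ```python
    # A minimal sketch (roles and actions hypothetical) of how "privileges"
    # get hard-coded into software as explicit role checks.
    ROLE_PRIVILEGES = {
        "administrator": {"create_course", "grade", "post", "delete_post"},
        "teacher": {"grade", "post", "delete_post"},
        "student": {"post"},  # students rarely get more than this
    }

    def has_privilege(role: str, action: str) -> bool:
        """Return True if the given role is allowed to perform the action."""
        return action in ROLE_PRIVILEGES.get(role, set())

    print(has_privilege("teacher", "grade"))   # True
    print(has_privilege("student", "grade"))   # False
    ```

    Notice how the hierarchy is not an emergent social fact here but a literal data structure: whatever the designers put in the table is what users can do.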

    Or take how discussion forums operate. Discussion forums, now quite common in ed-tech tools—in learning management systems (VLEs as you call them), in MOOCs, for example—often trace their history back to the earliest Internet bulletin boards. But even before then, education technologies like PLATO, a programmed instruction system built by the University of Illinois in the 1970s, offered chat and messaging functionality. (How education technology’s contributions to tech are erased from tech history is, alas, a different talk.)

    One of the new features that many discussion forums boast: the ability to vote up or vote down certain topics. Ostensibly this means that “the best” ideas surface to the top—the best ideas, the best questions, the best answers. What it means in practice often is something else entirely. In part this is because the voting power on these sites is concentrated in the hands of the few, the most active, the most engaged. And no surprise, “the few” here is overwhelmingly male. Reddit, which calls itself “the front page of the Internet” and is the model for this sort of voting process, is roughly 84% male. I’m not sure that MOOCs, which have adopted Reddit’s model of voting on comments, can boast a much better ratio of male to female participation.

    What happens when the most important topics—based on up-voting—are decided by a small group? As D. A. Banks has written about this issue,

    Sites like Reddit will remain structurally incapable of producing non-hegemonic content because the “crowd” is still subject to structural oppression. You might choose to stay within the safe confines of your familiar subreddit, but the site as a whole will never feel like yours. The site promotes mundanity and repetition over experimentation and diversity by presenting the user with a too-accurate picture of what appeals to the entrenched user base. As long as the “wisdom of the crowds” is treated as colorblind and gender neutral, the white guy is always going to be the loudest.

    How much does education technology treat its users similarly? Whose questions surface to the top of discussion forums in the LMS (the VLE), in the MOOC? Who is the loudest? Who is explaining things in MOOC forums?

    Ironically—bitterly ironically, I’d say—many pieces of software today increasingly promise “personalization,” but in reality, they present us with a very restricted, restrictive set of choices of who we “can be” and how we can interact, both with our own data and content and with other people. Gender, for example, is often a drop down menu where one can choose either “male” or “female.” Software might ask for a first and last name, something that is complicated if you have multiple family names (as some Spanish-speaking people do) or your family name is your first name (as names in China are ordered). Your name is presented how the software engineers and designers deemed fit: sometimes first name, sometimes title and last name, typically with a profile picture. Changing your username—after marriage or divorce, for example—is often incredibly challenging, if not impossible.
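    A minimal sketch can show how such a restrictive template looks from the inside. The field names and choices below are hypothetical, but the constraints (a binary gender drop-down, a mandatory first/last name split in one fixed ordering) mirror the kind of sign-up forms described above:

    ```python
    # A minimal sketch (schema hypothetical) of a restrictive sign-up form:
    # identities that do not fit the template simply cannot be entered.
    ALLOWED_GENDERS = {"male", "female"}  # a binary drop-down menu

    def create_profile(first_name: str, last_name: str, gender: str) -> dict:
        """Force an identity into the template's fields and choices."""
        if gender not in ALLOWED_GENDERS:
            raise ValueError("gender must be 'male' or 'female'")
        # One fixed Western name ordering, regardless of the user's own
        # naming conventions (multiple family names, family name first, etc.).
        return {"display_name": f"{first_name} {last_name}", "gender": gender}

    print(create_profile("Ada", "Lovelace", "female")["display_name"])  # Ada Lovelace
    ```

    Every branch the engineers did not anticipate becomes, for the user, an identity they are not permitted to have.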

    You get to interact with others, similarly, based on the processes that the engineers have determined and designed. On Twitter, for example, you cannot direct message people who do not follow you. All interactions must be 140 characters or less.

    This restriction of the presentation and performance of one’s identity online is what “cyborg anthropologist” Amber Case calls the “templated self.” She defines this as “a self or identity that is produced through various participation architectures, the act of producing a virtual or digital representation of self by filling out a user interface with personal information.”

    Case provides some examples of templated selves:

    Facebook and Twitter are examples of the templated self. The shape of a space affects how one can move, what one does and how one interacts with someone else. It also defines how influential and what constraints there are to that identity. A more flexible, but still templated space is WordPress. A hand-built site is much less templated, as one is free to fully create their digital self in any way possible. Those in Second Life play with and modify templated selves into increasingly unique online identities. MySpace pages are templates, but the lack of constraints can lead to spaces that are considered irritating to others.

    As we—all of us, but particularly teachers and students—move to spend more and more time and effort performing our identities online, being forced to use preordained templates constrains us, rather than—as we have often been told about the Internet—lets us be anyone or say anything online. On the Internet no one knows you’re a dog unless the signup process demanded you give proof of your breed. This seems particularly important to keep in mind when we think about students’ identity development. How are their identities being templated?

    While Case’s examples point to mostly “social” technologies, education technologies are also “participation architectures.” Similarly they produce and restrict a digital representation of the learner’s self.

    Who is building the template? Who is engineering the template? Who is there to demand the template be cracked open? What will the template look like if we’ve chased women and people of color out of programming?

    It’s far too simplistic to say “everyone learn to code” is the best response to the questions I’ve raised here. “Change the ratio.” “Fix the leaky pipeline.” Nonetheless, I’m speaking to a group of educators here. I’m probably supposed to say something about what we can do, right, to make ed-tech more just, not just condemn the narratives that lead us down a path that makes ed-tech less so. What can we do to resist all this hard-coding? What can we do to subvert that hard-coding? What can we do to make technologies that our students—all our students, all of us—can wield? What can we do to make sure that when we say “your assignment involves the Internet” we haven’t triggered half the class with fears of abuse, harassment, exposure, rape, death? What can we do to make sure that when we ask our students to discuss things online, the very infrastructure of the technology we use doesn’t privilege certain voices in certain ways?

    The answer can’t simply be to tell women to not use their real name online, although as someone who started her career blogging under a pseudonym, I do sometimes miss those days. But if part of the argument for participating in the open Web is that students and educators are building a digital portfolio, are building a professional network, are contributing to scholarship, then we have to really think about whether or not promoting pseudonyms is a sufficient or an equitable solution.

    The answer can’t simply be “don’t blog on the open Web.” Or “keep everything inside the ‘safety’ of the walled garden, the learning management system.” If nothing else, this presumes that what happens inside siloed, online spaces is necessarily “safe.” I know I’ve seen plenty of horrible behavior on closed forums, for example, from professors and students alike. I’ve seen heavy-handed moderation, where marginalized voices find their input deleted. I’ve seen zero-moderation, where marginalized voices are mobbed. We recently learned, for example, that Walter Lewin, emeritus professor at MIT, one of the original rockstar professors of YouTube—millions have watched the demonstrations from his physics lectures—has been accused of sexually harassing women in his edX MOOC.

    The answer can’t simply be “just don’t read the comments.” I would say that it might be worth rethinking “comments” on student blogs altogether—or rather the expectation that they host them, moderate them, respond to them. See, if we give students the opportunity to “own their own domain,” to have their own websites, their own space on the Web, we really shouldn’t require them to let anyone who can create a user account into that space. It’s perfectly acceptable to say to someone who wants to comment on a blog post, “Respond on your own site. Link to me. But I am under no obligation to host your thoughts in my domain.”

    And see, that starts to hint at what I think the answer is to this question about the unpleasantness—by design—of technology. It starts to get at what any sort of “solution” or “alternative” has to look like: it has to be both social and technical. It also needs to recognize there’s a history that might help us understand what’s done now and why. If, as I’ve argued, the current shape of education technologies has been shaped by certain ideologies and certain bodies, we should recognize that we aren’t stuck with those. We don’t have to “do” tech as it’s been done in the last few years or decades. We can design differently. We can design around. We can use differently. We can use around.

    One interesting example of this dual approach that combines both social and technical—outside the realm of ed-tech, I recognize—are the tools that Twitter users have built in order to address harassment on the platform. Having grown weary of Twitter’s refusal to address the ways in which it is utilized to harass people (remember, its engineering team is 90% male), a group of feminist developers wrote The Block Bot, an application that lets you block, en masse, a large list of Twitter accounts who are known for being serial harassers. That list of blocked accounts is updated and maintained collaboratively. Similarly, Block Together lets users subscribe to others’ block lists, and Good Game Autoblocker blocks the “ringleaders” of GamerGate.
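    The mechanism these tools share can be sketched quite simply. The class and names below are hypothetical, not taken from any of those tools’ actual code, but the underlying collaborative model is the same: a user subscribes to shared lists and blocks the union of all of them, so updates by the list’s maintainers propagate to every subscriber.

    ```python
    # A minimal sketch (names hypothetical) of a collaboratively maintained
    # block list, in the spirit of The Block Bot and Block Together.
    class BlockListSubscriber:
        def __init__(self):
            self.subscriptions = []  # each subscription is a set of account ids

        def subscribe(self, shared_list: set):
            """Follow a collaboratively maintained block list."""
            self.subscriptions.append(shared_list)

        def is_blocked(self, account: str) -> bool:
            return any(account in bl for bl in self.subscriptions)

    shared = {"serial_harasser_1", "serial_harasser_2"}
    user = BlockListSubscriber()
    user.subscribe(shared)
    print(user.is_blocked("serial_harasser_1"))  # True
    # Updates to the shared list reach subscribers automatically.
    shared.add("serial_harasser_3")
    print(user.is_blocked("serial_harasser_3"))  # True
    ```

    The design choice matters: the labor of identifying harassers is pooled rather than repeated by each individual target.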

    That gets, just a bit, at what I think we can do in order to make education technology habitable, sustainable, and healthy. We have to rethink the technology. And not simply as some nostalgia for a “Web we lost,” for example, but as a move forward to a Web we’ve yet to ever see. It isn’t simply, as Isaacson would posit it, rediscovering innovators that have been erased, it’s about rethinking how these erasures happen all throughout technology’s history and continue today—not just in storytelling, but in code.

    Educators should want ed-tech that is inclusive and equitable. Perhaps education needs reminding of this: we don’t have to adopt tools that serve business goals or administrative purposes, particularly when they are to the detriment of scholarship and/or student agency—technologies that surveil and control and restrict, for example, under the guise of a “safety” that gets trotted out from time to time but that has never been about students’ needs at all. We don’t have to accept that technology needs to extract value from us. We don’t have to accept that technology puts us at risk. We don’t have to accept that the architecture, the infrastructure of these tools make it easy for harassment to occur without any consequences. We can build different and better technologies. And we can build them with and for communities, communities of scholars and communities of learners. We don’t have to be paternalistic as we do so. We don’t have to “protect students from the Internet,” and rehash all the arguments about stranger danger and predators and pedophiles. But we should recognize that if we want education to be online, if we want education to be immersed in technologies, information, and networks, that we can’t really throw students out there alone. We need to be braver and more compassionate and we need to build that into ed-tech. Like The Block Bot or Block Together, this should be a collaborative effort, one that blends our cultural values with technology we build.

    Because here’s the thing. The answer to all of this—to harassment online, to the male domination of the technology industry, to the Silicon Valley domination of ed-tech—is not silence. And the answer is not to let our concerns be explained away. That is, after all, as Rebecca Solnit reminds us, one of the goals of mansplaining: to get us to cower, to hesitate, to doubt ourselves and our stories and our needs, to step back, to shut up. Now more than ever, I think we need to be louder and clearer about what we want education technology to do—for us and with us, not simply to us.
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this review first appeared, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.

    Back to the essay

  • A Dark, Warped Reflection

    A Dark, Warped Reflection

    a review of Charlie Brooker, writer & producer, Black Mirror (BBC/Zeppotron, 2011- )
    by Zachary Loeb
    ~

    Depending upon which sections of the newspaper one reads, it is very easy to come away with two rather conflicting views of the future. If one begins the day by reading the headlines in the “International News” or “Environment” sections it is easy to feel overwhelmed by a sense of anxiety and impending doom; however, if one instead reads the sections devoted to “Business” or “Technology” it is easy to feel confident that there are brighter days ahead. We are promised that soon we shall live in wondrous “Smart” homes where all of our devices work together tirelessly to ensure our every need is met, while drones deliver our every desire and we enjoy ever more immersive entertainment experiences – all of this providing plenty of wondrous investment opportunities…unless, of course, another economic collapse or climate change should spoil these fantasies. Though the juxtaposition between newspaper sections can be jarring, an element of anxiety can generally be detected from one section to the next – even within the “Technology” pages. After all, our devices may have filled our hours with apps and social networking sites, but this does not necessarily mean that they have left us more fulfilled. We have been supplied with all manner of answers, but this does not necessarily mean we had first asked any questions.


    If you could remember everything, would you want to? If a cartoon bear lampooned the pointlessness of elections, would you vote for the bear? Would you participate in psychological torture, if the person being tortured was a criminal? What lengths would you go to if you could not move on from a loved one’s death? These are the types of questions posed by the British television program Black Mirror, wherein anxiety about the technologically riddled future, be it the far future or next week, is the core concern. The paranoid pessimism of this science-fiction anthology program is not a result of a fear of the other or of panic at the prospect of nuclear annihilation – it is instead shaped by nervousness at the way we have become strangers to ourselves. There are no alien invaders, no occult phenomena, and no suit-wearing narrator who makes sure that the viewers understand the moral of each story. Instead what Black Mirror presents is dread – it holds up a “black mirror” (think of any electronic device when the power on the screen is off) to society and refuses to flinch at the reflection.

    Granted, this does not mean that those viewing the program will not flinch.

    [And Now A Brief Digression]

    Before this analysis goes any further it seems worthwhile to pause and make a few things clear. Firstly, and perhaps most importantly, the intention here is not to pass a definitive judgment on the quality of Black Mirror. While there are certainly arguments that can be made regarding how “this episode was better than that one” – this is not the concern here. Nor for that matter is the goal to scoff derisively at Black Mirror and simply dismiss it – the episodes are well written, interestingly directed, and strongly acted. Indeed, that the program can lead to discussion and introspection is perhaps the highest praise that one can bestow upon a piece of widely disseminated popular culture. Secondly, and perhaps even more importantly (depending on your opinion), some of the episodes of Black Mirror rely upon twists and surprises in order to have their full impact upon the viewer. Oftentimes people find it highly frustrating to have these moments revealed to them ahead of time, and thus – in the name of fairness – let this serve as an official “spoiler warning.” The plots of each episode will not be discussed in minute detail in what follows – as the intent here is to consider broader themes and problems – but if you hate “spoilers” you should consider yourself warned.

    [Digression Ends]

    The problem posed by Black Mirror is that in building nervous narratives about the technological tomorrow the program winds up replicating many of the shortcomings of contemporary discussions around technology – shortcomings that make such an unpleasant future seem all the more plausible. While Black Mirror may resist the obvious morality plays of a show like The Twilight Zone, the moral of the episodes may be far less oppositional than it at first seems. The program draws much of its emotional heft by narrowly focusing its stories upon specific individuals, but in so doing the show may function as a sort of precognitive “usage manual,” one that advises “if a day should arrive when you can technologically remember everything…don’t be like the guy in this episode.” The episodes of Black Mirror may call upon viewers to look askance at the future it portrays, but the program also encourages the sort of droll inured acceptance that is characteristic of the people in each episode. Black Mirror is a sleek, hip piece of entertainment, another installment in the contemporary “golden age of television,” and it risks becoming just another program that can be streamed onto any of a person’s black-mirror-like screens. The program is itself very much a part of the same culture industry of the YouTube and Twitter era that the show seems to vilify – it is ready-made for “binge watching.” The program may be disturbing, but its indictments are soft – allowing viewers a distance that permits them to say aloud “I would never do that” even as they are subconsciously unsure.

    Thus, Black Mirror appears as a sort of tragic confirmation of the continuing validity of Jacques Ellul’s comment:

    “One cannot but marvel at an organization which provides the antidote as it distills the poison.” (Ellul, 378)

    For the tales that are spun out in horrifying (or at least discomforting) detail on Black Mirror may appear to be a salve for contemporary society’s technological trajectory – but the show is also a ready made product for the very age that it is critiquing. A salve that does not solve anything, a cultural shock absorber that allows viewers to endure the next wave of shocks. It is a program that demands viewers break away from their attachment to their black mirrors even as it encourages them to watch another episode of Black Mirror. This is not to claim that the show lacks value as a critique; however, the show is less a radical indictment than some may be tempted to give it credit for being. The discomfort people experience while watching the show easily becomes a masochistic penance that allows people to continue walking down the path to the futures outlined in the show. Black Mirror provides the antidote, but it also distills the poison.

    That, however, may be the point.

    [Interrogation 1: Who Bears Responsibility?]

    Technology is, of course, everywhere in Black Mirror – in many episodes it is as much a character as the humans who are trying to come to terms with what the particular device means. In some episodes (“The National Anthem” or “The Waldo Moment”) the technologies that feature prominently are those that would be quite familiar to contemporary viewers: social media platforms like YouTube, Twitter, Facebook and the like. Whilst in other episodes (“The Entire History of You,” “White Bear” and “Be Right Back”) the technologies on display are new and different: an implantable device that records (and can play back) all of one’s memories, something that can induce temporary amnesia, a company that has developed a being that is an impressive mix of robotics and cloning. The stories that are told in Black Mirror, as was mentioned earlier, focus largely on the tales of individuals – “Be Right Back” is primarily about one person’s grief – and though this is a powerful story-telling device (and lest there be any confusion – many of these are very powerfully told stories) one of the questions that lingers unanswered in the background of many of these episodes is: who is behind these technologies?

    In fairness, Black Mirror would likely lose some of its impact if it were to delve deeply into this question. If “The Entire History of You” provided a sci-fi faux-documentary foray into the company that had produced the memory-recording “grains” it would probably not have felt as disturbing as the tale of abuse, sex, violence and obsession that the episode actually presents. Similarly, the piece of science-fiction-grade technology upon which “White Bear” relies functions well in the episode precisely because the key device makes only a rather brief appearance. And yet here an interesting contrast emerges between the episodes set in, or closely around, the present and those that are set further down the timeline – for in the episodes that rely on platforms like YouTube, the viewer technically knows who the interests are behind the various platforms. The episode “The Entire History of You” may be intensely disturbing, but what company was it that developed and brought the “grains” to market? What biotechnology firm supplies the grieving spouse in “Be Right Back” with the robotic/clone of her deceased husband? Who gathers the information from these devices? Where does that information live? Who is profiting? These are important questions that go unanswered, largely because they go unasked.

    Of course, it can be simple to disregard these questions. Dwelling upon them certainly does take something away from the individual episodes and such focus diminishes the entertainment quality of Black Mirror. This is fundamentally why it is so essential to insist that these critical questions be asked. The worlds depicted in episodes of Black Mirror did not “just happen” but are instead a result of layers upon layers of decisions and choices that have wound up shaping these characters’ lives – and it is questionable how much say any of these characters had in those decisions. This is shown in stark relief in “The National Anthem,” in which a befuddled prime minister cannot come to grips with the way that a threat uploaded to YouTube, along with shifts in public opinion as reflected on Twitter, has come to require him to commit a grotesque act; his despair at what he is being compelled to do is a reflection of the new world of politics created by social media. In some ways it is tempting to treat episodes like “The Entire History of You” and “Be Right Back” as retorts to an unflagging adoration for “innovation,” “disruption,” and “permissionless innovation” – for the episodes can be read as a warning that just because we can record and remember everything does not necessarily mean that we should. And yet the presence of such a cultural warning does not mean that such devices will not eventually be brought to market. The denizens of the worlds of Black Mirror are depicted as being at the mercy of the technological current.

    Thus, and here is where the problem truly emerges, the episodes can be treated as simple warnings that state “well, don’t be like this person.” After all, the world of “The Entire History of You” seems to be filled with people who – unlike the obsessive main character – can use the “grain” productively; on a similar note it is easy to imagine many people pointing to “Be Right Back” and saying that the idea of a robotic/clone could be wonderful – just don’t use it to replicate the recently dead; and of course any criticism of social media in “The Waldo Moment” or “The National Anthem” can be met with a retort regarding a blossoming of free expression and the ways in which such platforms can help bolster new protest movements. And yet, similar to the sad protagonist in the film Her, the characters in the story lines of Black Mirror rarely appear as active agents in relation to technology even when they are depicted as truly “choosing” a given device. Rather they have simply been reduced to consumers – whether they are consumers of social media, political campaigns, or an amusement park where the “show” is a person being psychologically tortured day after day.

    This is not to claim that there should be an Apple or Google logo prominently displayed on the “grain” or on the side of the stationary bikes in “Fifteen Million Merits,” nor is it to argue that the people behind these devices should be depicted as cackling corporate monsters – but it would be helpful to have at least some image of the people behind these devices. After all, there are people behind these devices. What were they thinking? Were they not aware of these potential risks? Did they not care? Who bears responsibility? In focusing on the small-scale human stories Black Mirror ignores the fact that there is another all too human story behind all of these technologies. Thus what the program risks replicating is a sort of technological determinism that seems to have nestled itself into the way that people talk about technology these days – a sentiment in which people have no choice but to accept (and buy) what technology firms are selling them. It is not so much, to borrow a line from Star Trek, that “resistance is futile” as that nobody seems to have even considered resistance to be an option in the first place. Granted, we have seen in the not too distant past that such a sentiment is simply not true – Google Glass was once presented as inevitable, but public push-back helped lead to Google (at least temporarily) shelving the device. Alas, one of the most effective ways of convincing people that they are powerless to resist is by bludgeoning them with cultural products that tell them they are powerless to resist. Or better yet, convincing them that they will actually like being “assimilated.”

    Therefore, the key thing to mull over after watching an episode of Black Mirror is not what is presented in the episode but what has been left out. Viewers need to ask the questions the show does not present: who is behind these technologies? What decisions have led to the societal acceptance of these technologies? Did anybody offer resistance to these new technologies? The “6 Questions to Ask of New Technology” posed by media theorist Neil Postman may be of use for these purposes, as might some of the questions posed in Riddled With Questions. The emphasis here is to point out that a danger of Black Mirror is that the viewer winds up being just like one of the characters: a person who simply accepts the technologically wrought world in which they are living without questioning those responsible and without thinking that opposition is possible.

    [Interrogation 2: Utopia Unhinged is not a Dystopia]

    “Dystopia” is a term that has become a fairly prominent feature in popular entertainment today. Bookshelves are filled with tales of doomed futures and many of these titles (particularly those aimed at the “young adult” audience) have a tendency to eventually reach the screens of the cinema. Of course, apocalyptic visions of the future are not limited to the big screen – as numerous television programs attest. For many, it is tempting to use terms such as “dystopia” when discussing the futures portrayed in Black Mirror, and yet the usage of such a term seems rather misleading. True, at least one episode (“Fifteen Million Merits”) is clearly meant to evoke a dystopian far future, but to use that term in relation to many of the other installments seems a bit hyperbolic. After all, “The Waldo Moment” could be set tomorrow and frankly “The National Anthem” could have been set yesterday. To say that Black Mirror is a dystopian show risks taking an overly simplistic stance towards technology in the present as well as towards technology in the future – if the claim is that the show is thoroughly dystopian, then how does one account for the episodes that may as well be set in the present? One can argue that the state of the present world is far less than ideal, one can cast a withering gaze in the direction of social media, one can truly believe that the current trajectory (if not altered) will lead in a negative direction…and yet one can believe all of these things and still resist the urge to label contemporary society a dystopia. Doomsaying can be an enjoyably nihilistic way to pass an afternoon, but it makes for a rather poor critique.

    It may be that what Black Mirror shows is how a dystopia can actually be a private hell instead of a societal one (which would certainly seem true of “White Bear” or “The Entire History of You”), or perhaps what Black Mirror indicates is that a derailed utopia is not automatically a dystopia. Granted, a major criticism of Black Mirror could emphasize that the show has a decidedly “industrialized world/Western world” focus – we do not see the factories where “grains” are manufactured, and the varieties of new smart phones seen in the program suggest that the e-waste must be piling up somewhere. In other words – the derailed utopia of some could still be an outright dystopia for countless others. That the characters in Black Mirror do not seem particularly concerned with who assembled their devices is, alas, a feature all too characteristic of technology users today. Nevertheless, to restate the problem, the issue is not so much the threat of dystopia as it is the continued failure of humanity to use its impressive technological ingenuity to bring about a utopia (or even something “better” than the present). In some ways this provides an echo of Lewis Mumford’s comment, in The Story of Utopias, that:

    “it would be so easy, this business of making over the world if it were only a matter of creating machinery.” (Mumford, 175)

    True, the worlds of Black Mirror, including the ones depicting the world of today, show that “creating machinery” actually is an easy way “of making over the world” – however this does not automatically push things in the utopian direction for which Mumford was pining. Instead what is on display is another installment of the deferred potential of technology.

    The term “another” is not used incidentally here, but is specifically meant to point to the fact that it is nothing new for people to see technology as a source of hope…and then to woefully recognize the way in which such hopes have been dashed time and again. Such a sentiment is visible in much of Walter Benjamin’s writing about technology – writing, as he was, after the mechanized destruction of WWI and on the eve of the technologically enhanced barbarity of WWII. In Benjamin’s essay “Eduard Fuchs, Collector and Historian” he criticizes a strain in positivist/social democratic thinking that had emphasized that technological developments would automatically usher in a more just world, when in fact such attitudes failed to appreciate the scale of the dangers. This leads Benjamin to note:

    “A prognosis was due, but failed to materialize. That failure sealed a process characteristic of the past century: the bungled reception of technology. The process has consisted of a series of energetic, constantly renewed efforts, all attempting to overcome the fact that technology serves this society only by producing commodities.” (Benjamin, 266)

    The century about which Benjamin was writing was not the twenty-first century, and yet these comments about “the bungled reception of technology” and technology which “serves this society only by producing commodities” seem a rather accurate description of the worlds depicted by Black Mirror. And yes, that certainly includes the episodes that are closer to our own day. The point of pulling out this tension, however, is not to emphasize the dystopian element of Black Mirror but to point to the “bungled reception” that is so clearly on display in the program – and by extension in the present day.

    What Black Mirror shows in episode after episode (even in the clearly dystopian one) is the gloomy juxtaposition between what humanity can possibly achieve and what it actually achieves. The tools that could widen democratic participation can be used to allow a cartoon bear to run as a stunt candidate, the devices that allow us to remember the past can ruin the present by keeping us constantly replaying our memories of yesterday, the things that can allow us to connect can make it so that we are unable to ever let go – “energetic, constantly renewed efforts” that all wind up simply “producing commodities.” Indeed, in a tragic-comic turn, Black Mirror demonstrates that amongst the commodities we continue to produce are those that elevate the “bungled reception of technology” to the level of a widely watched and critically lauded television serial.

    The future depicted by Black Mirror may be startling, disheartening and quite depressing, but (except in the cases where the content is explicitly dystopian) it is worth bearing in mind that there is an important difference between dystopia and a world of people living amidst the continued “bungled reception of technology.” Are the people in “The National Anthem” paving the way for “White Bear” and in turn setting the stage for “Fifteen Million Merits”? It is quite possible. But this does not mean that the “reception of technology” must always be “bungled” – though changing our reception of it may require altering our attitude towards it. Here Black Mirror repeats its problematic thrust, for it does not highlight resistance but emphasizes the very attitudes that have “bungled” the reception and which continue to bungle it. Though “Fifteen Million Merits” does feature a character engaging in a brave act of rebellion, this act is immediately used to strengthen the very forces against which the character is rebelling – and thus the episode repeats the refrain “don’t bother resisting, it’s too late anyway.” This is not to suggest that one should focus all one’s hopes upon a farfetched utopian notion, or put faith in a sense of “hope” that is not linked to reality, nor does it mean that one should don sackcloth and begin mourning. Dystopias are cheap these days, but so are the fake utopian dreams that promise a world in which somehow technology will solve all of our problems. And yet, it is worth bearing in mind another comment from Mumford regarding the possibility of utopia:

    “we cannot ignore our utopias. They exist in the same way that north and south exist; if we are not familiar with their classical statements we at least know them as they spring to life each day in our minds. We can never reach the points of the compass; and so no doubt we shall never live in utopia; but without the magnetic needle we should not be able to travel intelligently at all.” (Mumford, 28/29)

    Black Mirror provides a stark portrait of the fake utopian lure that can lead us to the world to which we do not want to go – a world in which the “bungled reception of technology” continues to rule – but in staring horror-struck at where we do not want to go we should not forget to ask where it is that we do want to go. The worlds of Black Mirror are steps in the wrong direction – so ask yourself: what would the steps in the right direction look like?

    [Final Interrogation – Permission to Panic]

    During “The Entire History of You” several characters enjoy a dinner party at which the topic of discussion eventually turns to the benefits and drawbacks of the memory-recording “grains.” Many attitudes towards the “grains” are voiced – ranging from individuals who cannot imagine doing without the “grain” to a woman who has had hers violently removed and who has managed to adjust. While “The Entire History of You” focuses on an obsessed individual who cannot cope with a world in which everything can be remembered, what the dinner party demonstrates is that the same world contains many people who can handle the “grains” just fine. The failed comedian who voices the cartoon bear in “The Waldo Moment” cannot understand why people are drawn to vote for the character he voices – but this does not stop many people from voting for the animated animal. Perhaps most disturbingly, the woman at the center of “White Bear” cannot understand why she is followed by crowds filming her on their smart phones while she is hunted by masked assailants – but this does not stop those filming her from playing an active role in her torture. And so on…and so on…Black Mirror shows that in these horrific worlds there are many people who are quite content with the new status quo. But that not everybody is despairing simply attests to Theodor Adorno and Max Horkheimer’s observation that:

    “A happy life in a world of horror is ignominiously refuted by the mere existence of that world. The latter therefore becomes the essence, the former negligible.” (Adorno and Horkheimer, 93)

    Black Mirror is a complex program, made all the more difficult to consider as the anthology character of the show makes each episode quite different in terms of the issues that it dwells upon. The attitudes towards technology and society that are subtly suggested in the various episodes are in line with the despairing aura that surrounds the various protagonists and antagonists of the episodes. Yet, insofar as Black Mirror advances an ethos it is one of inured acceptance – it is a satire that is both tragedy and comedy. The first episode of the program, “The National Anthem,” is an indictment of a society that cannot tear itself away from the horrors being depicted on screens in a television show that owes its success to keeping people transfixed to horrors being depicted on their screens. The show holds up a “black mirror” to society but what it shows is a world in which the tables are rigged and the audience has already lost – it is a magnificently troubling cultural product that attests to the way the culture industry can (to return to Ellul) provide the antidote even as it distills the poison. Or, to quote Adorno and Horkheimer again (swap out the word “filmgoers” with “tv viewers”):

    “The permanently hopeless situations which grind down filmgoers in daily life are transformed by their reproduction, in some unknown way, into a promise that they may continue to exist. The one needs only to become aware of one’s nullity, to subscribe to one’s own defeat, and one is already a party to it. Society is made up of the desperate and thus falls prey to rackets.” (Adorno and Horkheimer, 123)

    This is the danger of Black Mirror: that it may accustom and inure its viewers to the ugly present it displays while preparing them to fall prey to the “bungled reception” of tomorrow – it inculcates the ethos of “one’s own defeat.” By showing worlds in which people are helpless to do much of anything to challenge the technological society in which they have become cogs, Black Mirror risks perpetuating the sense that the viewers are themselves cogs, that the viewers are themselves helpless. There is an uncomfortable kinship between the tv-viewing characters of “The National Anthem” and the real-world viewer of the episode “The National Anthem” – neither party can look away. Or, to put it more starkly: if you are unable to alter the future, why not simply prepare yourself for it by watching more episodes of Black Mirror? At least that way you will know which characters not to imitate.

    And yet, despite these critiques, it would be unwise to fully disregard the program. It is easy to pull out comments from the likes of Ellul, Adorno, Horkheimer and Mumford that eviscerate a program such as Black Mirror but it may be more important to ask: given Black Mirror’s shortcomings, what value can the show still have? Here it is useful to recall a comment from Günther Anders (whose pessimism was on par with, or exceeded, any of the aforementioned thinkers) – he was referring in this comment to the works of Kafka, but the comment is still useful:

    “from great warnings we should be able to learn, and they should help us to teach others.” (Anders, 98)

    This is where Black Mirror can be useful, not as a series that people sit and watch, but as a piece of culture that leads people to put forth the questions that the show jumps over. At its best what Black Mirror provides is a space in which people can discuss their fears and anxieties about technology without worrying that somebody will, farcically, call them a “Luddite” for daring to have such concerns – and for this reason alone the show may be worthwhile. By highlighting the questions that go unanswered in Black Mirror we may be able to put forth the very queries that are rarely made about technology today. It is true that the reflections seen by staring into Black Mirror are dark, warped and unappealing – but such reflections are only worth something if they compel audiences to rethink their relationships to the black mirrored surfaces in their lives today and which may be in their lives tomorrow. After all, one can look into the mirror in order to see the dirt on one’s face or one can look in the mirror because of a narcissistic urge. The program certainly has the potential to provide a useful reflection, but as with the technology depicted in the show, it is all too easy for such a potential reception to be “bungled.”

    If we are spending too much time gazing at black mirrors, is the solution really to stare at Black Mirror?

    The show may be a satire, but if all people do is watch, then the joke is on the audience.

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck. He is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay
    _____

    Works Cited

    • Adorno, Theodor and Horkheimer, Max. Dialectic of Enlightenment: Philosophical Fragments. Stanford: Stanford University Press, 2002.
    • Anders, Günther. Franz Kafka. New York: Hilary House Publishers LTD, 1960.
    • Benjamin, Walter. Walter Benjamin: Selected Writings. Volume 3, 1935-1938. Cambridge: The Belknap Press, 2002.
    • Ellul, Jacques. The Technological Society. New York: Vintage Books, 1964.
    • Mumford, Lewis. The Story of Utopias. Bibliobazaar, 2008.
  • The Internet vs. Democracy

    The Internet vs. Democracy

    a review of Robert W. McChesney, Digital Disconnect: How Capitalism Is Turning the Internet Against Democracy (The New Press, 2014)
    by Richard Hill
    ~
    Many of us have noticed that much of the news we read is the same, no matter which newspaper or web site we consult: they all seem to be recycling the same agency feeds. To understand why this is happening, there are few better analyses than the one developed by media scholar Robert McChesney in his most recent book, Digital Disconnect. McChesney is a Professor in the Department of Communication at the University of Illinois at Urbana-Champaign, specializing in the history and political economy of communications. He is the author or co-author of more than 20 books, among the best-known of which are The Endless Crisis: How Monopoly-Finance Capital Produces Stagnation and Upheaval from the USA to China (with John Bellamy Foster, 2012), The Political Economy of Media: Enduring Issues, Emerging Dilemmas (2008), Communication Revolution: Critical Junctures and the Future of Media (2007), and Rich Media, Poor Democracy: Communication Politics in Dubious Times (1999), and is co-founder of Free Press.

    Many see the internet as a powerful force for improving human rights, living conditions, the economy, the rights of minorities, and more. And indeed, like many communications technologies, the internet has the potential to facilitate social improvements. But in reality the internet has recently been used to erode privacy and to concentrate economic power, leading to growing income inequality.

    One might have expected democracies to harness the internet to serve the interests of their citizens, as they largely did with other technologies such as roads, telegraphy, telephony, air transport, and pharmaceuticals (even if they used these to serve only the interests of their own citizens and not the general interests of mankind).

    But this does not appear to be the case with respect to the internet: it is used largely to serve the interests of a few very wealthy individuals, or certain geo-economic and geo-political interests. As McChesney puts the matter: “It is supremely ironic that the internet, the much-ballyhooed champion of increased consumer power and cutthroat competition, has become one of the greatest generators of monopoly in economic history” (131 in the print edition). This tendency to use technology to favor special interests rather than the general interest is not unique to the internet. As Josep Ramoneda puts it: “We expected that governments would submit markets to democracy and it turns out that what they do is adapt democracy to markets, that is, empty it little by little.”

    McChesney’s book explains why this is the case: despite its great promise and potential to increase democracy, various factors have turned the internet into a force that is actually destructive to democracy, and that favors special interests.

    McChesney reminds us what democracy is, citing Aristotle (53): “Democracy [is] when the indigent, and not the men of property are the rulers. If liberty and equality … are chiefly to be found in democracy, they will be best attained when all persons alike share in the government to the utmost.”

    He also cites US President Lincoln’s 1861 warning against despotism (55): “the effort to place capital on an equal footing with, if not above, labor in the structure of government.” According to McChesney, it was imperative for Lincoln that the wealthy not be permitted to have undue influence over the government.

    Yet what we see today in the internet is concentrated wealth in the form of large private companies that exert increasing influence over public policy matters, going so far as to call openly for governance systems in which they have equal decision-making rights with the elected representatives of the people. Current internet governance mechanisms are celebrated as paragons of success, whereas in fact they have not been successful in achieving the social promise of the internet. It has even been said that such systems need not be democratic.

    What sense does it make for the technology that was supposed to facilitate democracy to be governed in ways that are not democratic? It makes business sense, of course, in the sense of maximizing profits for shareholders.

    McChesney explains how profit-maximization in the excessively laissez-faire regime that is commonly called neoliberalism has resulted in increasing concentration of power and wealth, social inequality and, worse, erosion of the press, leading to erosion of democracy. Nowhere is this more clearly seen than in the US, which is the focus of McChesney’s book. Not only has the internet eroded democracy in the US, it is used by the US to further its geo-political goals; and, adding insult to injury, it is promoted as a means of furthering democracy. Of course it could and should do so, but unfortunately it does not, as McChesney explains.

    The book starts by noting the importance of the digital revolution and by summarizing the views of those who see it as an engine of good (the celebrants) versus those who point out its limitations and some of its negative effects (the skeptics). McChesney correctly notes that a proper analysis of the digital revolution must be grounded in political economy. Since the digital revolution is occurring in a capitalist system, it is necessarily conditioned by that system, and it necessarily influences that system.

    A chapter is devoted to explaining how and why capitalism does not equal democracy: on the contrary, capitalism can well erode democracy, the contemporary United States being a good example. To dig deeper into the issues, McChesney approaches the internet from the perspective of the political economy of communication. He shows how the internet has profoundly disrupted traditional media, and how, contrary to the rhetoric, it has reduced competition and choice – because the economies of scale and network effects of the new technologies inevitably favor concentration, to the point of creating natural monopolies (who is number two after Facebook? Or Twitter?).

    The book then documents how the initially non-commercial, publicly-subsidized internet was transformed into an eminently commercial, privately-owned capitalist institution, in the worst sense of “capitalist”: domination by large corporations, monopolistic markets, endless advertising, intense lobbying, and cronyism bordering on corruption.

    Having explained what happened in general, McChesney focuses on what happened to journalism and the media in particular. As we all know, it has been a disaster: nobody has yet found a viable business model for respectable online journalism. As McChesney correctly notes, vibrant journalism is a pre-condition for democracy: how can people make informed choices if they do not have access to valid information? The internet was supposed to broaden our sources of information. Sadly, it has not, for the reasons explained in detail in the book. Yet there is hope: McChesney provides concrete suggestions for how to deal with the issue, drawing on actual experiences in well-functioning democracies in Europe.

    The book goes on to call for specific actions that would create a revolution in the digital revolution, bringing it back to its origins: by the people, for the people. McChesney’s proposed actions are consistent with those of certain civil society organizations, and will no doubt be taken up in the forthcoming Internet Social Forum, an initiative whose intent is precisely to revolutionize the digital revolution along the lines outlined by McChesney.

    Anybody who is aware of the many issues threatening the free and open internet, and democracy itself, will find much to reflect upon in Digital Disconnect, not just because of its well-researched and incisive analysis, but also because it provides concrete suggestions for how to address the issues.

    _____

    Richard Hill, an independent consultant based in Geneva, Switzerland, was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He frequently writes about internet governance issues for The b2 Review Digital Studies magazine.

    Back to the essay

  • Race and the Poetic Avant-Garde

    by Dawn Lundy Martin

    The recent Boston Review issue on “Race and the Poetic Avant-Garde” brings together a range of poets and scholars including Erica Hunt, Prageeta Sharma, Cathy Park Hong, Daniel Borzutsky, and Simone White – all of whom will appear in a special upcoming issue of boundary 2 on “Race and Innovation” – to consider the long-held cultural belief that “black” poetry and “avant-garde” poetry are necessarily in separate orbits.

    Both the Boston Review issue and the upcoming boundary2 issue find particular urgency in thinking through considerations of race and experimental poetics as the current controversy around Kenneth Goldsmith’s conceptual art piece (in which he reads the Michael Brown autopsy report) continues to raise questions about the black body, expendability, and how poets might speak in ways that refuse reproductions of race, gender, and class hierarchies. 

  • The Automatic Teacher

    The Automatic Teacher

    By Audrey Watters
    ~

    “For a number of years the writer has had it in mind that a simple machine for automatic testing of intelligence or information was entirely within the realm of possibility. The modern objective test, with its definite systemization of procedure and objectivity of scoring, naturally suggests such a development. Further, even with the modern objective test the burden of scoring (with the present very extensive use of such tests) is nevertheless great enough to make insistent the need for labor-saving devices in such work” – Sidney Pressey, “A Simple Apparatus Which Gives Tests and Scores – And Teaches,” School and Society, 1926

    Ohio State University professor Sidney Pressey first displayed the prototype of his “automatic intelligence testing machine” at the 1924 American Psychological Association meeting. Two years later, he applied for a patent on the device and spent the next decade or so trying to market it (to manufacturers and investors, as well as to schools).

    It wasn’t Pressey’s first commercial move. In 1922 he and his wife Luella Cole published Introduction to the Use of Standard Tests, a “practical” and “non-technical” guide meant “as an introductory handbook in the use of tests” aimed to meet the needs of “the busy teacher, principal or superintendent.” By the mid-1920s, the two had over a dozen different proprietary standardized tests on the market, selling a couple of hundred thousand copies a year, along with some two million test blanks.

    Although standardized tests had become commonplace in the classroom by the 1920s, they were already placing a significant burden upon those teachers and clerks tasked with scoring them. Hoping to capitalize yet again on the test-taking industry, Pressey argued that automation could “free the teacher from much of the present-day drudgery of paper-grading drill, and information-fixing – should free her for real teaching of the inspirational.”

    [Image: Pressey’s machines]

    The Automatic Teacher

    Here’s how Pressey described the machine, which he branded as the Automatic Teacher in his 1926 School and Society article:

    The apparatus is about the size of an ordinary portable typewriter – though much simpler. …The person who is using the machine finds presented to him in a little window a typewritten or mimeographed question of the ordinary selective-answer type – for instance:

    To help the poor debtors of England, James Oglethorpe founded the colony of (1) Connecticut, (2) Delaware, (3) Maryland, (4) Georgia.

    To one side of the apparatus are four keys. Suppose now that the person taking the test considers Answer 4 to be the correct answer. He then presses Key 4 and so indicates his reply to the question. The pressing of the key operates to turn up a new question, to which the subject responds in the same fashion. The apparatus counts the number of his correct responses on a little counter to the back of the machine…. All the person taking the test has to do, then, is to read each question as it appears and press a key to indicate his answer. And the labor of the person giving and scoring the test is confined simply to slipping the test sheet into the device at the beginning (this is done exactly as one slips a sheet of paper into a typewriter), and noting on the counter the total score, after the subject has finished.

    The above paragraph describes the operation of the apparatus if it is being used simply to test. If it is to be used also to teach then a little lever to the back is raised. This automatically shifts the mechanism so that a new question is not rolled up until the correct answer to the question to which the subject is responding is found. However, the counter counts all tries.

    It should be emphasized that, for most purposes, this second set is by all odds the most valuable and interesting. With this second set the device is exceptionally valuable for testing, since it is possible for the subject to make more than one mistake on a question – a feature which is, so far as the writer knows, entirely unique and which appears decidedly to increase the significance of the score. However, in the way in which it functions at the same time as an ‘automatic teacher’ the device is still more unusual. It tells the subject at once when he makes a mistake (there is no waiting several days, until a corrected paper is returned, before he knows where he is right and where wrong). It keeps each question on which he makes an error before him until he finds the right answer; he must get the correct answer to each question before he can go on to the next. When he does give the right answer, the apparatus informs him immediately to that effect. If he runs the material through the little machine again, it measures for him his progress in mastery of the topics dealt with. In short the apparatus provides in very interesting ways for efficient learning.
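    The two modes Pressey describes amount to a small state machine, and they can be sketched in a few lines of code. This is purely an illustrative simulation with invented names, not anything drawn from Pressey’s patent: in test mode every key press rolls up a new question and the counter tallies correct answers; in teach mode the question stays in the window until the right key is pressed, while the counter tallies every try.

    ```python
    class AutomaticTeacher:
        """Toy model of the two modes of Pressey's apparatus (names invented)."""

        def __init__(self, questions, teach_mode=False):
            # questions: list of (prompt, correct_key) pairs; keys are 1-4
            self.questions = questions
            self.teach_mode = teach_mode
            self.index = 0      # which question appears in the little window
            self.counter = 0    # the counter at the back of the machine

        def press_key(self, key):
            """Press one of the four keys; returns True when the sheet is finished."""
            if self.index >= len(self.questions):
                return True
            _, correct = self.questions[self.index]
            if self.teach_mode:
                self.counter += 1          # teach mode counts all tries
                if key == correct:
                    self.index += 1        # a new question rolls up only when correct
            else:
                if key == correct:
                    self.counter += 1      # test mode counts correct responses
                self.index += 1            # and always rolls up a new question
            return self.index >= len(self.questions)

    questions = [("Oglethorpe founded the colony of ...", 4),
                 ("Another selective-answer question ...", 2)]

    test_run = AutomaticTeacher(questions)
    test_run.press_key(4)                  # correct
    test_run.press_key(1)                  # wrong, but advances anyway
    print(test_run.counter)                # 1 correct answer

    teach_run = AutomaticTeacher(questions, teach_mode=True)
    teach_run.press_key(1)                 # wrong: question stays put
    teach_run.press_key(4)                 # now correct
    teach_run.press_key(2)                 # correct
    print(teach_run.counter)               # 3 tries in all
    ```

    The difference in the two counters captures the feature Pressey thought “entirely unique”: in teach mode a learner can make more than one error on a single question, so the score reflects effort as well as accuracy.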

    A video from 1964 shows Pressey demonstrating his “teaching machine,” including the “reward dial” feature that could be set to dispense a candy once a certain number of correct answers were given:

    [youtube https://www.youtube.com/watch?v=n7OfEXWuulg?rel=0]

    Market Failure

    UBC’s Stephen Petrina documents the commercial failure of the Automatic Teacher in his 2004 article “Sidney Pressey and the Automation of Education, 1924–1934.” According to Petrina, Pressey started looking for investors for his machine in December 1925, “first among publishers and manufacturers of typewriters, adding machines, and mimeograph machines, and later, in the spring of 1926, extending his search to scientific instrument makers.” He approached at least six Midwestern manufacturers in 1926, but no one was interested.

    In 1929, Pressey finally signed a contract with the W. M. Welch Manufacturing Company, a Chicago-based company that produced scientific instruments.

    Petrina writes:

    After so many disappointments, Pressey was impatient: he offered to forgo royalties on two hundred machines if Welch could keep the price per copy at five dollars, and he himself submitted an order for thirty machines to be used in a summer course he taught school administrators. A few months later he offered to put up twelve hundred dollars to cover tooling costs. Medard W. Welch, sales manager of Welch Manufacturing, however, advised a “slower, more conservative approach.” Fifteen dollars per machine was a more realistic price, he thought, and he offered to refund Pressey fifteen dollars per machine sold until Pressey recouped his twelve-hundred-dollar investment. Drawing on nearly fifty years experience selling to schools, Welch was reluctant to rush into any project that depended on classroom reforms. He preferred to send out circulars advertising the Automatic Teacher, solicit orders, and then proceed with production if a demand materialized.

    [Image: Pressey advertisement]

    The demand never really materialized, and even if it had, the manufacturing process – getting the device to market – was plagued with problems, caused in part by Pressey’s constant demands to redefine and retool the machines.

    The stress from the development of the Automatic Teacher took an enormous toll on Pressey’s health, and he had a breakdown in late 1929. (He was still teaching, supervising courses, and advising graduate students at Ohio State University.)

    The devices did finally ship in April 1930. But that original sales price was cost-prohibitive. $15 was, as Petrina notes, “more than half the annual cost ($29.27) of educating a student in the United States in 1930.” Welch could not sell the machines and ceased production with 69 of the original run of 250 devices still in stock.

    Pressey admitted defeat. In a 1932 School and Society article, he wrote “The writer is regretfully dropping further work on these problems. But he hopes that enough has been done to stimulate other workers.”

    But Pressey didn’t really abandon the teaching machine; he continued to present his research at APA meetings. And in a 1964 article, “Teaching Machines (And Learning Theory) Crisis,” he conceded that “Much seems very wrong about current attempts at auto-instruction.”

    Indeed.

    Automation and Individualization

    In his 1932 article “Toward the Coming ‘Industrial Revolution’ in Education,” Pressey wrote:

    “Education is the one major activity in this country which is still in a crude handicraft stage. But the economic depression may here work beneficially, in that it may force the consideration of efficiency and the need for laborsaving devices in education. Education is a large-scale industry; it should use quantity production methods. This does not mean, in any unfortunate sense, the mechanization of education. It does mean freeing the teacher from the drudgeries of her work so that she may do more real teaching, giving the pupil more adequate guidance in his learning. There may well be an ‘industrial revolution’ in education. The ultimate results should be highly beneficial. Perhaps only by such means can universal education be made effective.”

    Pressey intended for his automated teaching and testing machines to individualize education. It’s an argument that’s made about teaching machines today too. These devices will allow students to move at their own pace through the curriculum. They will free up teachers’ time to work more closely with individual students.

    But as Petrina argues, “the effect of automation was control and standardization.”

    The Automatic Teacher was a technology of normalization, but it was at the same time a product of liberality. The Automatic Teacher provided for self-instruction and self-regulated, therapeutic treatment. It was designed to provide the right kind and amount of treatment for individual, scholastic deficiencies; thus, it was individualizing. Pressey articulated this liberal rationale during the 1920s and 1930s, and again in the 1950s and 1960s. Although intended as an act of freedom, the self-instruction provided by an Automatic Teacher also habituated learners to the authoritative norms underwriting self-regulation and self-governance. They not only learned to think in and about school subjects (arithmetic, geography, history), but also how to discipline themselves within this imposed structure. They were regulated not only through the knowledge and power embedded in the school subjects but also through the self-governance of their moral conduct. Both knowledge and personality were normalized in the minutiae of individualization and in the machinations of mass education. Freedom from the confines of mass education proved to be a contradictory project and, if Pressey’s case is representative, one more easily automated than commercialized.

    Those behind the massive influx of venture capital into today’s teaching machines, of course, would like to see otherwise…
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this review first appeared.

    Back to the essay

  • Artificial Intelligence as Alien Intelligence

    Artificial Intelligence as Alien Intelligence

    By Dale Carrico
    ~

    Science fiction is a genre of literature in which artifacts and techniques humans devise as exemplary expressions of our intelligence result in problems that perplex our intelligence or even bring it into existential crisis. It is scarcely surprising that a genre so preoccupied with the status and scope of intelligence would provide endless variations on the conceits of either the construction of artificial intelligences or contact with alien intelligences.

    Of course, both the making of artificial intelligence and making contact with alien intelligence are organized efforts to which many humans are actually devoted, and not simply imaginative sites in which writers spin their allegories and exhibit their symptoms. It is interesting that after generations of failure the practical efforts to construct artificial intelligence or contact alien intelligence have often shunted their adherents to the margins of scientific consensus and invested these efforts with the coloration of scientific subcultures: While computer science and the search for extraterrestrial intelligence both remain legitimate fields of research, both AI and aliens also attract subcultural enthusiasms and resonate with cultic theology, each attracts its consumer fandoms and public Cons, each has its True Believers and even its UFO cults and Robot cults at the extremities.

    Champions of artificial intelligence in particular have coped in many ways with the serial failure of their project to achieve its desired end (which is not to deny that the project has borne fruit) whatever the confidence with which generation after generation of these champions have insisted that desired end is near. Some have turned to more modest computational ambitions, making useful software or mischievous algorithms in which sad vestiges of the older dreams can still be seen to cling. Some are simply stubborn dead-enders for Good Old Fashioned AI‘s expected eventual and even imminent vindication, all appearances to the contrary notwithstanding. And still others have doubled down, distracting attention from the failures and problems bedeviling AI discourse simply by raising its pitch and stakes, no longer promising that artificial intelligence is around the corner but warning that artificial super-intelligence is coming soon to end human history.

    alien planet

    Another strategy for coping with the failure of artificial intelligence on its conventional terms has assumed a higher profile among its champions lately, drawing support for the real plausibility of one science-fictional conceit — construction of artificial intelligence — by appealing to another science-fictional conceit, contact with alien intelligence. This rhetorical gambit has often been conjoined to the compensation of failed AI with its hyperbolic amplification into super-AI which I have already mentioned, and it is in that context that I have written about it before myself. But in a piece published a few days ago in The New York Times, “Outing A.I.: Beyond the Turing Test,” Benjamin Bratton, a professor of visual arts at U.C. San Diego and Director of a design think-tank, has elaborated a comparatively sophisticated case for treating artificial intelligence as alien intelligence with which we can productively grapple. Near the conclusion of his piece Bratton declares that “Musk, Gates and Hawking made headlines by speaking to the dangers that A.I. may pose. Their points are important, but I fear were largely misunderstood by many readers.” Of course these figures made their headlines by making the arguments about super-intelligence I have already rejected, and mentioning them seems to indicate Bratton’s sympathy with their gambit and even suggests that his argument aims to help us to understand them better on their own terms. Nevertheless, I take Bratton’s argument seriously not because of but in spite of this connection. Ultimately, Bratton makes a case for understanding AI as alien that does not depend on the deranging hyperbole and marketing of robocalypse or robo-rapture for its force.

    In the piece, Bratton claims “Our popular conception of artificial intelligence is distorted by an anthropocentric fallacy.” The point is, of course, well taken, and the litany he rehearses to illustrate it is enormously familiar by now as he proceeds to survey popular images from Kubrick’s HAL to Jonze’s Her and to document public deliberation about the significance of computation articulated through such imagery as the “rise of the machines” in the Terminator franchise or the need for Asimov’s famous fictional “Three Laws of Robotics.” It is easy — and may nonetheless be quite important — to agree with Bratton’s observation that our computational/media devices lack cruel intentions and are not susceptible to Asimovian consciences, and hence thinking about the threats and promises and meanings of these devices through such frames and figures is not particularly helpful to us even though we habitually recur to them by now. As I say, it would be easy and important to agree with such a claim, but Bratton’s proposal is in fact somewhat a different one:

    [A] mature A.I. is not necessarily a humanlike intelligence, or one that is at our disposal. If we look for A.I. in the wrong ways, it may emerge in forms that are needlessly difficult to recognize, amplifying its risks and retarding its benefits. This is not just a concern for the future. A.I. is already out of the lab and deep into the fabric of things. “Soft A.I.,” such as Apple’s Siri and Amazon recommendation engines, along with infrastructural A.I., such as high-speed algorithmic trading, smart vehicles and industrial robotics, are increasingly a part of everyday life.

    Here the serial failure of the program of artificial intelligence is redeemed simply by declaring victory. Bratton demonstrates that crying uncle does not preclude one from still crying wolf. It’s not that Siri is some sickly premonition of the AI-daydream still endlessly deferred, but that it represents the real rise of what robot cultist Hans Moravec once promised would be our “mind children” but here and now as elfen aliens with an intelligence unto themselves. It’s not that calling a dumb car a “smart” car is simply a hilarious bit of obvious marketing hyperbole, but represents the recognition of a new order of intelligent machines among us. Rather than criticize the way we may be “amplifying its risks and retarding its benefits” by reading computation through the inapt lens of intelligence at all, he proposes that we should resist holding machine intelligence to the standards that have hitherto defined it for fear of making its recognition “too difficult.”

    The kernel of legitimacy in Bratton’s inquiry is its recognition that “intelligence is notoriously difficult to define and human intelligence simply can’t exhaust the possibilities.” To deny these modest reminders is to indulge in what he calls “the pretentious folklore” of anthropocentrism. I agree that anthropocentrism in our attributions of intelligence has facilitated great violence and exploitation in the world, denying the dignity and standing of Cetaceans and Great Apes, but has also facilitated racist, sexist, xenophobic travesties by denigrating humans as beastly and unintelligent objects at the disposal of “intelligent” masters. “Some philosophers write about the possible ethical ‘rights’ of A.I. as sentient entities, but,” Bratton is quick to insist, “that’s not my point here.” Given his insistence that the “advent of robust inhuman A.I.” will force a “reality-based” “disenchantment” to “abolish the false centrality and absolute specialness of human thought and species-being” which he blames in his concluding paragraph with providing “theological and legislative comfort to chattel slavery” it is not entirely clear to me that emancipating artificial aliens is not finally among the stakes that move his argument whatever his protestations to the contrary. But one can forgive him for not dwelling on such concerns: the denial of an intelligence and sensitivity provoking responsiveness and demanding responsibilities in us all to women, people of color, foreigners, children, the different, the suffering, nonhuman animals compels defensive and evasive circumlocutions that are simply not needed to deny intelligence and standing to an abacus or a desk lamp. It is one thing to warn of the anthropocentric fallacy but another to indulge in the pathetic fallacy.

    Bratton insists to the contrary that his primary concern is that anthropocentrism skews our assessment of real risks and benefits. “Unfortunately, the popular conception of A.I., at least as depicted in countless movies, games and books, still seems to assume that humanlike characteristics (anger, jealousy, confusion, avarice, pride, desire, not to mention cold alienation) are the most important ones to be on the lookout for.” And of course he is right. The champions of AI have been more than complicit in this popular conception, eager to attract attention and funds for their project among technoscientific illiterates drawn to such dramatic narratives. But we are distracted from the real risks of computation so long as we expect risks to arise from a machinic malevolence that has never been on offer nor even in the offing. Writes Bratton: “Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant. Worse than being seen as an enemy is not being seen at all.”

    But surely the inevitable question posed by Bratton’s disenchanting exposé at this point should be: Why, once we have set aside the pretentious folklore of machines with diabolical malevolence, do we not set aside as no less pretentiously folkloric the attribution of diabolical indifference to machines? Why, once we have set aside the delusive confusion of machine behavior with (actual or eventual) human intelligence, do we not set aside as no less delusive the confusion of machine behavior with intelligence altogether? There is no question that, were a gigantic bulldozer with an incapacitated driver to swerve from a construction site onto a crowded city thoroughfare, it would represent a considerable threat, but however tempting it might be, in the fraught moment or the reflective aftermath, poetically to invest that bulldozer with either agency or intellect, it is clear that nothing would be gained in the practical comprehension of the threat it poses by so doing. It is no more helpful now, in an epoch of Greenhouse storms, than it was for pre-scientific storytellers to invest thunder and whirlwinds with intelligence. Although Bratton makes great play over the need to overcome folkloric anthropocentrism in our figuration of and deliberation over computation, mystifying agencies and mythical personages linger on in his accounting, however much he insists on the alienness of “their” intelligence.

    Bratton warns us about the “infrastructural A.I.” of high-speed financial trading algorithms, Google and Amazon search algorithms, “smart” vehicles (and no doubt weaponized drones and autonomous weapons systems would count among these), and corporate-military profiling programs that oppress us with surveillance and harass us with targeted ads. I share all of these concerns, of course, but personally insist that our critical engagement with infrastructural coding is profoundly undermined when it is invested with insinuations of autonomous intelligence. In “The Work of Art in the Age of Mechanical Reproduction,” Walter Benjamin pointed out that when philosophers talk about the historical force of art they do so with the prejudices of philosophers: they tend to write about those narrative and visual forms of art whose allegorical and iconic modes seem argumentative, and hence analogous to the concentrated modes of thought demanded by philosophy itself. Benjamin proposed that perhaps the more diffuse and distracted ways we are shaped in our assumptions and aspirations by the durable affordances and constraints of the made world of architecture and agriculture might turn out to drive history as much as or even more than the pet artforms of philosophers do. Lawrence Lessig made much the same point when he declared at the turn of the millennium that “Code Is Law.”

    It is well known that special interests with rich patrons shape the legislative process and sometimes even explicitly craft legislation word for word in ways that benefit them to the cost and risk of majorities. It is hard to see how our assessment of this ongoing crime and danger would be helped and not hindered by pretending legislation is an autonomous force exhibiting an alien intelligence, rather than a constellation of practices, norms, laws, institutions, ritual and material artifice, the legacy of the historical play of intelligent actors and the site for the ongoing contention of intelligent actors here and now. To figure legislation as a beast or alien with a will of its own would amount to a fetishistic displacement of intelligence away from the actual actors actually responsible for the forms that legislation actually takes. It is easy to see why such a displacement is attractive: it profitably abets the abuses of majorities by minorities while it absolves majorities from conscious complicity in the terms of their own exploitation by laws made, after all, in our names. But while these consoling fantasies have an obvious allure this hardly justifies our endorsement of them.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that the collapse of global finance in 2008 represented the working of inscrutable artificial intelligences facilitating rapid transactions and supporting novel financial instruments of what was called by Long Boom digerati the “new economy.” I wrote:

    It is not computers and programs and autonomous techno-agents who are the protagonists of the still unfolding crime of predatory plutocratic wealth-concentration and anti-democratizing austerity. The villains of this bloodsoaked epic are the bankers and auditors and captured-regulators and neoliberal ministers who employed these programs and instruments for parochial gain and who then exonerated and rationalized and still enable their crimes. Our financial markets are not so complex we no longer understand them. In fact everybody knows exactly what is going on. Everybody understands everything. Fraudsters [are] engaged in very conventional, very recognizable, very straightforward but unprecedentedly massive acts of fraud and theft under the cover of lies.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that our discomfiture in the setting of ubiquitous algorithmic mediation results from an autonomous force to which human intentions are secondary considerations. I wrote:

    [W]hat imaginary scene is being conjured up in this exculpatory rhetoric in which inadvertent cruelty is ‘coming from code’ as opposed to coming from actual persons? Aren’t coders actual persons, for example? … [O]f course I know what [is] mean[t by the insistence…] that none of this was ‘a deliberate assault.’ But it occurs to me that it requires the least imaginable measure of thought on the part of those actually responsible for this code to recognize that the cruelty of [one user’s] confrontation with their algorithm was the inevitable at least occasional result for no small number of the human beings who use Facebook and who live lives that attest to suffering, defeat, humiliation, and loss as well as to parties and promotions and vacations… What if the conspicuousness of [this] experience of algorithmic cruelty indicates less an exceptional circumstance than the clarifying exposure of a more general failure, a more ubiquitous cruelty? … We all joke about the ridiculous substitutions performed by autocorrect functions, or the laughable recommendations that follow from the odd purchase of a book from Amazon or an outing from Groupon. We should joke, but don’t, when people treat a word cloud as an analysis of a speech or an essay. We don’t joke so much when a credit score substitutes for the judgment whether a citizen deserves the chance to become a homeowner or start a small business, or when a Big Data profile substitutes for the judgment whether a citizen should become a heat signature for a drone committing extrajudicial murder in all of our names. [An] experience of algorithmic cruelty [may be] extraordinary, but that does not mean it cannot also be a window onto an experience of algorithmic cruelty that is ordinary. The question whether we might still ‘opt out’ from the ordinary cruelty of algorithmic mediation is not a design question at all, but an urgent political one.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that so-called Killer Robots are a threat that must be engaged by resisting or banning “them” in their alterity rather than by assigning moral and criminal responsibility to those who code, manufacture, fund, and deploy them. I wrote:

    Well-meaning opponents of war atrocities and engines of war would do well to think how tech companies stand to benefit from military contracts for ‘smarter’ software and bleeding-edge gizmos when terrorized and technoscientifically illiterate majorities and public officials take SillyCon Valley’s warnings seriously about our ‘complacency’ in the face of truly autonomous weapons and artificial super-intelligence that do not exist. It is crucial that necessary regulation and even banning of dangerous ‘autonomous weapons’ proceeds in a way that does not abet the mis-attribution of agency, and hence accountability, to devices. Every ‘autonomous’ weapons system expresses and mediates decisions by responsible humans usually all too eager to disavow the blood on their hands. Every legitimate fear of ‘killer robots’ is best addressed by making their coders, designers, manufacturers, officials, and operators accountable for criminal and unethical tools and uses of tools… There simply is no such thing as a smart bomb. Every bomb is stupid. There is no such thing as an autonomous weapon. Every weapon is deployed. The only killer robots that actually exist are human beings waging and profiting from war.

    “Arguably,” argues Bratton, “the Anthropocene itself is due less to technology run amok than to the humanist legacy that understands the world as having been given for our needs and created in our image. We hear this in the words of thought leaders who evangelize the superiority of a world where machines are subservient to the needs and wishes of humanity… This is the sentiment — this philosophy of technology exactly — that is the basic algorithm of the Anthropocenic predicament, and consenting to it would also foreclose adequate encounters with A.I.” The Anthropocene in this formulation names the emergence of environmental or planetary consciousness, an emergence sometimes coupled to the global circulation of the image of the fragility and interdependence of the whole earth as seen by humans from outer space. It is the recognition that the world in which we evolved to flourish might be impacted by our collective actions in ways that threaten us all. Notice, by the way, that multiculture and historical struggle are figured as just another “algorithm” here.

    I do not agree that planetary catastrophe inevitably followed from the conception of the earth as a gift bestowed on us to sustain us; indeed, this premise, understood in terms of stewardship or commonwealth, would go far in correcting and preventing such careless destruction, in my opinion. It is the false and facile (indeed infantile) conception of a finite world somehow equal to infinite human desires that has landed us in, and keeps us delusive ignoramuses lodged in, this genocidal and suicidal predicament. Certainly I agree with Bratton that it would be wrong to attribute the waste and pollution and depletion of our common resources by extractive-industrial-consumer societies indifferent to ecosystemic limits to “technology run amok.” The problem with so saying is not that to do so disrespects “technology” — as presumably in his view no longer treating machines as properly “subservient to the needs and wishes of humanity” would more wholesomely respect “technology,” whatever that is supposed to mean — since of course technology does not exist in this general or abstract way to be respected or disrespected.

    The reality at hand is that humans are running amok in ways that are facilitated and mediated by certain technologies. What is demanded in this moment by our predicament is the clear-eyed assessment of the long-term costs, risks, and benefits of technoscientific interventions into finite ecosystems to the actual diversity of their stakeholders and the distribution of these costs, risks, and benefits in an equitable way. Quite a lot of unsustainable extractive and industrial production as well as mass consumption and waste would be rendered unprofitable and unappealing were its costs and risks widely recognized and equitably distributed. Such an understanding suggests that what is wanted is to insist on the culpability and situation of actually intelligent human actors, mediated and facilitated as they are in enormously complicated and demanding ways by technique and artifice. The last thing we need to do is invest technology-in-general or environmental-forces with alien intelligence or agency apart from ourselves.

    I am beginning to wonder whether the unavoidable and in many ways humbling recognition (unavoidable not least because of environmental catastrophe and global neoliberal precarization) that human agency emerges out of enormously complex and dynamic ensembles of interdependent/prostheticized actors gives rise to compensatory investments of some artifacts — especially digital networks, weapons of mass destruction, pandemic diseases, environmental forces — with the sovereign aspect of agency we no longer believe in for ourselves. It is strangely consoling to pretend our technologies in some fancied monolithic construal represent the rise of “alien intelligences,” even threatening ones, other than and apart from ourselves, not least because our own intelligence is an alienated one, prostheticized through and through. Consider the indispensability of pedagogical techniques of rote memorization, the metaphorization and narrativization of rhetoric in songs and stories and craft, the technique of the memory palace, the technologies of writing and reading, the articulation of metabolism and duration by timepieces, the shaping of both the body and its bearing by habit and by athletic training, the lifelong interplay of infrastructure and consciousness: all human intellect is already technique. All culture is prosthetic and all prostheses are culture.

    Bratton wants to narrate as a kind of progressive enlightenment the mystification he recommends that would invest computation with alien intelligence and agency while at once divesting intelligent human actors, coders, funders, users of computation of responsibility for the violations and abuses of other humans enabled and mediated by that computation. This investment with intelligence and divestment of responsibility he likens to the Copernican Revolution in which humans sustained the momentary humiliation of realizing that they were not the center of the universe but received in exchange the eventual compensation of incredible powers of prediction and control. One might wonder whether the exchange of the faith that humanity was the apple of God’s eye for a new technoscientific faith in which we aspired toward godlike powers ourselves was really so much a humiliation as the exchange of one megalomania for another. But what I want to recall by way of conclusion instead is that the trope of a Copernican humiliation of the intelligent human subject is already quite a familiar one:

    In his Introductory Lectures on Psychoanalysis Sigmund Freud notoriously proposed that

    In the course of centuries the naive self-love of men has had to submit to two major blows at the hands of science. The first was when they learnt that our earth was not the center of the universe but only a tiny fragment of a cosmic system of scarcely imaginable vastness. This is associated in our minds with the name of Copernicus… The second blow fell when biological research destroyed man’s supposedly privileged place in creation and proved his descent from the animal kingdom and his ineradicable animal nature. This revaluation has been accomplished in our own days by Darwin… though not without the most violent contemporary opposition. But human megalomania will have suffered its third and most wounding blow from the psychological research of the present time which seeks to prove to the ego that it is not even master in its own house, but must content itself with scanty information of what is going on unconsciously in the mind.

    However we may feel about psychoanalysis as a pseudo-scientific enterprise that did more therapeutic harm than good, Freud’s works considered instead as contributions to moral philosophy and cultural theory have few modern equals. The idea that human consciousness is split from the beginning as the very condition of its constitution, the creative if self-destructive result of an impulse of rational self-preservation beset by the overabundant irrationality of humanity and history, imposed a modesty incomparably more demanding than Bratton’s wan proposal in the same name. Indeed, to the extent that the irrational drives of the dynamic unconscious are often figured as a brute machinic automatism, one is tempted to suggest that Bratton’s modest proposal of alien artifactual intelligence is a fetishistic disavowal of the greater modesty demanded by the alienating recognition of the stratification of human intelligence by unconscious forces (and his moniker a symptomatic citation). What is striking about the language of psychoanalysis is the way it has been taken up to provide resources for imaginative empathy across the gulf of differences: whether in the extraordinary work of recent generations of feminist, queer, and postcolonial scholars re-orienting the project of the conspicuously sexist, heterosexist, cissexist, racist, imperialist, bourgeois thinker who was Freud to emancipatory ends, or in the stunning leaps in which Freud identified with neurotic others through psychoanalytic reading, going so far as to find in the paranoid system-building of the psychotic Dr. Schreber an exemplar of human science and civilization and a mirror in which he could see reflected both himself and psychoanalysis itself. Freud’s Copernican humiliation opened up new possibilities of responsiveness in difference out of which could be built urgently necessary responsibilities otherwise. 
    I worry that Bratton’s Copernican modesty opens up new occasions for techno-fetishistic fables of history and disavowals of responsibility for its actual human protagonists.
    _____

    Dale Carrico is a member of the visiting faculty at the San Francisco Art Institute as well as a lecturer in the Department of Rhetoric at the University of California at Berkeley from which he received his PhD in 2005. His work focuses on the politics of science and technology, especially peer-to-peer formations and global development discourse and is informed by a commitment to democratic socialism (or social democracy, if that freaks you out less), environmental justice critique, and queer theory. He is a persistent critic of futurological discourses, especially on his Amor Mundi blog, on which an earlier version of this post first appeared.
