boundary 2

  • Artificial Intelligence as Alien Intelligence

    By Dale Carrico
    ~

    Science fiction is a genre of literature in which artifacts and techniques humans devise as exemplary expressions of our intelligence result in problems that perplex our intelligence or even bring it into existential crisis. It is scarcely surprising that a genre so preoccupied with the status and scope of intelligence would provide endless variations on the conceits of either the construction of artificial intelligences or contact with alien intelligences.

    Of course, both the making of artificial intelligence and making contact with alien intelligence are organized efforts to which many humans are actually devoted, and not simply imaginative sites in which writers spin their allegories and exhibit their symptoms. It is interesting that after generations of failure the practical efforts to construct artificial intelligence or contact alien intelligence have often shunted their adherents to the margins of scientific consensus and invested these efforts with the coloration of scientific subcultures: While computer science and the search for extraterrestrial intelligence both remain legitimate fields of research, both AI and aliens also attract subcultural enthusiasms and resonate with cultic theology, each attracts its consumer fandoms and public Cons, each has its True Believers and even its UFO cults and Robot cults at the extremities.

    Champions of artificial intelligence in particular have coped in many ways with the serial failure of their project to achieve its desired end (which is not to deny that the project has borne fruit), whatever the confidence with which generation after generation of these champions has insisted that this desired end is near. Some have turned to more modest computational ambitions, making useful software or mischievous algorithms in which sad vestiges of the older dreams can still be seen to cling. Some are simply stubborn dead-enders for Good Old Fashioned AI’s expected eventual and even imminent vindication, all appearances to the contrary notwithstanding. And still others have doubled down, distracting attention from the failures and problems bedeviling AI discourse simply by raising its pitch and stakes, no longer promising that artificial intelligence is around the corner but warning that artificial super-intelligence is coming soon to end human history.

    Another strategy for coping with the failure of artificial intelligence on its conventional terms has assumed a higher profile among its champions lately, drawing support for the real plausibility of one science-fictional conceit — construction of artificial intelligence — by appealing to another science-fictional conceit, contact with alien intelligence. This rhetorical gambit has often been conjoined to the compensation of failed AI with its hyperbolic amplification into super-AI which I have already mentioned, and it is in that context that I have written about it before myself. But in a piece published a few days ago in The New York Times, “Outing A.I.: Beyond the Turing Test,” Benjamin Bratton, a professor of visual arts at U.C. San Diego and Director of a design think-tank, has elaborated a comparatively sophisticated case for treating artificial intelligence as alien intelligence with which we can productively grapple. Near the conclusion of his piece Bratton declares that “Musk, Gates and Hawking made headlines by speaking to the dangers that A.I. may pose. Their points are important, but I fear were largely misunderstood by many readers.” Of course these figures made their headlines by making the arguments about super-intelligence I have already rejected, and mentioning them seems to indicate Bratton’s sympathy with their gambit and even suggests that his argument aims to help us to understand them better on their own terms. Nevertheless, I take Bratton’s argument seriously not because of but in spite of this connection. Ultimately, Bratton makes a case for understanding AI as alien that does not depend on the deranging hyperbole and marketing of robocalypse or robo-rapture for its force.

    In the piece, Bratton claims “Our popular conception of artificial intelligence is distorted by an anthropocentric fallacy.” The point is, of course, well taken, and the litany he rehearses to illustrate it is enormously familiar by now as he proceeds to survey popular images from Kubrick’s HAL to Jonze’s Her and to document public deliberation about the significance of computation articulated through such imagery as the “rise of the machines” in the Terminator franchise or the need for Asimov’s famous fictional “Three Laws of Robotics.” It is easy — and may nonetheless be quite important — to agree with Bratton’s observation that our computational/media devices lack cruel intentions and are not susceptible to Asimovian consciences, and hence thinking about the threats and promises and meanings of these devices through such frames and figures is not particularly helpful to us even though we habitually recur to them by now. As I say, it would be easy and important to agree with such a claim, but Bratton’s proposal is in fact a somewhat different one:

    [A] mature A.I. is not necessarily a humanlike intelligence, or one that is at our disposal. If we look for A.I. in the wrong ways, it may emerge in forms that are needlessly difficult to recognize, amplifying its risks and retarding its benefits. This is not just a concern for the future. A.I. is already out of the lab and deep into the fabric of things. “Soft A.I.,” such as Apple’s Siri and Amazon recommendation engines, along with infrastructural A.I., such as high-speed algorithmic trading, smart vehicles and industrial robotics, are increasingly a part of everyday life.

    Here the serial failure of the program of artificial intelligence is redeemed simply by declaring victory. Bratton demonstrates that crying uncle does not preclude one from still crying wolf. It’s not that Siri is some sickly premonition of the AI-daydream still endlessly deferred, but that it represents the real rise of what robot cultist Hans Moravec once promised would be our “mind children” but here and now as elfin aliens with an intelligence unto themselves. It’s not that calling a dumb car a “smart” car is simply a hilarious bit of obvious marketing hyperbole, but represents the recognition of a new order of intelligent machines among us. Rather than criticize the way we may be “amplifying its risks and retarding its benefits” by reading computation through the inapt lens of intelligence at all, he proposes that we should resist holding machine intelligence to the standards that have hitherto defined it for fear of making its recognition “too difficult.”

    The kernel of legitimacy in Bratton’s inquiry is its recognition that “intelligence is notoriously difficult to define and human intelligence simply can’t exhaust the possibilities.” To deny these modest reminders is to indulge in what he calls “the pretentious folklore” of anthropocentrism. I agree that anthropocentrism in our attributions of intelligence has facilitated great violence and exploitation in the world, denying the dignity and standing of Cetaceans and Great Apes, but has also facilitated racist, sexist, xenophobic travesties by denigrating humans as beastly and unintelligent objects at the disposal of “intelligent” masters. “Some philosophers write about the possible ethical ‘rights’ of A.I. as sentient entities, but,” Bratton is quick to insist, “that’s not my point here.” Given his insistence that the “advent of robust inhuman A.I.” will force a “reality-based” “disenchantment” to “abolish the false centrality and absolute specialness of human thought and species-being” which he blames in his concluding paragraph for providing “theological and legislative comfort to chattel slavery” it is not entirely clear to me that emancipating artificial aliens is not finally among the stakes that move his argument whatever his protestations to the contrary. But one can forgive him for not dwelling on such concerns: the denial of an intelligence and sensitivity provoking responsiveness and demanding responsibilities in us all to women, people of color, foreigners, children, the different, the suffering, nonhuman animals compels defensive and evasive circumlocutions that are simply not needed to deny intelligence and standing to an abacus or a desk lamp. It is one thing to warn of the anthropocentric fallacy but another to indulge in the pathetic fallacy.

    Bratton insists to the contrary that his primary concern is that anthropocentrism skews our assessment of real risks and benefits. “Unfortunately, the popular conception of A.I., at least as depicted in countless movies, games and books, still seems to assume that humanlike characteristics (anger, jealousy, confusion, avarice, pride, desire, not to mention cold alienation) are the most important ones to be on the lookout for.” And of course he is right. The champions of AI have been more than complicit in this popular conception, eager to attract attention and funds for their project among technoscientific illiterates drawn to such dramatic narratives. But we are distracted from the real risks of computation so long as we expect risks to arise from a machinic malevolence that has never been on offer nor even in the offing. Writes Bratton: “Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant. Worse than being seen as an enemy is not being seen at all.”

    But surely the inevitable question posed by Bratton’s disenchanting exposé at this point should be: Why, once we have set aside the pretentious folklore of machines with diabolical malevolence, do we not set aside as no less pretentiously folkloric the attribution of diabolical indifference to machines? Why, once we have set aside the delusive confusion of machine behavior with (actual or eventual) human intelligence, do we not set aside as no less delusive the confusion of machine behavior with intelligence altogether? There is no question that were a gigantic bulldozer with an incapacitated driver to swerve from a construction site onto a crowded city thoroughfare this would represent a considerable threat, but however tempting it might be in the fraught moment or reflective aftermath poetically to invest that bulldozer with either agency or intellect it is clear that nothing would be gained in the practical comprehension of the threat it poses by so doing. It is no more helpful now in an epoch of Greenhouse storms than it was for pre-scientific storytellers to invest thunder and whirlwinds with intelligence. Although Bratton makes great play over the need to overcome folkloric anthropocentrism in our figuration of and deliberation over computation, mystifying agencies and mythical personages linger on in his accounting however much he insists on the alienness of “their” intelligence.

    Bratton warns us about the “infrastructural A.I.” of high-speed financial trading algorithms, Google and Amazon search algorithms, “smart” vehicles (and no doubt weaponized drones and autonomous weapons systems would count among these), and corporate-military profiling programs that oppress us with surveillance and harass us with targeted ads. I share all of these concerns, of course, but personally insist that our critical engagement with infrastructural coding is profoundly undermined when it is invested with insinuations of autonomous intelligence. In “The Work of Art in the Age of Mechanical Reproduction,” Walter Benjamin pointed out that when philosophers talk about the historical force of art they do so with the prejudices of philosophers: they tend to write about those narrative and visual forms of art that might seem argumentative in allegorical and iconic forms that appear analogous to the concentrated modes of thought demanded by philosophy itself. Benjamin proposed that perhaps the more diffuse and distracted ways we are shaped in our assumptions and aspirations by the durable affordances and constraints of the made world of architecture and agriculture might turn out to drive history as much or even more than the pet artforms of philosophers do. Lawrence Lessig made much the same point when he declared at the turn of the millennium that “Code Is Law.”

    It is well known that special interests with rich patrons shape the legislative process and sometimes even explicitly craft legislation word for word in ways that benefit them to the cost and risk of majorities. It is hard to see how our assessment of this ongoing crime and danger would be helped and not hindered by pretending legislation is an autonomous force exhibiting an alien intelligence, rather than a constellation of practices, norms, laws, institutions, ritual and material artifice, the legacy of the historical play of intelligent actors and the site for the ongoing contention of intelligent actors here and now. To figure legislation as a beast or alien with a will of its own would amount to a fetishistic displacement of intelligence away from the actual actors actually responsible for the forms that legislation actually takes. It is easy to see why such a displacement is attractive: it profitably abets the abuses of majorities by minorities while it absolves majorities from conscious complicity in the terms of their own exploitation by laws made, after all, in our names. But while these consoling fantasies have an obvious allure this hardly justifies our endorsement of them.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that the collapse of global finance in 2008 represented the working of inscrutable artificial intelligences facilitating rapid transactions and supporting novel financial instruments of what was called by Long Boom digerati the “new economy.” I wrote:

    It is not computers and programs and autonomous techno-agents who are the protagonists of the still unfolding crime of predatory plutocratic wealth-concentration and anti-democratizing austerity. The villains of this bloodsoaked epic are the bankers and auditors and captured-regulators and neoliberal ministers who employed these programs and instruments for parochial gain and who then exonerated and rationalized and still enable their crimes. Our financial markets are not so complex we no longer understand them. In fact everybody knows exactly what is going on. Everybody understands everything. Fraudsters [are] engaged in very conventional, very recognizable, very straightforward but unprecedentedly massive acts of fraud and theft under the cover of lies.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that our discomfiture in the setting of ubiquitous algorithmic mediation results from an autonomous force to which human intentions are secondary considerations. I wrote:

    [W]hat imaginary scene is being conjured up in this exculpatory rhetoric in which inadvertent cruelty is ‘coming from code’ as opposed to coming from actual persons? Aren’t coders actual persons, for example? … [O]f course I know what [is] mean[t by the insistence…] that none of this was ‘a deliberate assault.’ But it occurs to me that it requires the least imaginable measure of thought on the part of those actually responsible for this code to recognize that the cruelty of [one user’s] confrontation with their algorithm was the inevitable at least occasional result for no small number of the human beings who use Facebook and who live lives that attest to suffering, defeat, humiliation, and loss as well as to parties and promotions and vacations… What if the conspicuousness of [this] experience of algorithmic cruelty indicates less an exceptional circumstance than the clarifying exposure of a more general failure, a more ubiquitous cruelty? … We all joke about the ridiculous substitutions performed by autocorrect functions, or the laughable recommendations that follow from the odd purchase of a book from Amazon or an outing from Groupon. We should joke, but don’t, when people treat a word cloud as an analysis of a speech or an essay. We don’t joke so much when a credit score substitutes for the judgment whether a citizen deserves the chance to become a homeowner or start a small business, or when a Big Data profile substitutes for the judgment whether a citizen should become a heat signature for a drone committing extrajudicial murder in all of our names. [An] experience of algorithmic cruelty [may be] extraordinary, but that does not mean it cannot also be a window onto an experience of algorithmic cruelty that is ordinary. The question whether we might still ‘opt out’ from the ordinary cruelty of algorithmic mediation is not a design question at all, but an urgent political one.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that so-called Killer Robots are a threat that must be engaged by resisting or banning “them” in their alterity rather than by assigning moral and criminal responsibility on those who code, manufacture, fund, and deploy them. I wrote:

    Well-meaning opponents of war atrocities and engines of war would do well to think how tech companies stand to benefit from military contracts for ‘smarter’ software and bleeding-edge gizmos when terrorized and technoscientifically illiterate majorities and public officials take SillyCon Valley’s warnings seriously about our ‘complacency’ in the face of truly autonomous weapons and artificial super-intelligence that do not exist. It is crucial that necessary regulation and even banning of dangerous ‘autonomous weapons’ proceeds in a way that does not abet the mis-attribution of agency, and hence accountability, to devices. Every ‘autonomous’ weapons system expresses and mediates decisions by responsible humans usually all too eager to disavow the blood on their hands. Every legitimate fear of ‘killer robots’ is best addressed by making their coders, designers, manufacturers, officials, and operators accountable for criminal and unethical tools and uses of tools… There simply is no such thing as a smart bomb. Every bomb is stupid. There is no such thing as an autonomous weapon. Every weapon is deployed. The only killer robots that actually exist are human beings waging and profiting from war.

    “Arguably,” argues Bratton, “the Anthropocene itself is due less to technology run amok than to the humanist legacy that understands the world as having been given for our needs and created in our image. We hear this in the words of thought leaders who evangelize the superiority of a world where machines are subservient to the needs and wishes of humanity… This is the sentiment — this philosophy of technology exactly — that is the basic algorithm of the Anthropocenic predicament, and consenting to it would also foreclose adequate encounters with A.I.” The Anthropocene in this formulation names the emergence of environmental or planetary consciousness, an emergence sometimes coupled to the global circulation of the image of the fragility and interdependence of the whole earth as seen by humans from outer space. It is the recognition that the world in which we evolved to flourish might be impacted by our collective actions in ways that threaten us all. Notice, by the way, that multiculture and historical struggle are figured as just another “algorithm” here.

    I do not agree that planetary catastrophe inevitably followed from the conception of the earth as a gift bestowed on us to sustain us, indeed this premise understood in terms of stewardship or commonwealth would go far in correcting and preventing such careless destruction in my opinion. It is the false and facile (indeed infantile) conception of a finite world somehow equal to infinite human desires that has landed us and keeps us delusive ignoramuses lodged in this genocidal and suicidal predicament. Certainly I agree with Bratton that it would be wrong to attribute the waste and pollution and depletion of our common resources by extractive-industrial-consumer societies indifferent to ecosystemic limits to “technology run amok.” The problem of so saying is not that to do so disrespects “technology” — as presumably in his view no longer treating machines as properly “subservient to the needs and wishes of humanity” would more wholesomely respect “technology,” whatever that is supposed to mean — since of course technology does not exist in this general or abstract way to be respected or disrespected.

    The reality at hand is that humans are running amok in ways that are facilitated and mediated by certain technologies. What is demanded in this moment by our predicament is the clear-eyed assessment of the long-term costs, risks, and benefits of technoscientific interventions into finite ecosystems to the actual diversity of their stakeholders and the distribution of these costs, risks, and benefits in an equitable way. Quite a lot of unsustainable extractive and industrial production as well as mass consumption and waste would be rendered unprofitable and unappealing were its costs and risks widely recognized and equitably distributed. Such an understanding suggests that what is wanted is to insist on the culpability and situation of actually intelligent human actors, mediated and facilitated as they are in enormously complicated and demanding ways by technique and artifice. The last thing we need to do is invest technology-in-general or environmental-forces with alien intelligence or agency apart from ourselves.

    I am beginning to wonder whether the unavoidable and in many ways humbling recognition (unavoidable not least because of environmental catastrophe and global neoliberal precarization) that human agency emerges out of enormously complex and dynamic ensembles of interdependent/prostheticized actors gives rise to compensatory investments of some artifacts — especially digital networks, weapons of mass destruction, pandemic diseases, environmental forces — with the sovereign aspect of agency we no longer believe in for ourselves. It is strangely consoling to pretend our technologies in some fancied monolithic construal represent the rise of “alien intelligences,” even threatening ones, other than and apart from ourselves, not least because our own intelligence is an alienated one and prostheticized through and through. Consider the indispensability of pedagogical techniques of rote memorization, the metaphorization and narrativization of rhetoric in songs and stories and craft, the technique of the memory palace, the technologies of writing and reading, the articulation of metabolism and duration by timepieces, the shaping of both the body and its bearing by habit and by athletic training, the lifelong interplay of infrastructure and consciousness: all human intellect is already technique. All culture is prosthetic and all prostheses are culture.

    Bratton wants to narrate as a kind of progressive enlightenment the mystification he recommends that would invest computation with alien intelligence and agency while at once divesting intelligent human actors, coders, funders, users of computation of responsibility for the violations and abuses of other humans enabled and mediated by that computation. This investment with intelligence and divestment of responsibility he likens to the Copernican Revolution in which humans sustained the momentary humiliation of realizing that they were not the center of the universe but received in exchange the eventual compensation of incredible powers of prediction and control. One might wonder whether the exchange of the faith that humanity was the apple of God’s eye for a new technoscientific faith in which we aspired toward godlike powers ourselves was really so much a humiliation as the exchange of one megalomania for another. But what I want to recall by way of conclusion instead is that the trope of a Copernican humiliation of the intelligent human subject is already quite a familiar one:

    In his Introductory Lectures on Psychoanalysis Sigmund Freud notoriously proposed that

    In the course of centuries the naive self-love of men has had to submit to two major blows at the hands of science. The first was when they learnt that our earth was not the center of the universe but only a tiny fragment of a cosmic system of scarcely imaginable vastness. This is associated in our minds with the name of Copernicus… The second blow fell when biological research destroyed man’s supposedly privileged place in creation and proved his descent from the animal kingdom and his ineradicable animal nature. This revaluation has been accomplished in our own days by Darwin… though not without the most violent contemporary opposition. But human megalomania will have suffered its third and most wounding blow from the psychological research of the present time which seeks to prove to the ego that it is not even master in its own house, but must content itself with scanty information of what is going on unconsciously in the mind.

    However we may feel about psychoanalysis as a pseudo-scientific enterprise that did more therapeutic harm than good, Freud’s works considered instead as contributions to moral philosophy and cultural theory have few modern equals. The idea that human consciousness is split from the beginning as the very condition of its constitution, the creative if self-destructive result of an impulse of rational self-preservation beset by the overabundant irrationality of humanity and history, imposed a modesty incomparably more demanding than Bratton’s wan proposal in the same name. Indeed, to the extent that the irrational drives of the dynamic unconscious are often figured as a brute machinic automatism, one is tempted to suggest that Bratton’s modest proposal of alien artifactual intelligence is a fetishistic disavowal of the greater modesty demanded by the alienating recognition of the stratification of human intelligence by unconscious forces (and his moniker a symptomatic citation). What is striking about the language of psychoanalysis is the way it has been taken up to provide resources for imaginative empathy across the gulf of differences: whether in the extraordinary work of recent generations of feminist, queer, and postcolonial scholars re-orienting the project of the conspicuously sexist, heterosexist, cissexist, racist, imperialist, bourgeois thinker who was Freud to emancipatory ends, or in the stunning leaps in which Freud identified with neurotic others through psychoanalytic reading, going so far as to find in the paranoid system-building of the psychotic Dr. Schreber an exemplar of human science and civilization and a mirror in which he could see reflected both himself and psychoanalysis itself. Freud’s Copernican humiliation opened up new possibilities of responsiveness in difference out of which could be built urgently necessary responsibilities otherwise. 
    I worry that Bratton’s Copernican modesty opens up new occasions for techno-fetishistic fables of history and disavowals of responsibility for its actual human protagonists.
    _____

    Dale Carrico is a member of the visiting faculty at the San Francisco Art Institute as well as a lecturer in the Department of Rhetoric at the University of California at Berkeley from which he received his PhD in 2005. His work focuses on the politics of science and technology, especially peer-to-peer formations and global development discourse and is informed by a commitment to democratic socialism (or social democracy, if that freaks you out less), environmental justice critique, and queer theory. He is a persistent critic of futurological discourses, especially on his Amor Mundi blog, on which an earlier version of this post first appeared.

  • Frank Pasquale — To Replace or Respect: Futurology as if People Mattered

    a review of Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (W.W. Norton, 2014)

    by Frank Pasquale

    ~

    Business futurism is a grim discipline. Workers must either adapt to the new economic realities, or be replaced by software. There is a “race between education and technology,” as two of Harvard’s most liberal economists insist. Managers should replace labor with machines that require neither breaks nor sick leave. Superstar talents can win outsize rewards in the new digital economy, as they now enjoy global reach, but they will replace thousands or millions of also-rans. Whatever can be automated, will be, as competitive pressures make fairly paid labor a luxury.

    Thankfully, Erik Brynjolfsson and Andrew McAfee’s The Second Machine Age (2MA) downplays these zero-sum tropes. Brynjolfsson & McAfee (B&M) argue that the question of distribution of the gains from automation is just as important as the competitions for dominance it accelerates. 2MA invites readers to consider how societies will decide what type of bounty from automation they want, and what is wanted first. The standard, supposedly neutral economic response (“whatever the people demand, via consumer sovereignty”) is unconvincing. As inequality accelerates, the top 5% (of income earners) do 35% of the consumption. The top 1% is responsible for an even more disproportionate share of investment. Its richest members can just as easily decide to accelerate the automation of the wealth defense industry as they can allocate money to robotic construction, transportation, or mining.

    A humane agenda for automation would prioritize innovations that complement (jobs that ought to be) fulfilling vocations, and substitute machines for dangerous or degrading work. Robotic meat-cutters make sense; robot day care is something to be far more cautious about. Most importantly, retarding automation that controls, stigmatizes, and cheats innocent people, or sets up arms races with zero productive gains, should be a much bigger part of public discussions of the role of machines and software in ordering human affairs.

    2MA may set the stage for such a human-centered automation agenda. Its diagnosis of the problem of rapid automation (described in Part I below) is compelling. Its normative principles (II) are eclectic and often humane. But its policy vision (III) is not up to the challenge of channeling and sequencing automation. This review offers an alternative, while acknowledging the prescience and insight of B&M’s work.

    I. Automation’s Discontents

    For B&M, the acceleration of automation ranks with the development of agriculture, or the industrial revolution, as one of the “big stories” of human history (10-12). They offer an account of the “bounty and spread” to come from automation. “Bounty” refers to the increasing “volume, variety, and velocity” of any imaginable service or good, thanks to its digital reproduction or simulation (via, say, 3-D printing or robots). “Spread” is “ever-bigger differences among people in economic success” that they believe to be just as much an “economic consequence” of automation as bounty.[1]

    2MA briskly describes various human workers recently replaced by computers.  The poor souls who once penned corporate earnings reports for newspapers? Some are now replaced by Narrative Science, which seamlessly integrates new data into ready-made templates (35). Concierges should watch out for Siri (65). Forecasters of all kinds (weather, home sales, stock prices) are being shoved aside by the verdicts of “big data” (68). “Quirky,” a startup, raised $90 million by splitting the work of making products between a “crowd” that “votes on submissions, conducts research, suggest improvements, names and brands products, and drives sales” (87), and Quirky itself, which “handles engineering, manufacturing, and distribution.” 3D printing might even disintermediate firms like Quirky (36).

    In short, 2MA presents a kaleidoscope of automation realities and opportunities. B&M skillfully describe the many ways automation both increases the “size of the pie,” economically, and concentrates the resulting bounty among the talented, the lucky, and the ruthless. B&M emphasize that automation is creeping up the value chain, potentially substituting machines for workers paid better than the average.

    What’s missing from the book are the new wave of conflicts that would arise if those at the very top of the value chain (or, less charitably, the rent and tribute chain) were to be replaced by robots and algorithms. When BART workers went on strike, Silicon Valley worthies threatened to replace them with robots. But one could just as easily call for the venture capitalists to be replaced with algorithms. Indeed, one venture capital firm added an algorithm to its board in 2013.  Travis Kalanick, the CEO of Uber, responded to a question on driver wage demands by bringing up the prospect of robotic drivers. But given Uber’s multiple legal and PR fails in 2014, a robot probably would have done a better job running the company than Kalanick.

    That’s not “crazy talk” of communistic visions along the lines of Marx’s “expropriate the expropriators,” or Chile’s failed Cybersyn.[2]  Thiel Fellow and computer programming prodigy Vitalik Buterin has stated that automation of the top management functions at firms like Uber and AirBnB would be “trivially easy.”[3] Automating the automators may sound like a fantasy, but it is a natural outgrowth of mantras (e.g., “maximize shareholder value”) that are commonplaces among the corporate elite. To attract and retain the support of investors, a firm must obtain certain results, and the short-run paths to attaining them (such as cutting wages, or financial engineering) are increasingly narrow.  And in today’s investment environment of rampant short-termism, the short is often the only term there is.

    In the long run, a secure firm can tolerate experiments. Little wonder, then, that the largest firm at the cutting edge of automation—Google—has a secure near-monopoly in search advertising in numerous markets. As Peter Thiel points out in his recent Zero to One, today’s capitalism rewards the best monopolist, not the best competitor. Indeed, even the Department of Justice’s Antitrust Division appeared to agree with Thiel in its 1995 guidelines on antitrust enforcement in innovation markets. It viewed intellectual property as a good monopoly, the rightful reward to innovators for developing a uniquely effective process or product. And its partner in federal antitrust enforcement, the Federal Trade Commission, has been remarkably quiescent in response to emerging data monopolies.

    II. Propertizing Data

    For B&M, intellectual property—or, at least, the returns accruing to intellectual insight or labor—plays a critical role in legitimating inequalities arising out of advanced technologies.  They argue that “in the future, ideas will be the real scarce inputs in the world—scarcer than both labor and capital—and the few who provide good ideas will reap huge rewards.”[4] But many of the leading examples of profitable automation are not “ideas” per se, or even particularly ingenious algorithms. They are brute force feats of pattern recognition: for example, Google’s studying past patterns of clicks to determine which search results, and which ads, will delight and persuade each of its hundreds of millions of users. The critical advantage there is the data, not the skill in working with it.[5] Google will demur, but if it were really confident, it would license the data to other firms, trusting that others couldn’t best its algorithmic prowess.  It doesn’t, because the data is its critical, self-reinforcing advantage. It is a commonplace in the big data literature to say that the more data one has, the more valuable any piece of it becomes—something Googlers would agree with, as long as antitrust authorities aren’t within earshot.

    As sensors become more powerful and ubiquitous, feats of automated service provision and manufacture become more easily imaginable.  The Baxter robot, for example, merely needs to have a trainer show it how to move in order to ape the trainer’s own job. (One is reminded of the stories of US workers flying to India to train their replacements how to do their job, back in the day when outsourcing was the threat du jour to U.S. living standards.)

    How to train a Baxter robot. Image source: Inc. 

    From direct physical interaction with a robot, it is a short step to, say, holographic or data-driven programming.  For example, a surveillance camera trained on a worker could, after a period of days, months, or years, record every movement or statement of the worker, and replicate each in response to whatever stimuli prompted the originals.

    B&M appear to assume that such data will be owned by the corporations that monitor their own workers.  For example, McDonalds could train a camera on every cook and cashier, then download the contents into robotic replicas. But it’s just as easy to imagine a legal regime where the data describing workers’ movements would be their property, and firms would need to negotiate to purchase the rights to it.  If dance movements can be copyrighted, so too can the sweeps and wipes of a janitor. Consider, too, that the extraordinary advances in translation accomplished by programs like Google Translate are in part based on translations by humans of United Nations’ documents released into the public domain.[6] Had the translators’ work not been covered by “work-made-for-hire” or similar doctrines, they might well have kept their copyrights, and shared in the bounty now enjoyed by Google.[7]

    Of course, the creativity of translation may be greater than that displayed by a janitor or cashier. Copyright purists might thus reason that the merger doctrine denies copyrightability to the one best way (or small suite of ways) of doing something, since the idea of the movement and its expression cannot be separated. Grant that, and one could still imagine privacy laws giving workers the right to negotiate over how, and how pervasively, they are watched. There are myriad legal regimes governing, in minute detail, how information flows and who has control over it.

    I do not mean to appropriate here Jaron Lanier’s ideas about micropayments, promising as they may be in areas like music or journalism. A CEO could find some critical mass of stockers or cooks or cashiers to mimic even if those at 99% of stores demanded royalties for the work (of) being watched. But the flexibility of legal regimes of credit, control, and compensation is under-recognized. Living in a world where employers can simply record everything their employees do, or Google can simply copy every website that fails to adopt “robots.txt” protection, is not inevitable. Indeed, according to renowned intellectual property scholar Oren Bracha, Google had to “stand copyright on its head” to win that default.[8]
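    To make the default concrete: under the de facto Robots Exclusion Protocol, a site is presumed copyable unless it affirmatively opts out by publishing a file at its root. A minimal sketch of such a file—illustrative only, not any particular site’s policy—looks like this:

    ```
    # robots.txt, served at the site root
    # Opt out of Google's crawler specifically:
    User-agent: Googlebot
    Disallow: /

    # Opt out of all compliant crawlers:
    User-agent: *
    Disallow: /
    ```

    Silence, in other words, is treated as consent—precisely the inversion of copyright’s usual default that Bracha describes.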

    Thus B&M are wise to acknowledge the contestability of value in the contemporary economy.  For example, they build on the work of MIT economists Daron Acemoglu and David Autor to demonstrate that “skill biased technical change” is a misleading moniker for trends in wage levels.  The “tasks that machines can do better than humans” are not always “low-skill” ones (139). There is a fair amount of play in the joints in the sequencing of automation: sometimes highly skilled workers get replaced before those with a less complex and difficult-to-learn repertoire of abilities.  B&M also show that the bounty predictably achieved via automation could compensate the “losers” (of jobs or other functions in society) in the transition to a more fully computerized society. By seriously considering the possibility of a basic income (232), they evince a moral sensibility light years ahead of the “devil-take-the-hindmost” school of cyberlibertarianism.

    III. Proposals for Reform

    Unfortunately, some of B&M’s other ideas for addressing the possibility of mass unemployment in the wake of automation are less than convincing.  They praise platforms like Lyft for providing new opportunities for work (244), perhaps forgetting that, earlier in the book, they described the imminent arrival of the self-driving car (14-15). Of course, one can imagine decades of tiered driving, where the wealthy get self-driving cars first, and car-less masses turn to the scrambling drivers of Uber and Lyft to catch rides. But such a future seems more likely to end in a deflationary spiral than  sustainable growth and equitable distribution of purchasing power. Like the generation traumatized by the Great Depression, millions subjected to reverse auctions for their labor power, forced to price themselves ever lower to beat back the bids of the technologically unemployed, are not going to be in a mood to spend. Learned helplessness, retrenchment, and miserliness are just as likely a consequence as buoyant “re-skilling” and self-reinvention.

    Thus B&M’s optimism about what they call the “peer economy” of platform-arranged production is unconvincing.  A premier platform of digital labor matching—Amazon’s Mechanical Turk—has occasionally driven down the wage for “human intelligence tasks” to a penny each. Scholars like Trebor Scholz and Miriam Cherry have discussed the sociological and legal implications of platforms that try to disclaim all responsibility for labor law or other regulations. Lilly Irani’s important review of 2MA shows just how corrosive platform capitalism has become. “With workers hidden in the technology, programmers can treat [them] like bits of code and continue to think of themselves as builders, not managers,” she observes in a cutting aside on the self-image of many “maker” enthusiasts.

    The “sharing economy” is a glidepath to precarity, accelerating for labor in general the same fate that “music sharing services” sealed for most musicians. The lived experience of many “TaskRabbits,” which B&M boast about using to make charts for their book, cautions against reliance on disintermediation as a key to opportunity in the new digital economy. Sarah Kessler describes making $1.94 an hour labeling images for a researcher who put the task up for bid on MTurk.  The median active TaskRabbit in her neighborhood made $120 a week; Kessler cleared $11 an hour on her best day.

    Resistance is building, and may create fairer terms online.  For example, Irani has helped develop a “Turkopticon” to help Turkers rate and rank employers on the site. Both Scholz and Mike Konczal have proposed worker cooperatives as feasible alternatives to Uber, offering drivers both a fairer share of revenues, and more say in their conditions of work. But for now, the peer economy, as organized by Silicon Valley and start-ups, is not an encouraging alternative to traditional employment. It may, in fact, be worse.

    Therefore, I hope B&M are serious when they say “Wild Ideas [are] Welcomed” (245), and mention the following:

    • Provide vouchers for basic necessities. . . .
    • Create a national mutual fund distributing the ownership of capital widely and perhaps inalienably, providing a dividend stream to all citizens and assuring the capital returns do not become too highly concentrated.
    • Depression-era Civilian Conservation Corps to clean up the environment, build infrastructure.

    Speaking of the non-automatable, we could add the Works Progress Administration (WPA) to the CCC suggestion above.  Revalue the arts properly, and the transition may even add to GDP.

    Moses Soyer, “Artists on WPA” (1935). Image source: Smithsonian American Art Museum

    Unfortunately, B&M distance themselves from the ideas, saying, “we include them not necessarily to endorse them, but instead to spur further thinking about what kinds of interventions will be necessary as machines continue to race ahead” (246).  That is problematic, on at least two levels.

    First, a sophisticated discussion of capital should be at the core of an account of automation, not its periphery. The authors are right to call for greater investment in education, infrastructure, and basic services, but they need a fuller account of how that is to be arranged in an era when capital is extraordinarily concentrated, its owners have power over the political process, and most show little to no interest in long-term investment in the skills and abilities of the 99%. Even the purchasing power of the vast majority of consumers is of little import to those who can live off lightly taxed capital gains.

    Second, assuming that “machines continue to race ahead” is a dodge, a refusal to name the responsible parties running the machines.  Someone is designing and purchasing algorithms and robots. Illah Reza Nourbakhsh’s Robot Futures suggests another metaphor:

    Today most nonspecialists have little say in charting the role that robots will play in our lives.  We are simply watching a new version of Star Wars scripted by research and business interests in real time, except that this script will become our actual world. . . . Familiar devices will become more aware, more interactive and more proactive; and entirely new robot creatures will share our spaces, public and private, physical and digital. . . . Eventually, we will need to read what they write, we will have to interact with them to conduct our business transactions, and we will often mediate our friendships through them.  We will even compete with them in sports, at jobs, and in business. [9]

    Nourbakhsh nudges us closer to the truth, focusing on the competitive angle. But the “we” he describes is also inaccurate. There is a group that will never have to “compete” with robots at jobs or in business—rentiers. Too many of them are narrowly focused on how quickly they can replace needy workers with undemanding machines.

    For the rest of us, another question concerning automation is more appropriate: how much can we be stuck with? A black-card-toting bigshot will get the white glove treatment from AmEx; the rest are shunted into automated phone trees. An algorithm determines the shifts of retail and restaurant workers, oblivious to their needs for rest, a living wage, or time with their families.  Automated security guards, police, and prison guards are on the horizon. And for many of the “expelled,” the homines sacri, automation is a matter of life and death: drone technology can keep small planes on their tracks for hours, days, months—as long as it takes to execute orders.

    B&M focus on “brilliant technologies,” rather than the brutal or bumbling instances of automation.  It is fun to imagine a souped-up Roomba making the drudgery of housecleaning a thing of the past.  But domestic robots have been around since 2000, and the median wage-earner in the U.S. does not appear to be on a fast track to a Jetsons-style life of ease.[10] They are just as likely to be targeted by the algorithms of the everyday as they are to be helped by them. Mysterious scoring systems routinely stigmatize persons, without their even knowing. They reflect the dark side of automation—and we are in the dark about them, given the protections that trade secrecy law affords their developers.

    IV. Conclusion

    Debates about robots and the workers “struggling to keep up” with them are becoming stereotyped and stale. There is the standard economic narrative of “skill-biased technical change,” which acts more as a tautological, post hoc, retrodictive, just-so story than a coherent explanation of how wages are actually shifting. There is cyberlibertarian cornucopianism, as Google’s Ray Kurzweil and Eric Schmidt promise there is nothing to fear from an automated future. There is dystopianism, whether intended as a self-preventing prophecy, or entertainment. Each side tends to talk past the other, taking for granted assumptions and values that its putative interlocutors reject out of hand.

    Set amidst this grim field, 2MA is a clear advance. B&M are attuned to possibilities for the near and far future, and write about each in accessible and insightful ways.  The authors of The Second Machine Age claim even more for it, billing it as a guide to epochal change in our economy. But it is better understood as the kind of “big idea” book that can name a social problem, underscore its magnitude, and still dodge the elaboration of solutions controversial enough to scare off celebrity blurbers.

    One of 2MA’s blurbers, Clayton Christensen, offers a backhanded compliment that exposes the core weakness of the book. “[L]earners and teachers alike are in a perpetual mode of catching up with what is possible. [The Second Machine Age] frames a future that is genuinely exciting!” gushes Christensen, eager to fold automation into his grand theory of disruption. Such a future may be exciting for someone like Christensen, a millionaire many times over who won’t lack for food, medical care, or housing if his forays fail. But most people do not want to be in “perpetually catching up” mode. They want secure and stable employment, a roof over their heads, decent health care and schooling, and some other accoutrements of middle class life. Meaning is found outside the economic sphere.

    Automation could help stabilize and cheapen the supply of necessities, giving more persons the time and space to enjoy pursuits of their own choosing. Or it could accelerate arms races of various kinds: for money, political power, armaments, spying, stock trading. As long as purchasing power alone—whether of persons or corporations—drives the scope and pace of automation, there is little hope that the “brilliant technologies” B&M describe will reliably lighten burdens that the average person experiences. They may just as easily entrench already great divides.

    All too often, the automation literature is focused on replacing humans, rather than respecting their hopes, duties, and aspirations. A central task of educators, managers, and business leaders should be finding ways to complement a workforce’s existing skills, rather than sweeping that workforce aside. That does not simply mean creating workers with skill sets that better “plug into” the needs of machines, but also, doing the opposite: creating machines that better enhance and respect the abilities and needs of workers.  That would be a “machine age” welcoming for all, rather than one calibrated to reflect and extend the power of machine owners.

    _____

    Frank Pasquale (@FrankPasquale) is a Professor of Law at the University of Maryland Carey School of Law. His recent book, The Black Box Society: The Secret Algorithms that Control Money and Information (Harvard University Press, 2015), develops a social theory of reputation, search, and finance.  He blogs regularly at Concurring Opinions. He has received a commission from Triple Canopy to write and present on the political economy of automation. He is a member of the Council for Big Data, Ethics, and Society, and an Affiliate Fellow of Yale Law School’s Information Society Project. He is a frequent contributor to The b2 Review Digital Studies section.

    _____

    [1] One can quibble with the idea of automation as necessarily entailing “bounty”—as Yves Smith has repeatedly demonstrated, computer systems can just as easily “crapify” a process once managed well by humans. Nor is “spread” a necessary consequence of automation; well-distributed tools could well counteract it. It is merely a predictable consequence, given current finance and business norms and laws.

    [2] For a definition of “crazy talk,” see Neil Postman, Stupid Talk, Crazy Talk: How We Defeat Ourselves by the Way We Talk and What to Do About It (Delacorte, 1976). For Postman, “stupid talk” can be corrected via facts, whereas “crazy talk” “establishes different purposes and functions than the ones we normally expect.” If we accept the premise of labor as a cost to be minimized, what better to cut than the compensation of the highest paid persons?

    [3] Conversation with Sam Frank at the Swiss Institute, Dec. 16, 2014, sponsored by Triple Canopy.

    [4] In Brynjolfsson, McAfee, and Michael Spence, “New World Order: Labor, Capital, and Ideas in the Power Law Economy,” an article promoting the book. Unfortunately, as with most statements in this vein, B&M&S give us little idea how to identify a “good idea” other than one that “reap[s] huge rewards”—a tautology all too common in economic and business writing.

    [5] Frank Pasquale, The Black Box Society (Harvard University Press, 2015).

    [6] Programs, both in the sense of particular software regimes, and the program of human and technical efforts to collect and analyze the translations that were the critical data enabling the writing of the software programs behind Google Translate.

    [9] Illah Reza Nourbakhsh, Robot Futures (MIT Press, 2013), pp. xix-xx.

    [10] Erwin Prassler and Kazuhiro Kosuge, “Domestic Robotics,” in Bruno Siciliano and Oussama Khatib, eds., Springer Handbook of Robotics (Springer, 2008), p. 1258.