boundary 2

  • Richard Hill — The Curse of Concentration (Review of Cory Doctorow, How to Destroy Surveillance Capitalism)

    a review of Cory Doctorow, How to Destroy Surveillance Capitalism (OneZero, 2021)

    by Richard Hill

    ~

    This short online (free access) book provides a highly readable, inspiring, and powerful complement to Shoshana Zuboff’s The Age of Surveillance Capitalism (which the author qualifies and to some extent criticizes) and Timothy Wu’s The Curse of Bigness. It could be sub-titled (paraphrasing Maistre) “every nation gets the economic system it deserves,” in this case a symbiosis of corporate surveillance and state surveillance, in an economy dominated by, and potentially controlled by, a handful of companies. As documented elsewhere, that symbiosis is not an accident or coincidence. As the author puts the matter: “We need to take down Big Tech, and to do that, we need to start by correctly identifying the problem.”

    What follows is my analysis of the ideas of the book: it does not follow the order in which the ideas are presented in the book. In a nutshell, the author describes the source of the problem: an advertising-based revenue model that requires ever-increasing amounts of data, and thus ever-increasing concentration, coupled with weak antitrust enforcement, and, worse, government actions that deliberately or inadvertently favor the power of dominant companies. The author describes (as have others) the negative effects this has had for privacy (which, as the author says, “is necessary for human progress”) and democracy; and proposes some solutions: strong antitrust, but also a relatively new idea – imposed interoperability. I will summarize these themes in the order given above.

    However, I will first summarize four important observations that underpin the issues outlined above. The first is that the Internet (and information and communications technologies (ICT) in general) is everything. As the author puts it: “The upshot of this is that our best hope of solving the big coordination problems — climate change, inequality, etc. — is with free, fair, and open tech.”

    The second is that data and information are increasingly important (see for example the Annex of this submission) and don’t fit well into existing private property regimes (see also here and here), in particular because of the way information is currently treated: “Big Tech has a funny relationship with information. When you’re generating information — anything from the location data streaming off your mobile device to the private messages you send to friends on a social network — it claims the rights to make unlimited use of that data. But when you have the audacity to turn the tables — to use a tool that blocks ads or slurps your waiting updates out of a social network and puts them in another app that lets you set your own priorities and suggestions or crawls their system to allow you to start a rival business — they claim that you’re stealing from them.”

    The third is that the time has come to reject the notion that ICTs, the Internet, and the companies that dominate those industries (“Big Tech”) are somehow different from everything else and should be treated differently: “I think tech is just another industry, albeit one that grew up in the absence of real monopoly constraints. It may have been first, but it isn’t the worst nor will it be the last.”

    The fourth is that network effects favor concentration: “A decentralization movement has tried to erode the dominance of Facebook and other Big Tech companies by fielding ‘indieweb’ alternatives – Mastodon as a Twitter alternative, Diaspora as a Facebook alternative, etc. – but these efforts have failed to attain any kind of liftoff. Fundamentally, each of these services is hamstrung by the same problem: every potential user for a Facebook or Twitter alternative has to convince all their friends to follow them to a decentralized web alternative in order to continue to realize the benefit of social media. For many of us, the only reason to have a Facebook account is that our friends have Facebook accounts, and the reason they have Facebook accounts is that we have Facebook accounts.”

    Turning to the main ideas of the book, the first is that the current business model is based on advertising: “ad-driven Big Tech’s customers are advertisers, and what companies like Google and Facebook sell is their ability to convince you to buy stuff. Big Tech’s product is persuasion. The services — social media, search engines, maps, messaging, and more — are delivery systems for persuasion. Rather than finding ways to bypass our rational faculties, surveillance capitalists like Mark Zuckerberg mostly do one or more of three things: segment the market, attempt to deceive it, and exploit dominant positions.”

    Regarding segmentation, the author states: “Facebook is tops for segmenting.” However, despite the fine targeting, its ads don’t always work: “The solution to Facebook’s ads only working one in a thousand times is for the company to try to increase how much time you spend on Facebook by a factor of a thousand. Rather than thinking of Facebook as a company that has figured out how to show you exactly the right ad in exactly the right way to get you to do what its advertisers want, think of it as a company that has figured out how to make you slog through an endless torrent of arguments even though they make you miserable, spending so much time on the site that it eventually shows you at least one ad that you respond to.”

    Thus it practices a form of deception: “So Facebook has to gin up traffic by sidetracking its own forums: every time Facebook’s algorithm injects controversial materials – inflammatory political articles, conspiracy theories, outrage stories – into a group, it can hijack that group’s nominal purpose with its desultory discussions and supercharge those discussions by turning them into bitter, unproductive arguments that drag on and on. Facebook is optimized for engagement, not happiness, and it turns out that automated systems are pretty good at figuring out things that people will get angry about.”

    The author describes how the current level of concentration is due not only to network effects and market forces but also to “tactics that would have been prohibited under classical, pre-Ronald-Reagan antitrust enforcement standards.”

    This is compounded by the current copyright regime: “If our concern is that markets cease to function when consumers can no longer make choices, then copyright locks should concern us at least as much as influence campaigns. An influence campaign might nudge you to buy a certain brand of phone; but the copyright locks on that phone absolutely determine where you get it serviced, which apps can run on it, and when you have to throw it away rather than fixing it. Copyright locks are a double whammy: they create bad security decisions that can’t be freely investigated or discussed.”

    And it is due to inadequate government intervention: “Only the most extreme market ideologues think that markets can self-regulate without state oversight. Markets need watchdogs – regulators, lawmakers, and other elements of democratic control – to keep them honest. When these watchdogs sleep on the job, then markets cease to aggregate consumer choices because those choices are constrained by illegitimate and deceptive activities that companies are able to get away with because no one is holding them to account. Many of the harms of surveillance capitalism are the result of weak or nonexistent regulation. Those regulatory vacuums spring from the power of monopolists to resist stronger regulation and to tailor what regulation exists to permit their existing businesses.”

    For example, as the author documents, the penalties for leaking data are negligible, and “even the most ambitious privacy rules, such as the EU General Data Protection Regulation, fall far short of capturing the negative externalities of the platforms’ negligent over-collection and over-retention, and what penalties they do provide are not aggressively pursued by regulators.”

    Yet we know that data will leak and can be used for identity theft with major consequences: “For example, attackers can use leaked username and password combinations to hijack whole fleets of commercial vehicles that have been fitted with anti-theft GPS trackers and immobilizers or to hijack baby monitors in order to terrorize toddlers with the audio tracks from pornography. Attackers use leaked data to trick phone companies into giving them your phone number, then they intercept SMS-based two-factor authentication codes in order to take over your email, bank account, and/or cryptocurrency wallets.”

    But we should know what to do: “Antitrust is a market society’s steering wheel, the control of first resort to keep would-be masters of the universe in their lanes. But Bork and his cohort ripped out our steering wheel 40 years ago. The car is still barreling along, and so we’re yanking as hard as we can on all the other controls in the car as well as desperately flapping the doors and rolling the windows up and down in the hopes that one of these other controls can be repurposed to let us choose where we’re heading before we careen off a cliff. It’s like a 1960s science-fiction plot come to life: people stuck in a ‘generation ship,’ plying its way across the stars, a ship once piloted by their ancestors; and now, after a great cataclysm, the ship’s crew have forgotten that they’re in a ship at all and no longer remember where the control room is. Adrift, the ship is racing toward its extinction, and unless we can seize the controls and execute emergency course correction, we’re all headed for a fiery death in the heart of a sun.”

    We know why nobody is in the control room: “The reason the world’s governments have been slow to create meaningful penalties for privacy breaches is that Big Tech’s concentration produces huge profits that can be used to lobby against those penalties – and Big Tech’s concentration means that the companies involved are able to arrive at a unified negotiating position that supercharges the lobbying.” Regarding lobbying, see for example here and here.

    But it’s worse than lack of control: not only have governments failed to enforce antitrust laws, they have actively favored mass collection of data, for their own purposes: “Any hard limits on surveillance capitalism would hamstring the state’s own surveillance capability. … At least some of the states’ unwillingness to take meaningful action to curb surveillance should be attributed to this symbiotic relationship. There is no mass state surveillance without mass commercial surveillance. … Monopolism is key to the project of mass state surveillance. … A concentrated tech sector that works with authorities is a much more powerful ally in the project of mass state surveillance than a fragmented one composed of smaller actors.” The author documents how this is the case for Amazon’s Ring.

    As the author says: “This mass surveillance project has been largely useless for fighting terrorism: the NSA can only point to a single minor success story in which it used its data collection program to foil an attempt by a U.S. resident to wire a few thousand dollars to an overseas terror group. It’s ineffective for much the same reason that commercial surveillance projects are largely ineffective at targeting advertising: The people who want to commit acts of terror, like people who want to buy a refrigerator, are extremely rare. If you’re trying to detect a phenomenon whose base rate is one in a million with an instrument whose accuracy is only 99%, then every true positive will come at the cost of 9,999 false positives.”
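
    The arithmetic in that last sentence is worth working through once, since the argument about mass surveillance’s ineffectiveness rests on it. A minimal sketch in Python, using only the book’s hypothetical figures (the million-person screening population is an illustrative assumption chosen to yield one expected real case, not a number from the book):

        # Base-rate arithmetic from the passage above, using the book's
        # hypothetical figures: a one-in-a-million phenomenon and an
        # instrument that is wrong 1% of the time ("99% accurate").
        population = 1_000_000        # people screened (illustrative assumption)
        true_cases = 1                # base rate: one in a million
        false_positive_rate = 0.01    # a 99%-accurate detector errs on 1%

        false_positives = (population - true_cases) * false_positive_rate
        print(round(false_positives))
        # -> 10000: roughly the book's 9,999 false alarms for every real case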

    And the story gets worse and worse: “In the absence of a competitive market, lawmakers have resorted to assigning expensive, state-like duties to Big Tech firms, such as automatically filtering user contributions for copyright infringement or terrorist and extremist content or detecting and preventing harassment in real time or controlling access to sexual material. These measures put a floor under how small we can make Big Tech because only the very largest companies can afford the humans and automated filters needed to perform these duties. But that’s not the only way in which making platforms responsible for policing their users undermines competition. A platform that is expected to police its users’ conduct must prevent many vital adversarial interoperability techniques lest these subvert its policing measures.”

    So we get into a vicious circle: “To the extent that we are willing to let Big Tech police itself – rather than making Big Tech small enough that users can leave bad platforms for better ones and small enough that a regulation that simply puts a platform out of business will not destroy billions of users’ access to their communities and data – we build the case that Big Tech should be able to block its competitors and make it easier for Big Tech to demand legal enforcement tools to ban and punish attempts at adversarial interoperability.”

    And into a long-term conundrum: “Much of what we’re doing to tame Big Tech instead of breaking up the big companies also forecloses on the possibility of breaking them up later. Yet governments confronting all of these problems all inevitably converge on the same solution: deputize the Big Tech giants to police their users and render them liable for their users’ bad actions. The drive to force Big Tech to use automated filters to block everything from copyright infringement to sex-trafficking to violent extremism means that tech companies will have to allocate hundreds of millions to run these compliance systems.” Such rules “are not just death warrants for small, upstart competitors that might challenge Big Tech’s dominance but who lack the deep pockets of established incumbents to pay for all these automated systems. Worse still, these rules put a floor under how small we can hope to make Big Tech.”

    The author documents how the curse of concentration is not restricted to ICTs and the Internet. For example: “the degradation of news products long precedes the advent of ad-supported online news. Long before newspapers were online, lax antitrust enforcement had opened the door for unprecedented waves of consolidation and roll-ups in newsrooms.” However, as others have documented in detail, the current Internet advertising model has weakened conventional media, with negative effects for democracy.

    Given the author’s focus on weak antitrust enforcement as the root of the problems, it’s not surprising that he sees antitrust as a solution: “Today, we’re at a crossroads where we’re trying to figure out if we want to fix the Big Tech companies that dominate our internet or if we want to fix the internet itself by unshackling it from Big Tech’s stranglehold. We can’t do both, so we have to choose. If we’re going to break Big Tech’s death grip on our digital lives, we’re going to have to fight monopolies. I believe we are on the verge of a new ‘ecology’ moment dedicated to combating monopolies. After all, tech isn’t the only concentrated industry nor is it even the most concentrated of industries. You can find partisans for trustbusting in every sector of the economy. … First we take Facebook, then we take AT&T/WarnerMedia.”

    It may be hard to break up big tech, but it’s worth starting to work on it: “Getting people to care about monopolies will take technological interventions that help them to see what a world free from Big Tech might look like.”

    In particular, the author stresses a relatively new idea: adversarial compatibility, that is, forced interoperability: “adversarial compatibility reverses the competitive advantage: If you were allowed to compete with Facebook by providing a tool that imported all your users’ waiting Facebook messages into an environment that competed on lines that Facebook couldn’t cross, like eliminating surveillance and ads, then Facebook would be at a huge disadvantage. It would have assembled all possible ex-Facebook users into a single, easy-to-find service; it would have educated them on how a Facebook-like service worked and what its potential benefits were; and it would have provided an easy means for disgruntled Facebook users to tell their friends where they might expect better treatment. Adversarial interoperability was once the norm and a key contributor to the dynamic, vibrant tech scene, but now it is stuck behind a thicket of laws and regulations that add legal risks to the tried-and-true tactics of adversarial interoperability. New rules and new interpretations of existing rules mean that a would-be adversarial interoperator needs to steer clear of claims under copyright, terms of service, trade secrecy, tortious interference, and patent.”

    In conclusion: “Ultimately, we can try to fix Big Tech by making it responsible for bad acts by its users, or we can try to fix the internet by cutting Big Tech down to size. But we can’t do both. To replace today’s giant products with pluralistic protocols, we need to clear the legal thicket that prevents adversarial interoperability so that tomorrow’s nimble, personal, small-scale products can federate themselves with giants like Facebook, allowing the users who’ve left to continue to communicate with users who haven’t left yet, reaching tendrils over Facebook’s garden wall that Facebook’s trapped users can use to scale the walls and escape to the global, open web.”

    In this context, it is important to stress the counter-productive effects of e-commerce proposals being negotiated, in secret, in trade negotiations (see also here and here). The author does not mention them, perhaps because they are sufficiently secret that he is not aware of them.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2o Review Digital Studies magazine.

  • Sareeta Amrute — Sounding the Flat Alarm (Review of Shoshana Zuboff, The Age of Surveillance Capitalism)

    a review of Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019)

    by Sareeta Amrute

    Shoshana Zuboff’s The Age of Surveillance Capitalism begins badly: the author’s house burns down. Her home is struck by lightning, and it takes Zuboff a few minutes to realize the enormity of the conflagration happening all around her and to escape. The book, written after the fire goes out, is a warning about the enormity of the changes kindled while we slept. Zuboff describes a world in which autonomy, agency, and privacy–the walls of her house–are under threat from a corporate apparatus that records everything in order to control behavior. That act of monitoring and recording inaugurates a new era in the development of capitalism, one that Zuboff believes is destructive of both individual liberty and democratic institutions.

    Surveillance Capitalism is the alarm to all of us to get out of the house, lest it burn down all around us. In making this warning, however, Zuboff discounts the long history of surveillance outside the middle-class enclaves of Europe and the United States and assumes that protecting the privacy of individuals in that same location will solve the problem of surveillance for the Rest.

    The house functions as a metaphor throughout the book, first as a warning about how difficult it is to recognize a radical remaking of our world as it is happening: this change is akin to a lightning strike. The second is as an indicator of the kind of world we inhabit: a world that could be life-enhancing but instead treats life as a resource to be extracted. The third uses the idea of the house as protection to solve the other two problems.

    Zuboff contrasts an early moment of the digitally connected world, an internet of things that was on a closed circuit within one house, with the current moment, where the same devices are wired to the companies that make them. For Zuboff, that difference demonstrates the exponential changes that happened between the early promise of the internet and its current malformation. Surveillance Capitalism argues that from the connective potential of the early Internet has come the current dystopian state of affairs, where human behavior is monitored by companies in order to nudge that behavior toward predetermined ends. In this way, Surveillance Capitalism reverses an earlier moment of connectivity boosterism, exemplified by the title of Thomas Friedman’s popular 2005 book, The World is Flat, which celebrated technologically produced globalization.[1] The years from the mid- to late 2000s witnessed a significant critique of the flat-world hypothesis, which could be summed up as an argument for both the vast unevenness of the world and for the continuous remaking of global tropes into local and varied meanings. Yet here we are again, it seems, in 2020, except instead of celebrating flatness, we are sounding the flat alarm.

    The book’s very dimensions–it is a doorstop, on purpose–act as an inoculation against the thinness and flatness Zuboff diagnoses as predominant features of our world. Zuboff argues that these features are unprecedented, that they mark an extreme deviation from capitalism as it has been. They therefore require both a new name and new analytic tools. The name “surveillance capitalism” describes information-gathering enterprises that are unprecedented in human history, and that information, Zuboff writes, is used to predict “our futures for the sake of others’ gain, not ours” (11). As tech companies increasingly use our data to steer behavior towards products and advertising, our ability to experience a deep interiority where we can exercise autonomous choice shrinks. Importantly for Zuboff, these companies collect not just data willingly given, but the data exhaust that we often unknowingly and unintentionally emit as we move through a world mediated by our devices. Behavioral nudges mark for Zuboff the ultimate endpoint for a capitalism gone awry, a capitalism that drives humans to abandon free will in favor of being governed by corporations that use aggregate data about individual interactions to determine future human action.

    Zuboff’s flat alarm usefully takes the reader through the philosophical underpinnings of behaviorism, following the work of B.F. Skinner, a psychologist working at Harvard in the mid-twentieth century who believed adjusting human behavior was a matter of changing external environments through positive and negative stimuli, or reinforcements. Zuboff argues that behaviorist attitudes toward the world, considered outré in their time, have moved to the heart of Silicon Valley philosophies of disruption, where they meet a particular mode of capital accumulation driven by the logics of venture, neutrality, and macho meritocracies. The result is an ideology of tools and of making humans into tools, which Zuboff terms instrumentarianism, at once driven to produce companies that are profitable for venture capitalists and investors and to treat human beings as sources of data to be turned toward profitability. Widespread surveillance is a necessary feature of this new world order because it is through the observation of every detail of human life that these companies can amass the data they need to turn a profit by predicting and ultimately controlling, or tuning, human behavior.

    Zuboff identifies key figures in the development of surveillance capitalism, including the aforementioned Skinner. Her particular mode of critique tends to focus on CEOs, and Zuboff reads their pronouncements as signs of the legacy of behaviorism in the C-suites of contemporary firms. Zuboff also spends several chapters situating the critics of these surveillance capitalists as those who need to raise the flat-world alarm. She compares this need to both her personal experience with the house fire and the experience of thinkers such as Hannah Arendt writing on totalitarianism. Here, she draws an explicit critique that conjoins totalitarianism and surveillance capital. Zuboff argues that just as totalitarianism was unthinkable as it was unfolding, so too does surveillance capitalism seem an impossible future given how we like to think about human behavior and its governance. Zuboff’s argument here is highly persuasive, since she is suggesting that the critics will always come to realize what it is they are critiquing just before it is too late to do anything about it. She also argues that behaviorism is in some sense the inverse of state-governed totalitarianism: while totalitarianism attempted to discipline humans from the inside out, surveillance capitalism is agnostic when it comes to interiority–it only deals in and tries to engineer surface effects. For all this ‘neutrality’ over and against belief, it is equally oppressive, because it aims at social domination.

    Previous reviews have provided an overview of the chapters in this book; I will not repeat the exercise, except to say that the introduction nicely lays out her overall argument and could be used effectively to broach the topic of surveillance for many audiences. The chapters outlining B.F. Skinner’s imprint on behaviorist ideologies are also useful for providing historical context to the current age, as is the general story of Google’s turn toward profitability as told in Part I. And yet the promise of these earlier chapters–particularly the nice turn of phrase, the “behavioral means of production”–yields in the latter chapters to an impoverished account of our options and of the contradictions at work within tech companies. These lacunae are due at least in part to Zuboff’s choice of revolutionary subject–the middle-class consumer.

    Toward the end of Surveillance Capitalism, Zuboff rebuilds her house, this time with thicker walls. She uses her house’s regeneration to argue for a philosophical concept she calls the “right to sanctuary,” based largely on the writings of Gaston Bachelard, whose Poetics of Space describes for Zuboff how the shelter of home shapes “many of our most fundamental ways of making sense of experience” (477). Zuboff believes that surveillance capitalists want to bring down all these walls, for the sake of opening up our every action to collection and our every impulse to guidance from above. One might pause here and wonder whether the breaking down of walls is not fundamental to capitalism from the beginning, rather than an aberration of the current age. In other words, does the age of surveillance mark such a radical break from the general thrust of capital’s need to open up new markets and exploit new raw materials? Or, more to the point, for whom does it signify a radical aberration? Posing this question would bring into focus the need to interrogate the complicity of the very categories of autonomy, agency, and privacy in the extension of capitalism across geographies, and to historicize the production of interiority within that same frame.

    Against the contemporary tendency toward effacing the interior life of families and individuals, Zuboff offers sanctuary as the right to protection from surveillance. In this moment, that protection needs thick walls. For Zuboff, those walls need to be built by young people–one gets the sense that she is speaking across these sections to her own children and those of her children’s generation. The problem with describing sanctuary in this way is that it narrows the scope for both understanding the stakes of surveillance and recognizing where the battles for control over data will be fought.

    As a broadside, Surveillance Capitalism works through a combination of rhetoric and evidence. Zuboff hopes that a younger generation will fight the watchers for control over their own data. Yet, by addressing largely a well-off, college-educated, and young audience, Zuboff restricts the people who are being asked to take up the cause, and fails to ask the difficult question of what it would take to build a house with thicker walls for everyone.

    A persistent concern while reading this book is whether its analysis can encompass otherwheres. The populations that are most at risk under surveillance capitalism include immigrants, minorities, and workers, both within and outside the United States. The framework of data exhaust and its use to predict and govern behavior does not quite illuminate the uses of data collection to track border crossers, “predict” crime, and monitor worker movements inside warehouses. These relationships require an analysis that can get at the overlap between corporate and government surveillance, which Surveillance Capitalism studiously avoids. The book begins with an analysis of a system of exploitation based on turning data into profits, and argues that the new mode of production makes the motor of capitalism shift from products to information, a point well established by previous literature. Given this analysis, it is astonishing that the last section of the book returns to a defense of individual rights, without stopping to question whether the ‘hive’ forms of organization that Zuboff finds in the logics of surveillance capital may have been a cooptation of radical kinds of social organizing arranged against a different model of exploitation. Leaderless movements like Occupy should be considered fully when describing hives, along with contemporary initiatives like tech worker cooperatives and technical alternatives like local mesh networks. The possibility that these radical forms of social organization may be subject to cooptation by the actors Zuboff describes never appears in the book. Instead, Zuboff appears to mistranslate theories of the subject that locate agency above or below the level of the individual into political acquiescence to a program of total social control. Without taking the step of considering the political potential in ‘hive-like’ social organization, Zuboff’s corrective falls back on notions of individual rights and protections and is unable to imagine a new kind of collective action that moves beyond both individualism and behaviorism. This failure, for instance, skews Zuboff’s arguments toward the familiar ground of data protection as a solution rather than toward the more radical stances of refusal, which question data collection in the first place.

    Zuboff’s world is flat. It is a world in which there are Big Others that suck up an undifferentiated public’s data, Others whose objective is to mold our behavior and steal our free will. In this version of flatness, what was once described positively is now described negatively, as if we had collectively turned a rosy-colored smooth world flat black. Yet, how collective is this experience? How will it play out if the solutions we provide rely on bracketing out the question of what kinds of people and communities are afforded the chance to build thicker walls? This calls forth a deeper issue than simply that of a lack of inclusion of other voices in Zuboff’s account. After all, perhaps fixing the surveillance issue through the kinds of rights to sanctuary that Zuboff suggests would also fix the issue for those who are not usually conceived of as mainstream consumers.

    Except, historical examples ranging from Simone Browne’s explication of surveillance and slavery in Dark Matters to Achille Mbembe’s articulation of necropolitics teach us that consumer protection is a thin filament on which to hang protection for all from overweening surveillance apparatuses–corporate or otherwise. One could easily imagine a world where the privacy rights of well-heeled Americans are protected, but those of others continue to be violated. To reference one pertinent example, companies that are banking on monetizing data through a contractual relationship where individuals sell the data that they themselves own are simultaneously banking on those who need to sell their data to make money. In other words, as legal scholar Stacy-Ann Elvy notes (2017), in a personal data economy low-income consumers will be incentivized to sell their data without much concern for the conditions of sale, even while those who are well-off will have the means to avoid these incentives, resulting in the illusion of individual control and uneven access to privacy determined by degrees of socioeconomic vulnerability. These individuals will also be exposed to a greater degree of risk that their information will not stay secure.

    Simone Browne demonstrates that what we understand as surveillance was developed on and through black bodies, and that these populations of slaves and ex-slaves have developed strategies of avoiding detection, which she calls dark sousveillance. As Browne notes, “routing the study of contemporary surveillance” through the histories of “black enslavement and captivity opens up the possibility for fugitive acts of escape” even while it shows that the normative surveillance of white bodies was built on long histories of experimentations with black bodies (Browne 2015, 164). Achille Mbembe’s scholarship on necropolitics was developed through the insight that some life becomes killable, or in Jasbir Puar’s (2017) memorable phrasing, maimable, at the same time that other life is propagated. Mbembe proposes “necropolitics” to describe “death worlds” where “death,” not life, “is the space where freedom and negotiation happen” and where “vast populations are subjected to conditions of life conferring on them the status of living dead” (Mbembe 2003, 40). The right to sanctuary appears to short-circuit the spaces where life has already been configured as available for expropriation through perpetual wounding. Crucial to both Browne’s and Mbembe’s arguments is the insight that the study of the uneven harms of surveillance concomitantly surfaces the tactics of opposition and the archives of the world that provide alternative models of refuge outside the contractual property relationship evoked across the pages of Surveillance Capitalism.

    All those considered outside the ambit of individualized rights, including those in territories marked by extrajudicial measures, those deemed illegal, those perennially under threat, those who while at work are unprotected, those in unseen workplaces, and those simply unable to exercise rights to privacy due to law or circumstance, have little place in Zuboff’s analysis. One only has to think of Kashmir, and the access that people with no ties to this place will now have to building houses there, to begin to grasp the contested politics of home-building.[2] Without an acknowledgement of the limits of both the critique of surveillance capitalism and the agents of its proposed solutions, it seems this otherwise promising book will reach the usual audiences and have the usual effect of shoring up some peoples’ and places’ rights even while making the rest of the world and its populations available for experiments in data appropriation.

    _____

    Sareeta Amrute is Associate Professor of Anthropology at the University of Washington. Her scholarship focuses on contemporary capitalism and ways of working, and particularly on the ways race and class are revisited and remade in sites of new economy work, such as coding and software economies. She is the author of the book Encoding Race, Encoding Class: Indian IT Workers in Berlin (Duke University Press, 2016) and recently published the article “Of Techno-Ethics and Techno-Affects” in Feminist Review.

    _____

    Notes

    [1] Friedman (2005) attributes this phrase to Nandan Nilekani, then Co-Chair of the Indian tech company Infosys (and subsequently Chair of the Unique Identification Authority of India).

    [2] Until 2019, Articles 370 and 35A of the Indian Constitution granted the territories of Jammu and Kashmir special status, which allowed the state to keep on its books laws restricting who could buy land and property in Kashmir by allowing the territories to define who counted as a permanent resident. After the abrogation of Article 370, rumors swirled that the rich from Delhi and elsewhere would now be able to purchase holiday homes in the area. See e.g. Devansh Sharma, “All You Need to Know about Buying Property in Jammu and Kashmir“; Parvaiz Bukhari, “Myth No 1 about Article 370: It Prevents Indians from Buying Land in Kashmir.”

    _____

    Works Cited

    • Browne, Simone. 2015. Dark Matters: On the Surveillance of Blackness. Durham, NC: Duke University Press.
    • Elvy, Stacy-Ann. 2017. “Paying for Privacy and the Personal Data Economy.” Columbia Law Review 117:6 (Oct). 1369-1460.
    • Friedman, Thomas. 2005. The World Is Flat: A Brief History of the Twenty-First Century. New York: Farrar, Straus and Giroux.
    • Mbembe, Achille. 2003. “Necropolitics.” Public Culture 15:1 (Winter). 11-40.
    • Mbembe, Achille. 2019. Necropolitics. Durham, NC: Duke University Press.
    • Puar, Jasbir K. 2017. The Right to Maim: Debility, Capacity, Disability. Durham, NC: Duke University Press.


  • Audrey Watters — Education Technology and The Age of Surveillance Capitalism (Review of Shoshana Zuboff, The Age of Surveillance Capitalism)

    a review of Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019)

    by Audrey Watters

    ~

    The future of education is technological. Necessarily so.

    Or that’s what the proponents of ed-tech would want you to believe. In order to prepare students for the future, the practices of teaching and learning – indeed the whole notion of “school” – must embrace tech-centered courseware and curriculum. Education must adopt not only the products but the values of the high tech industry. It must conform to the demands for efficiency, speed, scale.

    To resist technology, therefore, is to undermine students’ opportunities. To resist technology is to deny students their future.

    Or so the story goes.

    Shoshana Zuboff weaves a very different tale in her book The Age of Surveillance Capitalism. Its subtitle, The Fight for a Human Future at the New Frontier of Power, underscores her argument that the acquiescence to new digital technologies is detrimental to our futures. These technologies foreclose rather than foster future possibilities.

    And that sure seems plausible, what with our social media profiles being scrutinized to adjudicate our immigration status, our fitness trackers being monitored to determine our insurance rates, our reading and viewing habits being manipulated by black-box algorithms, our devices listening in and nudging us as the world seems to totter towards totalitarianism.

    We have known for some time now that tech companies extract massive amounts of data from us in order to run (and ostensibly improve) their services. But increasingly, Zuboff contends, these companies are now using our data for much more than that: to shape and modify and predict our behavior – “‘treatments’ or ‘data pellets’ that select good behaviors,” as one ed-tech executive described it to Zuboff. She calls this “behavioral surplus,” a concept that is fundamental to surveillance capitalism, which she argues is a new form of political, economic, and social power that has emerged from the “internet of everything.”

    Zuboff draws in part on the work of B. F. Skinner to make her case – his work on behavioral modification of animals, obviously, but also his larger theories about behavioral and social engineering, best articulated perhaps in his novel Walden Two and in his most controversial book Beyond Freedom and Dignity. By shaping our behaviors – through nudges and rewards, “data pellets,” and the like – technologies circumscribe our ability to make decisions. They impede our “right to the future tense,” Zuboff contends.

    Google and Facebook are paradigmatic here, and Zuboff argues that the former was instrumental in discovering the value of behavioral surplus when it began, circa 2003, using user data to fine-tune ad targeting and to make predictions about which ads users would click on. More clicks, of course, led to more revenue, and behavioral surplus became a new and dominant business model, at first for digital advertisers like Google and Facebook but shortly thereafter for all sorts of companies in all sorts of industries.
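
    To make that mechanism concrete, here is a minimal, hypothetical sketch of click-through-rate prediction: a toy logistic-regression model with invented features and synthetic data, not Google’s actual system. Behavioral data goes in, a predicted probability of a click comes out, and the highest-scoring ad is the one served.

        # Toy illustration of click-through-rate (CTR) prediction from
        # behavioral data. Features and data are invented; real ad systems
        # are vastly larger, but the logic is the same: behavior in,
        # click probability out, highest-scoring ad served.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(seed=0)

        # Hypothetical features for 1,000 (user, ad) pairs:
        # [clicks on similar ads, searches matching the ad topic, minutes on site]
        X = rng.random((1000, 3))
        # Synthetic labels: click probability rises with the first two features.
        y = (rng.random(1000) < 0.1 + 0.4 * X[:, 0] + 0.3 * X[:, 1]).astype(int)

        model = LogisticRegression().fit(X, y)

        # Score five candidate ads for one user; serve the likeliest click.
        candidates = rng.random((5, 3))
        ctr = model.predict_proba(candidates)[:, 1]
        print("predicted CTRs:", ctr.round(3), "-> serve ad", int(ctr.argmax()))

    The sketch’s only point is that marginal behavioral data improves the ranking, so the incentive to collect ever more of it follows directly.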

    And that includes ed-tech, of course – most obviously in predictive analytics software that promises to identify struggling students (such as Civitas Learning) and in behavior management software that’s aimed at fostering “a positive school culture” (like ClassDojo).

    Google and Facebook, whose executives are clearly the villains of Zuboff’s book, have keen interests in the education market too. The former is much more overt, no doubt, with its G Suite product offerings and its ubiquitous corporate evangelism. But the latter shouldn’t be ignored, even if it’s seen as simply a consumer-facing product. Mark Zuckerberg is an active education technology investor; Facebook has “learning communities” called Facebook Education; and the company’s engineers helped to build the personalized learning platform for the charter school chain Summit Public Schools. The kinds of data extraction and behavioral modification that Zuboff identifies as central to surveillance capitalism are part of Google’s and Facebook’s education efforts, even if laws like COPPA prevent these firms from monetizing the products directly through advertising.

    Despite these companies’ influence in education, despite Zuboff’s reliance on B. F. Skinner’s behaviorist theories, and despite her insistence that surveillance capitalists are poised to dominate the future of work – not as a division of labor but as a division of learning – Zuboff has nothing much to say about how education technologies specifically might operate as a key lever in this new form of social and political power that she has identified. (The quotation above from the “data pellet” fellow notwithstanding.)

    Of course, I never expect people to write about ed-tech, despite the importance of the field historically to the development of computing and Internet technologies or the theories underpinning them. (B. F. Skinner is certainly a case in point.) Intertwined with the notion that “the future of education is necessarily technological” is the idea that the past and present of education are utterly pre-industrial, and that digital technologies must be used to reshape education (and education technologies) – this rather than recognizing the long, long history of education technologies and the ways in which these have shaped what today’s digital technologies generally have become.

    As Zuboff relates the history of surveillance capitalism, she contends that it constitutes a break from previous forms of capitalism (forms that Zuboff seems to suggest were actually quite benign). I don’t buy it. She claims she can pinpoint this break to a specific moment and a particular set of actors, positing that the origin of this new system was Google’s development of AdSense. She does describe a number of other factors at play in the early 2000s that led to the rise of surveillance capitalism: notably, a post–9/11 climate in which the US government was willing to overlook growing privacy concerns about digital technologies and to use them instead to surveil the population in order to predict and prevent terrorism. And there are other threads she traces as well: neoliberalism and the pressures to privatize public institutions and deregulate private ones; individualization and the demands (socially and economically) of consumerism; and behaviorism and Skinner’s theories of operant conditioning and social engineering. While Zuboff does talk at length about how we got here, the “here” of surveillance capitalism, she argues, is a radically new place with new markets and new socioeconomic arrangements:

    the competitive dynamics of these new markets drive surveillance capitalists to acquire ever-more-predictive sources of behavioral surplus: our voices, personalities, and emotions. Eventually, surveillance capitalists discovered that the most-predictive behavioral data come from intervening in the state of play in order to nudge, coax, tune, and herd behavior toward profitable outcomes. Competitive pressures produced this shift, in which automated machine processes not only know our behavior but also shape our behavior at scale. With this reorientation from knowledge to power, it is no longer enough to automate information flows about us; the goal now is to automate us. In this phase of surveillance capitalism’s evolution, the means of production are subordinated to an increasingly complex and comprehensive ‘means of behavioral modification.’ In this way, surveillance capitalism births a new species of power that I call instrumentarianism. Instrumentarian power knows and shapes human behavior toward others’ ends. Instead of armaments and armies, it works its will through the automated medium of an increasingly ubiquitous computational architecture of ‘smart’ networked devices, things, and spaces.

    As this passage indicates, Zuboff believes (but never states outright) that a Marxist analysis of capitalism is no longer sufficient. And this is incredibly important as it means, for example, that her framework does not address how labor has changed under surveillance capitalism. Because even with the centrality of data extraction and analysis to this new system, there is still work. There are still workers. There is still class and plenty of room for an analysis of class, digital work, and high tech consumerism. Labor – digital or otherwise – remains in conflict with capital. The Age of Surveillance Capitalism, as Evgeny Morozov’s lengthy review in The Baffler puts it, might succeed as “a warning against ‘surveillance dataism,’” but largely fails as a theory of capitalism.

    Yet the book, while ignoring education technology, might be at its most useful in helping further a criticism of education technology in just those terms: as surveillance technologies, relying on data extraction and behavior modification. (That’s not to say that education technology criticism shouldn’t develop a much more rigorous analysis of labor. Good grief.)

    As Zuboff points out, B. F. Skinner “imagined a pervasive ‘technology of behavior’” that would transform all of society but that, he hoped, would at the very least transform education. Today’s corporations might be better equipped to deliver technologies of behavior at scale, but this was already a big business in the 1950s and 1960s. Skinner’s ideas did not only exist in the fantasy of Walden Two. Nor did they operate solely in the psych lab. Behavioral engineering was central to the development of teaching machines; and despite the story that somehow, after Chomsky denounced Skinner in the pages of The New York Review of Books, no one “did behaviorism” any longer, it remained integral to much of educational computing on into the 1970s and 1980s.

    And on and on and on – a more solid through line than the all-of-a-suddenness that Zuboff narrates for the birth of surveillance capitalism. Personalized learning – the kind hyped these days by Mark Zuckerberg and many others in Silicon Valley – is just the latest version of Skinner’s behavioral technology. Personalized learning relies on data extraction and analysis; it urges and rewards students and promises everyone will reach “mastery.” Perhaps it gives the illusion of freedom and autonomy – at least in its name – but personalized learning is fundamentally about conditioning and control.
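
    Stripped to its logic, the conditioning loop at the heart of such software is small enough to sketch. The following is a deliberately crude, hypothetical illustration (the function name and threshold are invented for illustration, not any vendor’s actual code) of the contingent-reinforcement step that “mastery-based” systems implement:

        # Hypothetical sketch of a mastery-based progression rule:
        # assess recent performance, then reinforce or remediate.
        # Names and threshold are invented for illustration.
        def next_step(recent_answers_correct: list[bool],
                      mastery_threshold: float = 0.8) -> str:
            score = sum(recent_answers_correct) / len(recent_answers_correct)
            if score >= mastery_threshold:
                return "advance"    # reward: new unit, badge, points, streak
            return "remediate"      # repeat drills until behavior hits target

        print(next_step([True, True, True, False, True]))    # -> "advance"
        print(next_step([True, False, False, False, True]))  # -> "remediate"

    Whatever the branding, the loop is Skinner’s: measure behavior, apply a reinforcement schedule, repeat.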

    “I suggest that we now face the moment in history,” Zuboff writes, “when the elemental right to the future tense is endangered by a panvasive digital architecture of behavior modification owned and operated by surveillance capital, necessitated by its economic imperatives, and driven by its laws of motion, all for the sake of its guaranteed outcomes.” I’m not so sure that surveillance capitalists are assured of guaranteed outcomes. The manipulation of platforms like Google and Facebook by white supremacists demonstrates that it’s not just the tech companies who are wielding this architecture to their own ends.

    Nevertheless, those who work in and work with education technology need to confront and resist this architecture – the “surveillance dataism,” to borrow Morozov’s phrase – even if (especially if) the outcomes promised are purportedly “for the good of the student.”

    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines, forthcoming from The MIT Press. She maintains the widely read Hack Education blog, on which an earlier version of this piece first appeared, and writes frequently for The b2o Review Digital Studies section on digital technology and education.
