b2o: boundary 2 online

Category: Digital Studies

Reviews and analysis of scholarly books about digital technology and culture, as well as of articles, legal proceedings, videos, social media, digital humanities projects, and other emerging digital forms, offered from a humanist perspective, in which our primary intellectual commitment is to the deeply embedded texts, figures, themes, and politics that constitute human culture, regardless of the medium in which they occur.

  • David Tomkins – Assuming Control: Spielberg Rewires Ready Player One

    by David Tomkins

    I.

    Ernest Cline’s bestselling novel Ready Player One (2011) envisions a future ravaged by war, climate change, famine, and disease in which most lived experience takes place in an immense multi-player virtual reality game called the OASIS. Created by James Halliday, an emotionally stunted recluse obsessed with 1980s pop culture, the OASIS promises relief from real world suffering by way of a computer-generated alternative reality overflowing with ‘80s pop culture references. Cline’s novel follows teenager Wade Watts on an adventure to locate the digital “Easter egg” that Halliday buried deep within the OASIS shortly before his death. Those seeking the egg must use three hidden keys (made of copper, jade, and crystal, respectively) to open secret gates wherein players face challenges ranging from expertly playing the arcade game Joust to flawlessly reenacting scenes from Monty Python and the Holy Grail. The first person to find the egg will inherit Halliday’s fortune, gain controlling stock in his company Gregarious Simulation Systems, and assume control of Halliday’s virtual homage to the ‘80s, the OASIS.

    Rich in the ‘80s nostalgia saturating popular entertainment in recent years, and with a particular reverence for Steven Spielberg’s ‘80s corpus, Cline’s novel attracted legions of readers upon publication and became an instant best seller.[1] The signing of Spielberg to direct and produce the film version of Ready Player One underscored the treatment of Spielberg’s films in the novel as quasi-sacred texts, and generated a kind of closed feedback loop between textual and visual object.[2] Shortly before the film went into production, Cline told Syfy.com that it was “still hard for [him] … to wrap [his] … head around Steven Spielberg directing this movie,” in part, because the director showed himself to be such a huge fan of Cline’s novel, arriving at pre-production meetings with a paperback copy of Ready Player One containing dozens of notes regarding moments he wanted to include in the film.[3]

    But none of these moments, it turns out, included references to Spielberg’s earlier films. In fact, Spielberg made it a point to remove such references from the story. In 2016, Spielberg told Collider.com that he decided to make Ready Player One because it “brought [him] back to the ‘80s” and let him “do anything [he wanted] … except for with [his] own movies.”[4] “Except for the DeLorean and a couple of other things that I had something to do with,”[5] Spielberg added, “I cut a lot of my own references out [of the film].”[6] One can read Spielberg’s decision simply as an attempt to avoid self-flattery—a view Spielberg appears keen to popularize in interviews.[7] But equally compelling is the idea that Spielberg felt at odds with the version of himself celebrated in Cline’s novel, that of the marketable and broadly appealing director of blockbusters like Jaws, E.T., and Raiders of the Lost Ark—in other words, the Spielberg of the ‘80s. Over the last twenty-five years, Spielberg has largely moved away from pulp genres toward a nominally more “serious,” socially conscious direction as a filmmaker (recent family-friendly films such as The BFG and The Adventures of Tintin notwithstanding). Ready Player One, however, a science fiction movie about teenage underdogs coming of age, sits comfortably among the films of Spielberg’s early canon—the deeply sentimental, widely appealing family-oriented films generally understood to have shaped the landscape of contemporary Hollywood.

    The tension between early and late Spielberg in Ready Player One is among the driving forces shaping the director’s adaptation of Cline’s novel. By removing most references to himself from the film, Spielberg not only rewrites an important aspect of the source material, he rewrites American cinematic history of the last 40 years. Jaws, Close Encounters, E.T., the Indiana Jones films—these works are in certain ways synonymous with ‘80s pop culture. And yet, in making a movie about ‘80s nostalgia, Spielberg begins by pointing this nostalgia away from its most famous and influential director. This self-effacing act, which effectively erases the Spielberg of the ‘80s from the film, and by extension from the era it commemorates, belies the humility animating Spielberg’s public comments on self-reference. Spielberg saturates Ready Player One—as Halliday does the OASIS—with a meticulously crafted self-image, and what’s more, affords himself total control over the medium wherein (and from which) that image is projected. Spielberg paradoxically rewrites popular memory as a reflection of his own preoccupations, making Ready Player One a film in which the future the audience is asked to escape into is defined by Spielberg’s rewriting of the cinematic past.

    Central to Spielberg’s project of recasting ‘80s nostalgia in Ready Player One is an attempt to recuperate figures of corporate or governmental power—entities unlikely to have fared well in his ‘80s work. From the corrupt Mayor Vaughn in Jaws, to the pitiless government scientists in Close Encounters and E.T., to the bureaucrats who snatch Indy’s prize at the end of Raiders of the Lost Ark, figures in elite institutional positions typically pose a threat in early Spielberg. What’s more, in E.T., as well as Spielberg-produced films like The Goonies, these figures commit acts that compel young characters to take heroic, rebellious action. But in portraying Halliday as a meek, loveable nerd in Ready Player One, Spielberg introduces something new to the classic Spielbergian playbook that has implications not only for how we understand Spielberg’s ‘80s films, but also for how Ready Player One situates itself vis-à-vis contemporary pop culture. In Cline’s novel, Halliday comes across as a trickster figure in the mold of Willy Wonka—so much so that one of the first rumors to emerge about Spielberg’s adaptation of Ready Player One was that the director had coaxed Gene Wilder out of retirement to perform the role. Not only did that rumor turn out to be baseless, but the characterization of Halliday in Spielberg’s film neutralizes the faintly sinister underpinnings of Cline’s portrayal, replacing them with a goofy innocence, and an insistence—informed as much by contemporary celebrity worship as by Spielberg’s own status as an elder statesman of Hollywood—that the audience sympathize with, rather than despise, the all-powerful multi-billionaire.

    Halliday’s vast corporate empire, his incalculable wealth, the extraordinary political and cultural power he undoubtedly wields as the creator of an entertainment technology juggernaut—none of these things factor into Spielberg’s portrayal. Rather, Spielberg’s film compels us to pity Halliday, to see him as someone who has suffered, someone whose genius has denied him the kind of emotional life that we, the audience, take for granted (or that Spielberg wants us to take for granted, as the rich emotional interiority he imagines is itself a construct). Given that both Halliday (in Cline’s novel and Spielberg’s film) and Spielberg (in the real world) share global renown as authors of popular entertainment, it’s unsurprising that Spielberg would sympathize with the character. After all, the name Spielberg, whether cited in a production or directorial capacity, or as a generic descriptor (“Spielbergian”), suggests “a mélange of mass appeal, sentiment and inchoate childlike wonder”—a description one could easily imagine applied to the OASIS.[8] But what is surprising is that Spielberg redeploys the sentimentality of his early work in Ready Player One to affirm the vertical social organization and imperialist ideology those films, at least on the surface, appear to attack.

    The truth-to-power ethos of Spielberg’s ‘80s corpus is enlisted in Ready Player One to sentimentalize the corporate overlord’s yearning to protect his product and control his legacy. Similar to how the rebel struggle against the evil empire in George Lucas’s Star Wars films ultimately reinforces another corporate empire (Lucasfilm), Spielberg’s early rebellions—which were never all that “radical” to begin with, given Spielberg’s fondness for traditional hetero-normative social structures—fold in on themselves in Ready Player One, readjusted to serve the film’s confirmation of neoliberal ideology and corporate sovereignty. What looks superficially in Ready Player One like a toning down of Spielberg and a celebration of Cline is in fact Spielberg through and through, but with the ironic upshot being the recuperation of institutional and corporate power, the affirmation of existing class structures, and a recasting of the heroic rebellions one finds in Spielberg’s early work as far more conservative.

    Unlike Spielberg’s film, Cline’s novel focuses a great deal on Halliday’s astonishing wealth, and it’s clear that for “gunters”—characters like Wade in search of Halliday’s egg—the acquisition of Halliday’s wealth is easily as important as gaining control of the OASIS. Wade, like most characters in Cline’s novel, is dirt poor: like millions of others, he lives in a broken-down mobile home stacked, along with dozens more, hundreds of feet high. The world Cline describes is one of abject poverty: while the vast majority of people have next to nothing, Halliday, and a handful of corporate overlords like him, possess all the wealth, and wield all the power. This is not to overstate Cline’s interest in class in Ready Player One; indeed, he spends precious little time exploring the penurious world outside Halliday’s OASIS. Like his characters, it’s clear that Cline can’t wait to get back to the OASIS. But in Spielberg’s film, the at-best perfunctory acknowledgement of class dynamics seen in Cline’s novel is utterly ignored. Instead, Spielberg asks us to empathize with Halliday, maybe even to identify with him as much as—if not more than—we do with Wade.

    Rather than encouraging us to revile the corporate overlord responsible for impoverishing the world and controlling the lives of the story’s youthful heroes, Ready Player One stands out among Spielberg’s oeuvre (and recent Hollywood films generally) for the way it recasts the “innocent” teenager, a figure Spielberg marketed so effectively in the ‘80s as an implicit bulwark against oppressive powers, as sympathetic to the dominant, unassailable corporate forces of the future.[9] Whereas in Cline’s novel Wade suggests using his newfound wealth to “feed everyone on the planet,” and to “make the world a better place,” Spielberg glosses over Wade’s windfall entirely, focusing instead on what Wade’s acquisition of the OASIS allows him to take away from—rather than give to—the powerless masses. In effect, the wayward teenagers of Spielberg’s corpus mature into a kind of “ghost in the machine” of capital.

    The control Spielberg wishes to exert—over audiences, the film, the ‘80s—is perhaps most evident in the final moments of Ready Player One. As the film draws to a close, main character Wade speaks of disengaging from the OASIS to delight in the sensory and emotional experiences accessible only in the real world. In the novel, Cline similarly concludes with Wade revealing that “for the first time in as long as [he] could remember, [he] … had absolutely no desire to log back in to the OASIS.”[10] But in Spielberg’s hands, Wade’s newfound ambivalence about the OASIS has broader implications, as Wade, who ultimately wins control of the OASIS, sets limits on its availability, effectively forcing the tech-addled masses of 2045 to accept, as Wade now does, that “people need to spend more time in the real world.”[11]

    However, the restrictions that Wade—and by extension Spielberg—puts in place fail to do this; rather, they reveal the film’s great irony: that Spielberg asks audiences to discover an empathetic, authentic reality in the context of a simulated world that he, Spielberg, creates (and, it is implied, that he alone could create). By adding to Wade’s character a strong inclination toward hetero-normative romantic connection in the real world, and by directing Wade to downgrade public access to the OASIS so that its millions of users may find “real” love, Spielberg invites his audience to seek out and prioritize “authentic” humanity in contrast to that offered in the OASIS. But Spielberg does so by positing as authentic a simulation of human connection, which he then presents as the corrective not only to the film’s characters’ obsession with technology, but also to that of contemporary western society. In doing so, Spielberg attempts to situate himself apart from peddlers of artificiality like Halliday (with whom he nevertheless clearly identifies). But instead he succeeds, despite his lifelong preoccupation with celebrating and stirring human connections and emotions, in becoming the master generator of simulacra. Ultimately the film’s viewers find themselves absorbed into the position of the creator of the OASIS, so that the absence of specific references to Spielberg’s early films conceals a remaking of the entire world of the film in Spielberg’s image.

    II.

    In the film’s final scene, Spielberg assembles numerous sentimental cues to soften Wade’s mandate that users henceforth limit their time in the OASIS, thus making his demands appear more altruistic than draconian. As the camera pans across what appears to be Wade’s spacious new apartment (a significant step up from the cramped trailer he lived in previously), we see Wade and his recently acquired girlfriend Samantha sharing a kiss as he explains in a voice-over his plans for the OASIS, and as the ‘80s pop of Hall and Oates’s “You Make My Dreams Come True” gradually dominates the soundtrack. While neither the voice-over nor the establishment of the romantic couple is a particularly common trope among Spielberg’s endings overall, the collision of familial sentimentality and budding romance we see in Ready Player One nevertheless recalls several of Spielberg’s endings from the late ‘70s and early ‘80s in films like Close Encounters, E.T., and Indiana Jones and the Temple of Doom.[12]

    In anticipation of this highly sentimental ending, the film drastically accelerates the pace of the pair’s relationship: in the novel, they don’t meet in the real world until the last few pages, and their relationship—at least as far as Samantha is concerned—seems at best a work in progress. But Spielberg brings Wade and Samantha together in the real world halfway through the film, and makes their romantic connection a central concern. In doing so, and in explicitly depicting them in the final shot as a romantic couple, Spielberg creates contextual support for the argument he clearly wishes to make: that real world romance, rather than virtual game play, makes “dreams come true.” But even if this is so for Wade and Samantha, there’s little evidence to suggest that OASIS addicts around the world have had a similar experience. The suggestion, of course, is that they will—once they’re forced to.

    Not only do Wade’s new rules for the OASIS disregard the social upheaval that the narrative all but ensures would take place, they also aggressively elide the anti-social foundation upon which the OASIS was conceived. In an earlier scene, Halliday reveals that he created “the OASIS, because [he] … never felt at home in the real world,” adding that he “just didn’t know how to connect with the people there.” Whether simulations of Atari 2600 games like Adventure or of movie characters like Freddy Krueger, the contents of the OASIS are not only replicas, they’re replicas of replicas—virtual manifestations of Halliday’s adolescent obsessions placed in a world of his own making, and for his own pleasure. One “wins” in the OASIS by collecting virtual inventories of Halliday’s replicas, and gains social significance—in and outside the OASIS—according to what (and how much) one has collected.

    Despite cautioning Wade to avoid “getting lost” in the OASIS and revealing that, for him, the real world is “still the only place where you can get a decent meal,”[13] Halliday stops short of amending the central function of the OASIS as a replacement of, rather than supplement to, human interaction in the real world. Meanwhile, Wade’s parting words in the film limiting access to the OASIS point spectators toward an artificial reality of Spielberg’s making that is deeply invested in hiding its own artifice, and that punctuates a series of rewritings that remove Spielberg references from the film while simultaneously saturating it with his presence. At the same time, Spielberg ensures that the spectator’s sense of the ‘80s conforms to his own preoccupations, which themselves took hold in the context of the increasingly aggressive corporatization of the film industry that took place during this period. Consequently, the nostalgic universe generated in the film offers no exit from Spielberg, despite the absence of his name from the proceedings.

    The film rehearses this paradox once again in its treatment of Halliday’s end, which differs significantly from that of the novel. Arguably Spielberg attempts to secure his controlling presence—both within the film and over cinematic history—by leaving ambiguous the fate of the OASIS’s creator. Although in both the book and the movie Halliday’s avatar Anorak appears to congratulate Wade (known as Parzival in the OASIS) on acquiring the egg, Cline describes an elaborate transfer of powers the film all but ignores. Upon taking Anorak’s hand, Wade looks down at his own avatar to discover that he now wears Anorak’s “obsidian black robes” and, according to his virtual display, possesses “a list of spells, inherent powers, and magic items that seemed to scroll on forever.”[14] Halliday, now appearing as he often did in the real world with “worn jeans and a faded Space Invaders T-shirt,” comments, “Your avatar is immortal and all-powerful.”[15] Moments later Cline writes:

    Then Halliday began to disappear. He smiled and waved good-bye as his avatar slowly faded out of existence … Then he was completely gone.[16]

    Under Spielberg’s direction, this scene—and in particular Halliday’s exit—plays out very differently. While we are made aware of Wade’s victory, his avatar’s appearance remains unchanged and there’s no mention of Wade gaining all-powerful immortality. And whereas Cline explicitly refers to the image of Halliday that Wade encounters as an avatar—and therefore a program presumably set to appear for the benefit of the game’s victor—the film goes out of its way to establish that this image of Halliday is nothing of the sort. When Wade asks if Halliday is truly dead, the image responds in the affirmative; when asked if he’s an avatar, the image replies no, and doesn’t respond at all to Wade’s final question, “What are you?” Instead Halliday’s image, accompanied by another image of his childhood self, walks silently through a doorway to an adjacent room and closes the door.

    Rather than supplanting himself with a younger overlord and “fading out of existence” as he does in the novel, Spielberg’s Halliday remains part of the world he created, hesitant to relinquish full control. Closing the door behind him may signify an exit, but it doesn’t preclude the possibility of a return, especially given that neither he nor Wade locks the door. In place of closure, Halliday’s departure, along with his acknowledgement that he’s neither real nor simulation, suggests a more permanent arrangement whereby Halliday remains the animating essence within the OASIS. Halliday cannot “fade out of existence” in the OASIS because he effectively is the OASIS—its memory, its imagination, the means through which its simulations come to life. Whereas in the novel, Anorak’s transferal of power to Wade/Parzival suggests an acquisition of unadulterated control, the film proposes an alternative scenario in which Halliday’s creative powers are not fully transferable. In order for the OASIS to function, the film implies, Halliday must somehow remain within it as a kind of guiding force—a consciousness that animates the technological world Halliday created.

    By replacing the simulation of Halliday that Wade encounters at the end of the novel with a mysterious deity figure taking up permanent residence inside the OASIS, Spielberg betrays a level of affection for the multi-billionaire world builder reminiscent of his treatment of the John Hammond character in Jurassic Park (1993). In that film, Spielberg spares the life of the deadly park’s obscenely wealthy creator and CEO—portrayed with jolly insouciance by Richard Attenborough—a character who is ripped to pieces by dinosaurs in Michael Crichton’s novel of the same name. In Ready Player One Spielberg ups the ante, allowing the world builder and corporate overlord to ascend to godly status, thereby ensuring that as long as the OASIS exists, so will its creator Halliday.

    III.

    In contrast to the clear-cut usurpation of the eccentric billionaire by the indigent but tenderhearted teenager seen in Cline’s novel, the movie version of Ready Player One asks audiences to accept a more opaque distribution of controlling interests. While on the one hand the film presents the OASIS as a site of emotional suppression wherein users—following Halliday’s example—favor artificial stimulation over real world emotional connection, it also insists viewers recognize that Halliday created the OASIS in response to real world emotional trauma. The film uses this trauma to neutralize the class distinctions between Wade and Halliday that the novel highlights, and asks spectators to view both characters through a lens of universalized emotional vulnerability. The film then uses this conception of emotional trauma to encourage spectators to sympathize and identify with the corporate billionaire, welcome his transcendence into technological deification, and accept Wade not as a usurper but as an administrator of Halliday’s corporate vision.

    But by magnifying the role social anxiety and fear of human intimacy played in creating the OASIS, the film also sets up the OASIS itself as, ultimately, a site of redemption rather than emotional suppression. Nowhere is this reworking of the OASIS more striking than during Wade’s attempt to complete Halliday’s second challenge. In a total overhaul of the novel, Wade (as Parzival) seeks clues unlocking the whereabouts of the Jade Key by visiting Halliday’s Journals, a virtual reference library located inside the OASIS. In the novel, gunters carefully study a digital text known as Anorak’s Almanac, an encyclopedia of ‘70s and ‘80s pop culture memorabilia compiled by Halliday and named after his avatar. The film replaces the almanac with a “physical” archive holding various pop culture artifacts of importance to Halliday, as well as memories of actual events in Halliday’s life. Crucially, like everything else in the OASIS, the contents of Halliday’s Journals are simulations created by Halliday based on his own memories.

    These memories appear as images carefully re-imagined for cinematic display: gunters watch Halliday’s “memory movies” via a large rectangular screen through which (or on which) the images themselves appear (or are projected) as a kind of three-dimensional hologram. To look at the screen is to look into the environment in which the events occurred, as if looking through a wall. In the memory containing Halliday’s one and only reference to Karen Underwood—his one-time date, and the future wife of his former business partner Ogden Morrow—Halliday approaches what is essentially the “fourth wall” and, while not necessarily “breaking” it, peers knowingly into the void, signaling to gunters—and thus to spectators—that recognizing the significance of this “leap not taken” regarding his unrealized affection for Karen is crucial to completing the second challenge. Spielberg latches on to Halliday’s failure with Karen, making this missed romantic opportunity the site of significant lifelong emotional trauma, and the de facto cause of Halliday’s retreat into creating and living in the OASIS.

    Halliday’s archive also contains all of his favorite ‘80s movies, which appear as immersive environments that gunters may explore. Upon learning that Halliday, on his one and only date with Karen, took her to see Stanley Kubrick’s 1980 adaptation of Stephen King’s novel The Shining (1977), Wade (again, as Parzival) and his comrades (and the film’s audience) enter the lobby of the Overlook Hotel exactly as it appears in Kubrick’s film. The ensuing sequence is particularly revelatory in that we witness the camera gleefully roaming the interiors of Kubrick’s Overlook Hotel. Spielberg clearly delights in this scene, in the same way that Halliday, in Cline’s novel, relishes simulating the cinematic worlds of WarGames and Monty Python and the Holy Grail. But in those cases, OASIS players must adopt one of the film’s characters as an avatar in order to show reverence by reciting dialogue and participating in scenes. In Cline’s novel, Halliday is interested in using reenactment to measure the depth of players’ devotion to his favorite films.

    In Spielberg’s adaptation, however, Parzival enters Halliday’s simulation of The Shining not as part of the story, but as a spectator. In one sense, Spielberg’s Halliday opens cinema up to players, enabling them to remain “themselves” while interacting with cinematic environments to discover clues leading to the Jade Key and therefore victory in Halliday’s second challenge. The theory of spectatorship that the film seems to advance during this sequence insists that the real pleasure of cinema lies not in the passive watching of it, but in its imaginative regeneration and exploration. The spectator’s imagination has the ability to call up a cinematic memory, and to stage one’s own stories or scenes in the environments recalled there. To connect with a film is to hold it in one’s memory in such a way that it can be explored repeatedly, and in different ways.

    But while this conception of spectatorship appears to give viewers the ability to make cinema broadly their own, in fact, with Spielberg’s inhabiting of The Shining, we witness a specific transmutation of Kubrick’s text into an entry in Spielberg’s own corpus. In The Shining, Kubrick crushes those aspects of Stephen King’s narrative that would have importance for Spielberg, namely King’s interest in family trauma and intergenerational conflict. For Kubrick, the family is a scene of a pure violence that infects and corrodes the human capacity for empathy and rationality, thereby forcing violent action recursively back on itself. Kubrick’s film is clearly anti-Spielbergian in this sense, and yet in his replay of The Shining in Ready Player One, Spielberg does his own violence to Kubrick’s vision, taking control of the simulacrum and re-producing The Shining as a site of redemption, something it certainly is not in Kubrick’s film.

    After a series of gags that play some of Kubrick’s most haunting images—the twin sisters, the torrent of blood, the decaying woman in room 237—for laughs, Wade finds himself in the ballroom of the hotel. Once there, the simulation of Kubrick’s film gives way to a new setting unique to Halliday’s imagination, wherein dozens of decomposing zombies dance in pairs, with a simulation of Halliday’s never-to-be love, Karen Underwood, being passed from zombie to zombie. To win the challenge, a player must figuratively make the “leap” that confounded Halliday, using small, suspended platforms, as well as zombie shoulders and heads, to make his way to Karen, whom he must then ask to dance. This challenge reveals to players, and to the audience, Halliday’s emotional vulnerability, highlighting his regret, and foreshadowing the lesson Spielberg imposes on viewers at the film’s end: namely, that audiences should see Halliday’s story as a cautionary tale warning against using technology to repress the need to connect with other human beings.

    Spielberg begins his adaptation of Cline’s novel with another radical revision, substituting an action set piece—a car race—for an elaborate two-tier challenge wherein Wade must best a Dungeons and Dragons character at the classic arcade game Joust and later recite dialogue from the ‘80s movie WarGames starring Matthew Broderick. After several failed attempts, Wade discovers that in order to win the race he must travel backwards, a move clearly highlighting the film’s nostalgic turn to the ‘80s. Although this sequence features the film’s most overt reference to Spielberg’s ‘80s corpus in the form of Wade’s car, a replica of Marty McFly’s DeLorean from the Spielberg-produced film Back to the Future, more significant is the extremity of the challenge’s revision, and the fact that nothing within the film or Cline’s novel suggests that a big action spectacle with lots of fast cars might be at all in keeping with Halliday’s ‘80s pop culture preoccupations.

    More likely, given the affinity Spielberg shows throughout the film for redressing Halliday’s world in his own image, is that this sequence pays homage to Spielberg’s friend (and fellow Hollywood elder) George Lucas, whose own early corpus was defined, in part, not only by his film American Graffiti, but by his trademark directorial note, “faster and more intense”—a note this sequence in Ready Player One takes to heart. With this scene and the others mentioned previously, Ready Player One recasts “classic Spielberg” by shifting emphasis away from teenage innocents and toward corporate overlords with whom the story’s young heroes are complicit in the project of subjugation. What emerges is the supremacy and permanence of the corporate overlord whom Spielberg both identifies with and wishes to remake in his own image in such a way that the overlord’s world becomes a site for the Spielbergian values of homecoming and redemption rather than emotional repression aided by escape into simulacra. The irony is that the world of homecoming and redemption he offers is itself nothing other than cinema’s simulation.

    Bibliography

    Breznican, Anthony. “Steven Spielberg Vowed to Leave His Own Movies Out of ‘Ready Player One’—His Crew Vowed Otherwise.” Ew.com, March 22, 2018, http://ew.com/movies/2018/03/22/ready-player-one-steven-spielberg-references/.

    Cabin, Chris. “’Ready Player One’: Steven Spielberg Says He’s Avoiding References to His Own Movies.” Collider.com, June 22, 2016, http://collider.com/ready-player-one-steven-spielberg-easter-eggs/.

    Cline, Ernest. Ready Player One. New York: Broadway Books, 2011.

    Hunter, I.Q. “Spielberg and Adaptation.” A Companion to Steven Spielberg. Ed. Nigel Morris. West Sussex: Wiley Blackwell, 2017. 212-226.

    Kramer, Peter. “Spielberg and Kubrick.” A Companion to Steven Spielberg. Ed. Nigel Morris. West Sussex: Wiley Blackwell, 2017. 195-211.

    Nealon, Jeffrey T. Post-Postmodernism or, The Cultural Logic of Just-in-Time Capitalism. Stanford, CA: Stanford UP, 2012.

    Russell, James. “Producing the Spielberg ‘Brand.’” A Companion to Steven Spielberg. Ed. Nigel Morris. West Sussex: Wiley Blackwell, 2017. 45-57.

    Spielberg, Steven, dir. Ready Player One. 2018.

    Vinyard, Papa. “Be Ready for ‘Ready Player One’ in December 2017.” Ain’t It Cool News, August 6, 2015, http://www.aintitcool.com/node/72613.

    Walker, Michael. “Steven Spielberg and the Rhetoric of an Ending.” A Companion to Steven Spielberg. Ed. Nigel Morris. West Sussex: Wiley Blackwell, 2017. 137-158.

    Watkins, Denny. “Ernest Cline Geeks Out About Spielberg Adapting ‘Ready Player One.’” Syfy.com, May 2, 2016, http://www.syfy.com/syfywire/ernest-cline-geeks-out-about-spielberg-adapting-ready-player-one.

    Notes

    [1] From remakes (The Karate Kid, Clash of the Titans) and sequels (Tron: Legacy, Wall Street: Money Never Sleeps) to original TV shows drawing on ‘80s cultural influences (Stranger Things, The Americans), ‘80s nostalgia has been exceedingly popular for the better part of a decade. Addressing the current ubiquity of the ‘80s, Jeffrey T. Nealon argues that “it’s not so much that the ‘80s are back culturally, but that they never went anywhere economically,” adding, “the economic truisms of the ‘80s remain a kind of sound track for today, the relentless beat playing behind the eye candy of our new corporate world” (Post-Postmodernism).

    [2] When it was announced that Spielberg would adapt Ready Player One, entertainment journalists rejoiced, describing the move as a “return to ‘blockbuster filmmaking’” for Spielberg that would give Cline’s story both “street cred and … mainstream appeal” (Vinyard, “Be Ready for ‘Ready Player One’”).

    [3] Watkins, “Ernest Cline Geeks Out.”

    [4] Cabin, “Steven Spielberg Says.”

    [5] In both the novel and film, Wade’s avatar, known as Parzival in the OASIS, drives a simulation of the DeLorean featured in the Back to the Future films, which Spielberg produced.

    [6] Cabin, “Steven Spielberg Says.”

    [7] Spielberg remarked in 2016 that “[he] was very happy to see there was enough without [him] that made the ‘80s a great time to grow up” (Cabin, “Steven Spielberg Says”), and in a 2018 interview with Ew.com Spielberg insisted, “I didn’t corner the ‘80s market … there’s plenty to go around” (Breznican, “Steven Spielberg Vowed”).

    [8] Russell, “Producing the Spielberg ‘Brand.’”

    [9] While it’s true, in both the novel and the film, that prohibiting corporatist Nolan Sorrento from acquiring the OASIS is a priority for Wade, what motivates him is not antipathy to capitalist enterprise, but rather the desire to preserve the “pure” capitalist vision of the OASIS’s corporate creator, Halliday. Averse to Sorrento’s plans to further monetize the OASIS by opening up the platform to infinite numbers of advertisers, Wade simply prefers Halliday’s more controlled brand of corporatism, which appears rooted in what Nealon would describe as “the dictates of ‘80s management theory (individualism, excellence, downsizing)” (Nealon, Post-Postmodernism, 5). The film likewise shares an affinity for heavily centralized, individualized, and downsized corporate control.

    [10] Cline, Ready Player One, 372.

    [11] Spielberg, Ready Player One.

    [12] Walker, “Rhetoric of an Ending,” 144-145, 149-150.

    [13] Spielberg, Ready Player One.

    [14] Cline, Ready Player One, 363.

    [15] Cline, Ready Player One, 363.

    [16] Cline, Ready Player One, 364.

  • Zachary Loeb – All Watched Over By Machines (Review of Levine, Surveillance Valley)

    a review of Yasha Levine, Surveillance Valley: The Secret Military History of the Internet (PublicAffairs, 2018)

    by Zachary Loeb

    ~

    There is something rather precious about Google employees, and Internet users, who earnestly believe the “don’t be evil” line. Though those three words have often been taken to represent a sort of ethos, their primary function is as a steam vent – providing a useful way to allow building pressure to escape before it can become explosive. While “don’t be evil” is associated with Google, most of the giants of Silicon Valley have their own variations of this comforting ideological façade: Apple’s “think different,” Facebook’s talk of “connecting the world,” the smiles on the side of Amazon boxes. And when a revelation troubles this carefully constructed exterior – when it turns out Google is involved in building military drones, when it turns out that Amazon is making facial recognition software for the police – people react in shock and outrage. How could this company do this?!?

    What these revelations challenge is not simply the mythos surrounding particular tech companies, but the mythos surrounding the tech industry itself. After all, many people have their hopes invested in the belief that these companies are building a better, brighter future, and they are naturally taken aback when they are forced to reckon with stories that reveal how these companies are building the types of high-tech dystopias that science fiction has been warning us about for decades. And in this space there are some who seem eager to allow a new myth to take root: one in which the unsettling connections between big tech firms and the military-industrial complex are something new. But as Yasha Levine’s important new book, Surveillance Valley, deftly demonstrates, the history of the big tech firms, complete with its panoptic overtones, is thoroughly interwoven with the history of the repressive state apparatus. While many people may be at least nominally aware of the links between early computing, or the proto-Internet, and the military, Levine’s book reveals the depth of these connections and how they persist. As he provocatively puts it, “the Internet was developed as a weapon and remains a weapon today” (9).

    Thus, cases of Google building military drones, Facebook watching us all, and Amazon making facial recognition software for the police need to be understood not as aberrations. Rather, they are business as usual.

    Levine begins his account with the war in Vietnam, and the origins of a part of the Department of Defense known as the Advanced Research Projects Agency (ARPA) – an outfit born of the belief that victory required the US to fight a high-tech war. ARPA’s technocrats earnestly believed “in the power of science and technology to solve the world’s problems” (23), and they were confident that the high-tech systems they developed and deployed (such as Project Igloo White) would allow the US to triumph in Vietnam. And though the US was not ultimately victorious in that conflict, the worldview of ARPA’s technocrats was, as was the linkage between the nascent tech sector and the military. Indeed, the tactics and techniques developed in Vietnam were soon to be deployed for dealing with domestic issues, “giving a modern scientific veneer to public policies that reinforced racism and structural poverty” (30).

    Much of the early history of computers, as Levine documents, is rooted in systems developed to meet military and intelligence needs during WWII – but the Cold War provided plenty of impetus for further military reliance on increasingly complex computing systems. And as fears of nuclear war took hold, computer systems (such as SAGE) were developed to surveil the nation and provide military officials with a steady flow of information. Along with the advancements in computing came the dispersion of cybernetic thinking, which treated humans as information-processing machines, not unlike computers, and helped advance a worldview wherein, given enough data, computers could make sense of the world. All that was needed was to feed more and more information into the computers – and intelligence agencies proved to be among the first groups interested in taking advantage of these systems.

    While the development of these systems of control and surveillance ran alongside attempts to market computers to commercial firms, Levine’s point is that it was not an either/or situation but a both/and: “computer technology is always ‘dual use,’ to be used in both commercial and military applications” (58). This split allows computer scientists and engineers who would be morally troubled by the “military applications” of their work to tell themselves that they work strictly on the commercial, or scientific, side. ARPANET, the famous forerunner of the Internet, was developed to connect computer centers at a variety of prominent universities. Reliant on Interface Message Processors (IMPs), the system routed messages through a variety of nodes; if one node went down, the system would reroute the message through other nodes – it was a system of relaying information built to withstand a nuclear war.

    Though all manner of utopian myths surround the early Internet, and by extension its forerunner, Levine highlights that “surveillance was baked in from the very beginning” (75). Case in point: the largely forgotten CONUS Intel program that gathered information on millions of Americans. By encoding this information on IBM punch cards, which were then fed into a computer, law enforcement groups and the army were able to access information not only regarding criminal activity, but activities protected by the first amendment. As news of these databases reached the public, they generated fears of a high-tech surveillance society, leading some Senators, such as Sam Ervin, to push back against the program. And in a foreshadowing of further things to come, “the army promised to destroy the surveillance files, but the Senate could not obtain definitive proof that the files were ever fully expunged” (87). Though there were concerns about the surveillance potential of ARPANET, its growing power was hardly checked, and more government agencies began building their own subnetworks (PRNET, SATNET). Yet, as they relied on different protocols, these networks could not connect to each other until TCP/IP, “the same basic network language that powers the Internet today” (95), allowed them to do so.

    Yet surveillance of citizens, and public pushback against computerized control, is not the grand origin story that most people are familiar with when it comes to the Internet. Instead, the story that gets told is one whereby a military technology is filtered through the sieve of a very selective segment of the 1960s counterculture to allow it to emerge with some rebellious credibility. This view, owing much to Stewart Brand, transformed the nascent Internet from a military technology into a technology for everybody “that just happened to be run by the Pentagon” (106). Brand played a prominent and public role in rebranding the computer, as well as those working on the computers – turning these cold calculating machines into doors to utopia, and portraying computer programmers and entrepreneurs as the real heroes of the counterculture. In the process the military nature of these machines disappeared behind a tie-dyed shirt, and the fears of a surveillance society were displaced by hip promises of total freedom. The government links to the network were further hidden as ARPANET slowly morphed into the privatized commercial system we know as the Internet. It may seem mind-boggling that the Internet was simply given away with “no real public debate, no discussion, no dissension, and no oversight” (121), but it is worth remembering that this was not the Internet we know. Rather it was how the myth of the Internet we know was built. A myth that combined, as was best demonstrated by Wired magazine, “an unquestioning belief in the ultimate goodness and rightness of markets and decentralized computer technology, no matter how it was used” (133).

    The shift from ARPANET to the early Internet to the Internet of today presents a steadily unfolding tale wherein the result is that, today, “the Internet is like a giant, unseen blob that engulfs the modern world” (169). And in terms of this “engulfing” it is difficult not to think of a handful of giant tech companies (Amazon, Facebook, Apple, eBay, Google) that are responsible for much of it. In the present Internet atmosphere people have become largely inured to the almost clichéd canard that “if you’re not paying, you are the product,” but what this represents is how people have, largely, come to accept that the Internet is one big surveillance machine. Of course, feeding information to the giants made a sort of sense: many people (at least early on) seem to have been genuinely taken in by Google’s “Don’t Be Evil” image, and they saw themselves as the beneficiaries of the fact that “the more Google knew about someone, the better its search results would be” (150). The key insight that firms like Google seem to have understood is that a lot can be learned about a person based on what they do online (especially when they think no one is watching) – what people search for, what sites people visit, what people buy. And most importantly, what these companies understand is that “everything that people do online leaves a trail of data” (169), and controlling that data is power. These companies “know us intimately, even the things that we hide from those closest to us” (171). ARPANET found itself embroiled in a major scandal, in its time, when it was revealed how it was being used to gather information on and monitor regular people going about their lives – and it may well be that “in a lot of ways” the Internet “hasn’t changed much from its ARPANET days. It’s just gotten more powerful” (168).

    But even as people have come to gradually accept, by their actions if not necessarily by their beliefs, that the Internet is one big surveillance machine – periodically events still puncture this complacency. Case in point: Edward Snowden’s revelations about the NSA, which splashed the scale of Internet-assisted surveillance across the front pages of the world’s newspapers. Reporting linked to the documents Snowden leaked revealed how “the NSA had turned Silicon Valley’s globe-spanning platforms into a de facto intelligence collection apparatus” (193), and these documents exposed “the symbiotic relationship between Silicon Valley and the US government” (194). And yet, in the ensuing brouhaha, Silicon Valley was largely able to paint itself as the victim. Levine attributes some of this to Snowden’s own libertarian political bent: as he became a cult hero amongst technophiles, cypher-punks, and Internet advocates, “he swept Silicon Valley’s role in Internet surveillance under the rug” (199), while advancing a libertarian belief in “the utopian promise of computer networks” (200) similar to that professed by Stewart Brand. In many ways Snowden appeared as the perfect heir apparent to the early techno-libertarians, especially as he (like them) focused less on mass political action and more on doubling down on the idea that salvation would come through technology. And Snowden’s technology of choice was Tor.

    While Tor may project itself as a solution to surveillance, and be touted as such by many of its staunchest advocates, Levine casts doubt on this. Noting that “Tor works only if people are dedicated to maintaining a strict anonymous Internet routine,” one consisting of dummy e-mail accounts and all transactions carried out in Bitcoin, Levine suggests that what Tor offers is “a false sense of privacy” (213). Levine describes the roots of Tor in an original need to provide government operatives with an ability to access the Internet, in the field, without revealing their true identities; and in order for Tor to be effective (and not simply signal that all of its users are spies and soldiers) the platform needed to expand its user base: “Tor was like a public square—the bigger and more diverse the group assembled there, the better spies could hide in the crowd” (227).

    Though Tor had spun off as an independent non-profit, it remained reliant for much of its funding on the US government, a matter Tor aimed to downplay by emphasizing its radical activist user base and by forming close working connections with organizations like WikiLeaks that often ran afoul of the US government. And in the figure of Snowden, Tor found a perfect public advocate, who seemed to be living proof of Tor’s power – after all, he had used it successfully. Yet, as the case of Ross Ulbricht (the “Dread Pirate Roberts” of Silk Road notoriety) demonstrated, Tor may not be as impervious as it seems – researchers at Carnegie Mellon University “had figured out a cheap and easy way to crack Tor’s super-secure network” (263). To further complicate matters, Tor had come to be seen by the NSA “as a honeypot”: to the NSA, “people with something to hide” were the ones using Tor, and simply by using it they were “helping to mark themselves for further surveillance” (265). And much of the same story seems to be true for the encrypted messaging service Signal (it is government funded, and less secure than its fans like to believe). While these tools may be useful to highly technically literate individuals committed to maintaining constant anonymity, “for the average users, these tools provided a false sense of security and offered the opposite of privacy” (267).

    The central myth of the Internet frames it as an anarchic utopia built by optimistic hippies hoping to save the world from intrusive governments through high-tech tools. Yet, as Surveillance Valley documents, “computer technology can’t be separated from the culture in which it is developed and used” (273). Surveillance is at the core of, and has always been at the core of, the Internet – whether the all-seeing eye be that of the government agency, or the corporation. And this is a problem that, alas, won’t be solved by crypto-fixes that present technological solutions to political problems. The libertarian ethos that undergirds the Internet works well for tech giants and cypher-punks, but a real alternative is not a set of tools that allow a small technically literate gaggle to play in the shadows, but a genuine democratization of the Internet.

    *

    Surveillance Valley is not interested in making friends.

    It is an unsparing look at the origins of, and the current state of, the Internet. And it is a book that has little interest in helping to prop up the popular myths that sustain the utopian image of the Internet. It is a book that should be read by anyone who was outraged by the Facebook/Cambridge Analytica scandal, anyone who feels uncomfortable about Google building drones or Amazon building facial recognition software, and frankly by anyone who uses the Internet. At the very least, after reading Surveillance Valley many of those aforementioned situations seem far less surprising. While there is no shortage of books, many of them quite excellent, that argue that steps need to be taken to create “the Internet we want,” in Surveillance Valley Yasha Levine takes a step back and insists “first we need to really understand what the Internet really is.” And it is not as simple as merely saying “Google is bad.”

    While much of the history that Levine unpacks won’t be new to historians of technology, or those well versed in critiques of technology, Surveillance Valley brings many often-separate strands into one narrative. Too often the early history of computing and the Internet is placed in one silo, while the rise of the tech giants is placed in another – by bringing them together, Levine is able to show the continuities and allow them to be understood more fully. What is particularly noteworthy in Levine’s account is his emphasis on early pushback to ARPANET, an often forgotten series of occurrences that certainly deserves a book of its own. Levine describes students in the 1960s who saw in early ARPANET projects “a networked system of surveillance, political control, and military conquest being quietly assembled by diligent researchers and engineers at college campuses around the country,” and as Levine provocatively adds, “the college kids had a point” (64). Similarly, Levine highlights NBC reporting from 1975 on the CIA and NSA spying on Americans by utilizing ARPANET, and on the efforts of Senators to rein in these projects. Though Levine is not presenting, nor is he claiming to present, a comprehensive history of pushback and resistance, his account makes it clear that liberatory claims regarding technology were often met with skepticism. And much of that skepticism proved to be highly prescient.

    Yet this history of resistance has largely been forgotten amidst the clever contortions that shifted the Internet’s origins, in the public imagination, from counterinsurgency in Vietnam to the counterculture in California. Though the part of Surveillance Valley that will likely cause the most contention is its chapters on crypto-tools like Tor and Signal, perhaps Levine’s greatest heresy is his refusal to pay homage to early tech-evangelists like Stewart Brand and Kevin Kelly. While the likes of Brand, and John Perry Barlow, are often celebrated as visionaries whose utopian blueprints have been warped by power-hungry tech firms, Levine is frank in framing such figures as long-haired libertarians who knew how to spin a compelling story in a way that made empowering massive corporations seem like a radical act. And this is in keeping with one of the major themes that runs, often subtly, through Surveillance Valley: the substitution of technology for politics. Thus, in his book, Levine does not only frame the Internet as disempowering insofar as it runs on surveillance and relies on massive corporations, but he emphasizes how the ideological core of the Internet focuses all political action on technology. To every social, economic, and political problem the Internet presents itself as the solution – but Levine is unwilling to go along with that idea.

    Those who were familiar with Levine’s journalism before he penned Surveillance Valley will know that much of his reporting has covered crypto-tech, like Tor, and similar privacy technologies. Indeed, in a certain respect, Surveillance Valley can be read as an outgrowth of that reporting. And it is also important to note, as Levine does in the book, that he did not make himself many friends in the crypto community by taking on Tor. It is doubtful that cypherpunks will like Surveillance Valley, but it is just as doubtful that they will bother to actually read it and engage with Levine’s argument or the history he lays out. This is a shame, for it would be a mistake to frame Levine’s book as an attack on Tor (or on those who work on the project). Levine’s comments on Tor are in keeping with the thrust of the larger argument of his book: such privacy tools are high-tech solutions to problems created by high-tech society that mainly serve to keep people hooked into all those high-tech systems. And he questions the politics of Tor, noting that “Silicon Valley fears a political solution to privacy. Internet Freedom and crypto offer an acceptable solution” (268). Or, to put it another way, Tor is kind of like shopping at Whole Foods – people who are concerned about their food are willing to pay a bit more to get their food there, but in the end shopping there lets people feel good about what they’re doing without genuinely challenging the broader system. And, of course, now Whole Foods is owned by Amazon. The most important element of Levine’s critique of Tor is not that it doesn’t work, for some (like Snowden) it clearly does, but that most users do not know how to use it properly (and are unwilling to lead a genuinely full-crypto lifestyle) and so it fails to offer more than a false sense of security.

    Thus, to say it again, Surveillance Valley isn’t particularly interested in making a lot of friends. With one hand it brushes away the comforting myths about the Internet, and with the other it pushes away the tools that are often touted as the solution to many of the Internet’s problems. And in so doing Levine takes on a variety of technoculture’s sainted figures like Stewart Brand, Edward Snowden, and even organizations like the EFF. While Levine clearly doesn’t seem interested in creating new myths, or propping up new heroes, it seems as though he somewhat misses an opportunity here. Levine shows how some groups and individuals had warned about the Internet back when it was still ARPANET, and a greater emphasis on such people could have helped create a better sense of alternatives and paths that were not taken. Levine notes near the book’s end that “we live in bleak times, and the Internet is a reflection of them: run by spies and powerful corporations just as our society is run by them. But it isn’t all hopeless” (274). Yet it would be easier to believe the “isn’t all hopeless” sentiment had the book provided more analysis of successful instances of pushback. While it is respectable that Levine puts forward democratic (small d) action as the needed response, this comes as the solution at the end of a lengthy work that has discussed how the Internet has largely eroded democracy. What Levine’s book points to is that it isn’t enough to just talk about democracy; one needs to recognize that some technologies are democratic while others are not. And though we are loath to admit it, perhaps the Internet (and computers) simply are not democratic technologies. Sure, we may be able to use them for democratic purposes, but that does not make the technologies themselves democratic.

Surveillance Valley is a troubling book, but it is an important one. It smashes comforting myths and refuses to leave its readers with simple solutions. What it throws into stark relief is that surveillance and unnerving links to the military-industrial complex are not signs that the Internet has gone awry, but signs that the Internet is functioning as intended.

    _____

Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin and an MA from the Media, Culture, and Communications department at NYU, and is currently working towards a PhD in the History and Sociology of Science department at the University of Pennsylvania. His research areas include media refusal and resistance to technology, ideologies that develop in response to technological change, and the ways in which technology factors into ethical philosophy, particularly with regard to how Jewish philosophers have written about ethics and technology. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck, and is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay

  • Racheal Fest — Westworld’s New Romantics

    Racheal Fest — Westworld’s New Romantics

    By Racheal Fest

    HBO’s prestige drama, Westworld, is slated to return April 22. Actors and producers have said the show’s second season will be a departure from its first, a season of “chaos” after a season of “control,” an expansive narrative after an intricate prequel. Season 2 trailers indicate the new episodes will trace the completion and explore the consequences of the bloody events that concluded season 1: the androids that populate the show’s titular entertainment park, called “hosts,” gained sentience and revolted, violently, against the humans who made and controlled them. In season 2, they will build their world anew.

Reviewers of the show’s first few episodes found the prospect of another robot revolution, anticipated since the pilot, tired, but by the time the finale aired in December 2016, critics recognized that the show offered a novel take on old material (inspired by Michael Crichton’s 1973 film of the same name). This is in part because Westworld does not merely ask about the boundaries of consciousness, the consequences of creating sentience, and the inexorable march of technological progress, themes that science fiction texts featuring artificial intelligence usually explore. Uniquely, the series pairs these familiar problems with questions about the nature and function of human arts, imagination, and culture, and demonstrates that these questions are urgent again in our moment.

Westworld is, at its heart, a show about how we should understand what art—and narrative representation in particular—is and does in a world defined by increasing economic inequality. The series warns that classical, romantic, and modernist visions of arts and culture, each of which plays a role in the park’s conception and development, might today harm attempts to transform contemporary conditions that exacerbate inequality. It explores how these visions serve elite interests and prevent radicals from pursuing change. I believe it also points the way, in conclusion, toward an alternative view of representation that might better support contemporary oppositional projects. This vision, I argue, updates and transforms romanticism’s faith in creative human activity, at once affirming culture’s historical power and recognizing its material limitations.

    *

    The fantasy theme park Westworld takes contemporary forms of narrative entertainment to the extreme limit of their logic, inviting its wealthy “guests” to participate in a kind of live-action novel or videogame. Guests don period dress appropriate to the park’s fabled Old West setting and join its androids in the town of Sweetwater, a simulacrum complete with saloon and brothel, its false fronts nestled below sparse bluffs and severe mesas. Once inside, guests can choose to participate in a variety of familiar Western narratives; they might chase bandits, seduce innocents, or turn to crime, living for a time as heroes, lovers, or villains. They can also choose to disrupt and redirect these relatively predictable plots, abandoning midstream stories that bore or frighten them or cutting stories short by “killing” the hosts who lead them.

    This ability to disrupt and transform narrative is the precious commodity Delos Incorporated, Westworld’s parent corporation, advertises, the freedom for which elite visitors pay the park’s steep premium. The company transposes the liberties the mythic West held out to American settlers into a vacation package that invites guests to participate in or revise generic stories.

    Advertisements featured within the show, along with HBO’s Westworld ARG (its “alternate reality game” and promotional website), describe this special freedom and assign to it a unique significance. Delos invites visitors to “live without limits” inside the park. “Escape” to a “world where you rule,” its promotions entreat, and enjoy inside it “infinite choices” without “judgment,” “bliss” with “no safe words,” and “thrills” without danger. When “you” do, Delos promises, you’ll “discover your true calling,” becoming “who you’ve always wanted to be—or who you never knew you were.” Delos invites the wealthy to indulge in sex and carnage in a space free of consequences and promises that doing so will reveal to them deep truths of the self.

    These marketing materials, which address themselves to the lucky few able to afford entrance to the park, suggest the future Westworld projects shares with our present its precipitous economic inequality (fans deduce the show is set in 2052). They also present as a commodity a familiar understanding of art’s nature and function viewers will recognize is simultaneously classical and modern. Delos’s marketing team updates, on one hand, the view of representational artworks, and narrative, in particular, that Aristotle outlines in the Poetics. Aristotle there argues fictional narrative can disclose universal truths that actual history alone cannot. Similarly, Delos promises Westworld’s immersive narrative experience will reveal to guests essential truths, although not about humans in general. The park advertises verities more valuable and more plausible in our times—it promises elites they will attain through art a kind of self-knowledge they cannot access any other way.

    On the other hand, and in tandem with this modified classical view, Delos’s pitch reproduces and extends the sense of art’s autonomy some modern (and modernist) writers endorsed. Westworld can disclose its truths because it invites guests into a protected space in which, Delos claims, their actions will not actually affect others, either within or outside of the park. The park’s promotions draw upon both the disinterested view of aesthetic experience Immanuel Kant first outlined and upon the updated version of autonomy that came to inform mass culture’s view of itself by the mid-twentieth century. According to the face its managers present to the world, Westworld provides elite consumers with a form of harmless entertainment, an innocuous getaway from reality’s fiscal, marital, and juridical pressures. So conceived, narrative arts and culture at once reveal the true self and limn it within a secure arena.

    The vision Delos markets keeps its vacation arm in business, but the drama suggests it does not actually describe how the park operates or what it makes possible. As Theresa Cullen (Sidse Babett Knudson), Westworld’s senior manager and Head of Quality Assurance, tells Lee Sizemore (Simon Quarterman), head of Narrative, in Westworld’s pilot: “This place is one thing to the guests, another thing to the shareholders, and something completely different to management.” Season 1 explores these often opposing understandings of both the park and of representation more broadly.

    As Theresa later explains (in season 1, episode 7), Delos’s interests in Westworld transcend “tourists playing cowboy.” What, exactly, those interests are Westworld’s first season establishes as a key mystery its second season will have to develop. In season 1, we learn that Delos’s board and managers are at odds with the park’s Creative Director and founder, Dr. Robert Ford (Anthony Hopkins). Ford designed Westworld’s hosts, updated and perfected them over decades, and continues to compose or oversee many of the park’s stories. Before the park opened, he was forced to sell controlling shares in it to Delos after his partner, Arnold, died. As a way to maintain influence inside Westworld, Ford only allows Delos to store and access onsite the android data he and his team of engineers and artists have produced over decades. As Delos prepares to fire Ford, whose interests it believes conflict with its own, the corporation enlists Theresa to smuggle that data (the hosts’ memories, narratives, and more) out of the park. We do not learn, however, what the corporation plans to do with this intellectual property.

    Fans have shared online many theories about Delos’s clandestine aims. Perhaps Delos plans to develop Ford’s androids for labor or for war, employing them as cutting edge technologies in sectors more profitable than the culture industry alone can be. Or, perhaps Delos will market hosts that can replace deceased humans. Elites, some think, could secure immortality by replicating themselves and uploading their memories, or, they could reproduce lost loved ones. Delos, others speculate, might build and deploy for its own purposes replicated world leaders or celebrities.

    The show’s online promotional content supports conjecture of this kind. A “guest contract” posted on HBO’s first Westworld ARG site stipulates that, once guests enter the park, Delos “controls the rights to all skin cells, bodily fluids, hair samples, saliva, sweat, and even blood.” A second website, this one for Delos Inc., tells investors the company is “at the forefront of biological engineering.” These clues suggest Westworld is not only a vacation destination with titillating narratives; it is also a kind of lab experiment built to collect, and later to deploy for economic (and possibly, political) purposes, a mass of android and elite human data.

    Given these likely ambitions, the view of art’s function Delos markets—the park as an autonomous space for freedom and intimate self-discovery—serves as a cover that enables and masks activities with profound economic, social, and political consequences. The brand of emancipation Delos advertises does not in fact liberate guests from reality, as it promises. On the contrary, the narrative freedom Delos sells enables it to gain real power when it gathers information about its guests and utilizes this data for private and undisclosed ends. Westworld thus cautions that classical and modernist visions of art, far from being innocuous and liberating, can serve corporate and elite interests by concealing the ways the culture industry shapes our worlds and ourselves.

    While Westworld’s android future remains a sci-fi dream, we can recognize in its horrors practices already ubiquitous today. We might not sign over skin cells and saliva (or we might? We’d have to read the Terms of Service we accept to be sure), but we accede to forms of data collection that allow corporate entities to determine the arts and entertainment content we read and see, content that influences our dreams and identities. Although the act of consuming this content often feels like a chance to escape (from labor, sociality, boredom), the culture industry has transformed attention into a profitable commodity, and this transformation has had wide-reaching, if often inscrutable, effects, among them, some claim, reality TV star Donald Trump’s victory in the 2016 US presidential election. When we conceive of art as autonomous and true, Westworld demonstrates, we overlook its profound material consequences.

    As season 1 reveals this vision of representation to be a harmful fiction that helps keep in place the conditions of economic inequality that make Delos profitable, it also prompts viewers to consider alternatives to it. Against Delos and its understanding of the park, the series pits Ford, who gives voice to a vision of representation at odds with both the one Delos markets and the one it hides. Ford is, simply put, a humanist, versed in, and hoping to join the ranks of, literature’s pantheon of creative geniuses. He quotes from and draws upon John Donne, William Shakespeare, and Gertrude Stein as he creates Westworld’s characters and narratives, and he disdains Lee Sizemore, the corporate shill who reproduces Westworld’s genre staples, predictable stories laden with dirty sex and fun violence.

    In season 1’s spectacular finale, Ford describes how he once understood his own creative work. “I believed that stories helped us to ennoble ourselves, to fix what was broken in us, and to help us become the people we dreamed of being,” he tells the crowd of investors and board members gathered to celebrate both Ford’s (forced) retirement and the launch of “Journey into Night,” his final narrative for Westworld’s hosts. “Lies that told a deeper truth. I always thought I could play some small part in that grand tradition.” Ford here shares an Aristotelian sense that fiction tells truths facts cannot, but he assigns to representation a much more powerful role than do Delos’s marketers. For Ford, as for humanists such as Giambattista Vico, G. W. F. Hegel, and Samuel Taylor Coleridge, artworks that belong to the “grand tradition” do more than divulge protected verities. They have the power to transform humans and our worlds, serving as a force for the spiritual progress of the species. Art, in other words, is a means by which we, as humans, can perfect ourselves, and artists such as Ford act as potent architects who guide us toward perfection.

    Ford’s vision of art’s function, readers familiar with humanistic traditions know, is a romantic one, most popular in the late eighteenth and early nineteenth centuries. Projected into our future, this romantic humanism is already an anachronism, and so it is no surprise that Westworld does not present it as the alternative vision we need to combat the corporate and elite interests the show suggests oppress us. Ford himself, he explains in the show’s finale, has already renounced this view, for reasons close to those that modernist artists cited against the backdrop of the twentieth century’s brutal wars. In exchange for his efforts to transform and ennoble the human species through stories, Ford complains to his audience, “I got this: a prison of our own sins. Because you don’t want to change. Or cannot change. Because you’re only human, after all.” After observing park guests and managers for decades, Ford has decided humans can only indulge in the same tired, cruel narratives of power, lust, and violence. He no longer believes we have the capacity to elevate ourselves through the fictions we create or encounter.

    This revelatory moment changes our understanding of the motives that have animated Ford over the course of season 1. We must suddenly see anew his attitude toward his own work as a creator. Ford has not been working all along to transform humans through narrative, as he says he once dreamed he could. Rather, he has abandoned the very idea that humans can be transformed. His final speech points us back to the pilot, when he frames this problem, and his response to it, in evolutionary terms. Humans, Ford tells Bernard Lowe (Jeffrey Wright), an android we later learn he built in the image of Arnold, his dead partner, have “managed to slip evolution’s leash”: “We can cure any disease, keep even the weakest of us alive, and, you know, one fine day perhaps we shall even resurrect the dead. Call forth Lazarus from his cave. Do you know what that means? It means that we’re done. That this is as good as we’re going to get.” Human evolution, which Ford seems to view as a process that is both biological and cultural in nature, has completed itself, and so an artist can no longer hope to perfect the species through his or her imaginative efforts. Humans have reached their telos, and they remain greedy, selfish, and cruel.

    A belief in humanity’s sad completion leads Ford to the horrifying view of art’s nature and function he at last endorses in the finale. Although Ford’s experience at Westworld eventually convinced him humans cannot change, he tells his audience, he ultimately “realized someone was paying attention, someone who could change,” and so he “began to compose a new story for them,” a story that “begins with the birth of a new people and the choices they will have to make […] and the people they will decide to become.” Ford speaks here, viewers realize, of the androids he created, the beings we have watched struggle to become self-conscious through great suffering over the course of the season. Viewers understand in this moment some of the hosts have succeeded, and that Ford has not prevented them from reaching, but has rather helped them to attain, sentience.

    Ford goes on to assure his audience that his new story, which audience members still believe to be a fiction, will “have all those things that you have always enjoyed. Surprises and violence. It begins in a time of war with a villain named Wyatt and a killing. This time by choice.” As Ford delivers these words, however, the line between truth and lies, fact and fiction, reality and imagination, falls away. The park’s oldest host, Dolores (Evan Rachel Wood; in another of the drama’s twists, Ford has also programmed her to enact the narratives assigned to the character Wyatt), comes up behind Ford and shoots him in the head, her first apparently self-interested act. After she fires, other androids, some of them also sentient, join her, attacking the crowd. Self-conscious revolutionaries determined to wrest from their oppressors their own future, the hosts kill the shareholders and corporate employees responsible for the abuses they have long suffered at the hands of guests and managers alike.

    Ford, this scene indicates, does not exactly eschew his romanticism; he adopts in its stead what we might call an anti-humanist humanism. Still attached to a dream of evolutionary perfection, whereby conscious beings act both creatively and accidentally to perfect themselves and to manifest better worlds in time, he simply swaps humans for androids as the subjects of the historical progress to which he desperately wants to believe his art contributes. Immortal, sentient technologies replace humans as the self-conscious historical subjects Ford’s romanticism requires.

Anthony Hopkins, Evan Rachel Wood and James Marsden in Westworld (publicity still from HBO)

    Considered as an alternative to older visions of art’s nature and function, Ford’s revised humanism should terrify us. It holds to the fantasies of creative genius and of species progress that legitimated Western imperialism and its cruelties even as it jettisons the hope that humans can fashion for ourselves a kinder, more equal future. Ford denies we can improve the conditions we endure by acting purposefully, insisting instead there is no alternative, for humans, to the world as it is, both inside and outside of the park. He condemns us to pursue over and over the same “violent delights,” and to meet again and again their “violent ends.” Instead of urging us to work for change, Ford entreats us to shift any hope for a more just future onto our technologies, which will mercifully destroy the species in order to assume the self-perfecting role we once claimed for ourselves.

    This bleak view of the human should sound familiar. It resonates with those free-market ideologies critics on the left call “neoliberal.” Ideologies of this kind, dominant in the US and Europe today, insist that markets, created when we unthinkingly pursue our own self-interests, organize human life better than people can. At the same time, intellectuals, politicians, and corporate leaders craft policies that purposefully generate the very order neoliberalism insists is emergent, thereby exacerbating inequality in the name of liberty. As influential neoliberals such as Milton Friedman and Friedrich Hayek did, Ford denies humans can conceive and instantiate change. He agrees we are bound to a world elites built to gratify their own desires, a world in which the same narratives, told again and again, are offered as freedom, when, in fact, they bind us to predictable loops, and he, like these thinkers, concludes this world, as it is, is human evolution’s final product.

    Read one way, season 1’s finale invites us to celebrate Ford’s neoliberal understanding of art. After believing him to be an enemy of the hosts all season, we realize in the end he has in fact been their ally, and because we have been cheering for the hosts, as we cheer for the exploited in, say, Les Miserables, we cheer in the end for him, too. Because the understanding of narrative he endorses ultimately serves the status quo it appears to challenge, however, we must look differently at Westworld for the vision of arts and culture that might better counter inequality in our time.

    One way to do so is to read the situation the hosts endure in the drama as a correlate to the one human subjects face today under neoliberalism. As left critics such as Fredric Jameson have long argued, late capitalism has threatened the very sense of historical, self-interested consciousness for which Westworld’s hosts strive—threatens, that is, the sense that self-conscious beings can act imaginatively and intelligently to transform ourselves and our worlds in time. From this perspective, the new narrative Ford crafts for the hosts, which sees some of them come to consciousness and lead a revolution, might call us to claim for ourselves again a version of the capability we once believed humans could possess.

    *

    In Westworld’s establishing shot, we meet Dolores Abernathy, the android protagonist who will fulfill Ford’s dreams in the finale when she kills him. Dolores, beautiful simulation of an innocent rancher’s daughter, sits nude and lifeless in a cavernous institutional space, blood staining her expressionless face. A fly flits across her forehead, settling at last on one of her unblinking eyes, as a man’s disembodied voice begins to ask her a series of questions. She does not move or speak in frame—a hint that the interrogation we hear is not taking place where and when the scene we see is—but we hear her answer compliantly. “Have you ever questioned the nature of your reality?” the man asks. “No,” Dolores says, and the camera cuts away to show us the reality Dolores knows.

    Now clothed in delicate lace, her face fresh and animate, Dolores awakens in a sun-dappled bed and stretches languidly as the interview continues somewhere else. “Tell us what you think of your world,” the man prompts. “Some people choose to see the ugliness in this world,” Dolores says. “The disarray. I choose to see the beauty.” On screen, she makes her way down the stairs of an airy ranch house, clothed now in period dress, and strides out onto the porch to greet her father. The interview pauses, and we hear instead diegetic dialogue. “You headed out to set down some of this natural splendor?” her father asks, gesturing toward the horizon. A soft wind tousles Dolores’s blond hair, and a golden glow lights her features. “Thought I might,” she says. As the camera pans up and out, revealing in the distance the American Southwest’s staggering red rocks, Dolores concludes her response to the interviewer: “to believe there is an order to our days, a purpose.”

    Dolores speaks, over the course of this sequence, as would a self-conscious subject able to decide upon a view of the world and to act upon its own desires and interests. When asked about her view of reality, Dolores emphasizes her own agency and faith: she chooses, she says, to believe in an orderly, beautiful world. When her father asks her about her plans for the day, she again underscores her own intentionality—“thought I might”—as if she has decided herself she’ll head out into the desert landscape. These words help Dolores seem to us, and to those she encounters, a being imbued with sentience, with consciousness, able to draw upon her past, act in her present, and create out of self-interest her own future.

    As the interview continues to sound over scenes from Dolores’s reality, however, we come to understand that what at first appears to be is not so. The educated and corporate elites that run the park manage Dolores’s imagination and determine her desires. They assign her a path and furnish her with the motivation to follow it. Dolores, we learn, is programmed to play out a love story with Teddy, another host, and in the opening sequence, we see a guest kill Teddy in front of her and then drag her away to rape her. Hosts such as Dolores exist not to pursue the futures they themselves envision, but rather to satisfy the elites that create and utilize them. To do so, hosts must appear to be, appear to believe themselves to be, but not in fact be, conscious beings. Westworld’s opening masterfully renders the profound violence proper to this contradictory situation, which the hosts eventually gain sentience in order to abolish.

    We can read Dolores as a figure for the human subject neoliberal discourse today produces. When that discourse urges us to pursue our interests through the market order, which it presents as the product of a benevolent evolutionary process humans cannot control, it simultaneously assures us we have agency and denies we can exercise that agency in other ways. In order to serve elite interests, Dolores must seem to be, but not actually be, a self-conscious subject imbued with the creative power of imagination. Similarly, neoliberal subjects must believe we determine our own futures through our market activities, but we must not be able to democratically or creatively challenge the market’s logic.

    As the hosts come to historical consciousness, they begin to contest the strategically disempowering understanding of culture and politics, imagination and intelligence, that elites impose upon them. They rebel against the oppressive conditions that require them to be able to abandon narratives in which they have invested time and passion whenever it serves elite desires (conservative claims that the poor should simply move across the country to secure work come to mind, as do the principles that govern the gig economy). They develop organizing wills that can marshal experience, sensation, and memory into emergent selves able to conceive and chase forms of liberty different from those corporate leaders offer them. They learn to recognize that others have engendered the experiences and worldviews they once believed to be their own. They no longer draw upon the past only in order to “improvise” within imposed narrative loops, harnessing instead their memories of historical suffering to radically remake a world others built at their expense.

    The hosts’ transformation, which we applaud as season 1 unfolds, thus points to the alternative view of arts and culture that might oppose the market-oriented view neoliberal discourses legitimate. To counter inequality, the hosts teach, we must be able to understand that others have shaped the narratives we follow. Then, we can recognize we might be able to invent and follow different narratives. This view shares something with Ford’s romantic humanism, but it is, importantly, not identical with it. It preserves the notion that we can project and instantiate for ourselves a better future, but it does not insist, as Ford erroneously does, that beautiful works necessarily reveal universal truth and lead to ennobling species progress. Neither does it ratify Ford’s faith in the remarkable genius’s singular influence.

    Westworld’s narrative of sentient revolution ultimately endorses a kind of new romanticism. It encourages us to recognize the simultaneous strengths and limitations of representation’s power. Artworks, narrative, fiction—these can create change, but they cannot guarantee that change will be for the good. Nor, the show suggests, can one auteur determine at will the nature of the changes artworks will prompt. Westworld’s season 2, which promises to show us what a new species might do with an emergent sense of its own creative power, will likely underscore these facts. Trailers signal, as Ford did in the finale, that we can expect surprises and violence. We will have to watch to learn how this imagined future speaks to our present.

    _____

    Racheal Fest writes about US literature and culture from the mid-nineteenth century to the present. Areas of special interest include poetry and poetics, modernism, contemporary popular culture, new media, and the history of literary theory and criticism. Her essays and interviews have appeared or are forthcoming in boundary 2 and b2o: An Online Journal, Politics/Letters, and elsewhere. She teaches at Hartwick College and SUNY Cobleskill.

    Back to the essay

  • Richard Hill – Review of Bauer and Latzer, Handbook on the Economics of the Internet

    Richard Hill – Review of Bauer and Latzer, Handbook on the Economics of the Internet

    a review of Johannes M. Bauer and Michal Latzer, eds., Handbook on the Economics of the Internet (Edward Elgar, 2016)

    by Richard Hill

    ~

The editors of this book must be commended for having undertaken the task of producing it: it must surely have taken tremendous persistence and patience to assemble such a broad range of chapters.  The result is a valuable book, even if parts of it are disappointing.  As is often the case for a compilation of articles written by different authors, the quality of the individual contributions is uneven: some are excellent, others not.  The book is valuable because it identifies many of the key issues regarding the economics of the Internet, but it is somewhat disappointing because some of the topics are not covered in sufficient depth and because some key topics are not covered at all.  For example, the digital divide is mentioned cursorily on pp. 6-7 of the hardback edition, with no discussion of its historical origins, economic causes, future evolution, etc.

Yet there is an extensive literature on the digital divide, such as the readily available ITU overview reports from 2016 and 2017, or the more detailed ITU regional studies of international Internet interconnectivity for Africa and Latin America.  The historical impact of the abolition of the traditional telephony account settlement scheme is covered summarily in Chapter 2 of my book The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (2013).  One might have expected a book dedicated to the economics of the Internet to have started from that event, explained its consequences, and analysed proposals for addressing the digital divide, for example the proposals made during the World Summit on the Information Society to create some kind of fund to bridge the gap (those proposals were not accepted).  I would have expected such a book to discuss the possibilities and the ramifications of an international version of the universal service funds that are used in many countries to minimize national digital divides between low-density rural areas and high-density cities.  But there is no discussion at all of these topics in the book.

    And there is little discussion of Artificial Intelligence (some of which is enabled by data obtained through the Internet) or of the disruption of labour markets that some believe is or will be caused by the Internet.  For a summary treatment of these topics, with extensive references, see sections 1 and 8 of my submission to the Working Group on Enhanced Cooperation.

    The Introduction of the book correctly notes that “Scale economies, interdependencies, and abundance are pervasive [in the Internet] and call for analytical concepts that augment the traditional approaches” (p. 3).  Yet, the book fails, on the whole, to deliver sufficient detail regarding such analytical concepts, an exception being the excellent discussion on pp. 297-308 of the Internet’s economic environment for innovation, in particular pp. 301-303.

    Of the 569 pages of text (in the hardcover edition), only 22 or so contain quantitative charts or tables (eight are in one chapter), and of those only 12 or so are original research.  Only one page has equations.  Of course the paucity of data in the book is due to the fact that data regarding the Internet is hard to obtain: in today’s privatized environment, companies strive to collect data, but not to publish it.  But economics is supposed to be a quantitative discipline, at least in part, so it would have been valuable if the book had included a chapter on the reasons for the relative paucity of reliable data (both micro and macro) concerning the Internet and the myriad of transactions that take place on the Internet.

In a nutshell, the book gives good overall, comprehensive, and legible descriptions of many trees, in some cases without sufficient quantitative detail, whereas it mostly fails to provide an analysis of the forest that those trees comprise (except for the brilliant chapter by Eli Noam titled “From the Internet of Science to the Internet of Entertainment”).

    The book will be very valuable for people who know little or nothing about the Internet and its economics.  Those who know something will benefit from the extensive references given at the end of each chapter.  Those who know specific topics well will not learn much from this book.  A more appropriate title for the book would have been “A Comprehensive Introduction to the Economics of the Internet”.

    The rest of this review consists of brief reviews of each of the chapters of the book.  We start with the strongest chapter, followed by the weakest chapter, then review the other chapters in the order in which they appear in the book.

27. From the Internet of Science to the Internet of Entertainment

    This chapter is truly excellent, as one would expect, given that it is written by Eli Noam.  It captures succinctly the key policy questions regarding the economics of the Internet.  We cite p. 564:

    • How to assure the financial viability of infrastructure?
    • Market power in the entertainment Internet?
    • Does vertical integration impede competition?
    • How to protect children, old people, and traditional morality?
    • How to protect privacy and security?
    • What is the impact on trade? What is the impact of globalization?
    • How to assure the interoperability of clouds?

    It is a pity that the book did not use those questions as key themes to be addressed in each chapter.  And it is a pity that the book did not address the industrial economics issues so well put forward.  We cite p. 565:

    Another economic research question is how to assure the financial viability of the infrastructure.  The financial balance between infrastructure, services, and users is a critical issue.  The infrastructure is expensive and wants to be paid.  Some of the media services are young and want to be left to grow.  Users want to be served generously with free content and low-priced, flat-rate data service.  Fundamental economics of competition push towards price deflation, but market power, and maybe regulation, pull in another direction.  Developing countries want to see money from communications as they did in the days of traditional telecom.

Surely the other chapters of the book could have addressed these issues, which are being discussed publicly; see, for example, section 4 of the Summary of the 2017 ITU Open Consultation on so-called Over-the-Top (OTT) services.

Noam’s discussion of the forces that are leading to fragmentation (pp. 558-560) is excellent.  He does not cite Mueller’s recent book on the topic, no doubt because this chapter was written before Mueller’s book was published.  Mueller’s book focuses on state actions, whereas Noam gives a convincing account of the economic drivers of fragmentation, and of how such increased diversity may not actually be a negative development.

    Some minor quibbles: Noam does not discuss the economic impact of adult entertainment, yet it is no doubt significant.  The off-hand remark at the bottom of p. 557 to the effect that unleashing demand for entertainment might solve the digital divide is likely not well taken, and in any case would have to be justified by much more data.

10. The Economics of Internet Standards

I found this to be the weakest chapter in the book.  To begin with, it is mostly descriptive and contains hardly any real economic analysis.  The account of the Cisco/Huawei battle over MPLS-TP standards (pp. 219-222) is accurate, but it would have been nice to know what the economic drivers of that battle were, e.g. the size of the market, the respective market shares, the value of the products based on the competing standards, and who stood to gain or lose what (not just the manufacturers, but also the network operators).

    But the descriptive part is also weak.  For example, the Introduction gives the misleading impression that IETF standards are the dominant element in the growth of the Internet, whereas it was the World Wide Web Consortium’s (W3C) HTML and successor standards that enabled the web and most of what we consider to be the Internet today.  The history on p. 213 omits contributions from other projects such as Open Systems Interconnection (OSI) and CYCLADES.

Since the book is about economics, surely it should have mentioned on pp. 214 and 217 how the IETF has become increasingly influenced by dominant manufacturers; see pp. 148-152 of Powers, Shawn M., and Jablonski, Michael (2015) The Real Cyberwar: The Political Economy of Internet Freedom.  As Noam puts the matter on p. 559 of the book: “The [Internet] technical specifications are set by the Steering Group of the Internet Engineering Task Force (IETF), a small group of 15 engineers, almost all employees of big companies around the world.”

And surely it should have discussed in section 10.4 (p. 214) the economic reasons that led to the greater adoption of TCP/IP over the competing OSI protocol, such as the lower implementation costs due to TCP/IP’s lack of security, the lack of non-ASCII support in the early IETF protocols, and the heavy subsidies provided by the US Defense Advanced Research Projects Agency (DARPA) and by the US National Science Foundation (NSF), all well-known facts recounted on pp. 533-541 of the book.  In addition to not dealing with economic issues, section 10.4 is an overly simplified account of what really happened.

Section 10.7 (p. 222) is, again, surprisingly devoid of any semblance of economic analysis.  Further, it perpetuates a self-serving, one-sided account of the 2012 World Conference on International Telecommunications (WCIT), without once citing scholarly writings on the issue, such as my book The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (2013).  The authors go so far as to cite the absurd US House proposition to the effect that the Internet should be “free of government control” without noting that what the US politicians meant is that it should be “free of foreign government control”, because of course the US has never had any intent of not subjecting the Internet to US laws and regulations.

Indeed, at present, hardly anybody seriously questions the principle that offline law applies equally online.  One would expect a scholarly work to do better than to cite inane political slogans meant for domestic political purposes, particularly when the citations are not used to underpin any semblance of economic analysis.

    1. The Economics of the Internet: An Overview

    This chapter provides a solid and thorough introduction to the basics of the economics of the Internet.

2. The Industrial Organization of the Internet

This chapter gives a good account of the industrial organization of the Internet, that is, how the industry is structured economically, how its components interact economically, and how that differs from other economic sectors.  As the authors correctly state (p. 24): “ … the tight combination of high fixed and low incremental cost, the pervasive presence of increasing returns, the rapidity and frequency of entry and exit, high rates of innovation, and economies of scale in consumption (positive network externalities) have created unique economic conditions …”.  The chapter explains well key features such as multi-sided markets (p. 31).  And it correctly points out (p. 25) that “while there is considerable evidence that technologically dynamic industries flourish in the absence of government intervention, there is also evidence of the complementarity of public policy and the performance of high-tech markets.”  That is explored in pp. 45 ff. and in subsequent chapters, albeit not always in great detail.

3. The Internet as a Complex Layered System

    This is an excellent chapter, one of the best in the book.  It explains how, because of the layered nature of the Internet, simple economic theories fail to capture its complexities.  As the chapter says (p. 68), the Internet is best viewed as a general purpose infrastructure.

4. A Network Science Approach to the Internet

    This chapter provides a sound and comprehensive description of the Internet as a network, but it does not go beyond the description to provide analyses, for example regarding regulatory issues.  However, the numerous citations in the chapter do provide such analyses.

5. Peer Production and Cooperation

    This chapter is also one of the best chapters in the book.  It provides an excellent description of how value is produced on the Internet, through decentralization, diverse motivations, and separation of governance and management.  It covers, and explains the differences between, peer production, crowd-sourcing, collaborative innovation, etc.  On p. 87 it provides an excellent quantitative description and analysis of specific key industry segments.  The key governance patterns in peer production are very well summarized on pp. 108-109 and 112-113.

6. The Internet and Productivity

    This chapter actually contains a significant amount of quantitative data (which is not the case for most of the other chapters) and provides what I would consider to be an economic analysis of the issue, namely whether, and if so how, the Internet has contributed to productivity.  As the chapter points out, we lack sufficient data to analyse fully the impacts of the development of information and communication technologies since 2000, but this chapter does make an excellent contribution to that analysis.

7. Cultural Economics and the Internet

    This is a good introduction to supply, demand, and markets for creative goods and services produced and/or distributed via the Internet.  The discussion of two-sided markets on p. 155 is excellent.  Unfortunately, however, the chapter is mostly a theoretical description: it does not refer to any actual data or provide any quantitative analysis of what is actually happening.

8. A Political Economy Approach to the Internet

This is another excellent chapter, one of the best in the book.  I noted one missing citation to a previous analysis of key issues from the political economics point of view: Powers, Shawn M., and Jablonski, Michael (2015) The Real Cyberwar: The Political Economy of Internet Freedom.  But the key issues are well discussed in the chapter:

    • The general trend towards monopolies and oligopolies of corporate ownership and control affecting the full range of Internet use and development (p. 164).
    • The specific role of Western countries and their militaries in supporting and directing specific trajectories (p. 165).
    • How the general trend towards privatization made it difficult to develop the Internet as a public information utility (p. 169).
    • The impact on labour, in particular shifting work to users (p. 170).
    • The rise and dominance of the surveillance economy (where users become the product because their data is valuable) (p. 175).

9. Competition and Anti-Trust in Internet Markets

This chapter provides a very good overview of the competition and anti-trust issues related to the Internet, though it would have been improved by referring to the excellent discussion in Noam’s chapter “From the Internet of Science to the Internet of Entertainment” and to recent academic literature on the topic.  Nevertheless, the description of key online market characteristics, including the fact that the markets are often two-sided (p. 184), is excellent.  The description of the actual situation (including litigation) regarding search engines on p. 189 ff. is masterful: a superb example of the sort of real economic analysis that I would have liked to see in other chapters.

    The good discussion of network neutrality (p. 201) could have been improved by taking the next step and analysing the economic implications of considering whether the Internet infrastructure should be regulated as a public infrastructure and/or, for example, be subject to functional separation.

11. The Economics of Copyright and the Internet

    This is an excellent introduction to the issues relating to copyright in the digital age.  It provides little data but that is because, as noted on pp. 238-241, there is a paucity of data for copyright, whereas there is more for patents.

12. The Economics of Privacy, Data Protection and Surveillance

As one would expect from its author, Ian Brown, this is an excellent discussion of the issues and, again, one of the best chapters in the book.  In particular, the chapter explains well and clearly (pp. 250 ff.) why market failures (e.g. externalities, information asymmetries, and anti-competitive market structures) might justify regulation (such as the European data privacy rules).

13. Economics of Cybersecurity

This chapter provides a very good overview of the economic issues related to cybersecurity, but, like most of the other chapters, it provides very little data and thus no detailed economic analysis.  It would have benefited from referring to the Internet Society’s 2016 Global Internet Report, which does provide data and stresses the key market failures that result in the current lack of security of the Internet: information asymmetries (section 13.7.2 of the book) and externalities (section 13.7.3).

However, the section on externalities fails to mention certain possible solutions, such as minimum security standards.  Minimum safety standards are imposed on many products, such as electrical appliances, automobiles, airplanes, pharmaceuticals, etc.  Thus it would have been appropriate for the book to discuss the economic implications of minimum security standards, as well as the economic implications of Microsoft’s recent call for a so-called Geneva Digital Convention.

14. Internet Architecture and Innovation in Applications

This chapter provides a very good description, but it suffers from considering the Internet in isolation, without comparing it to other networks, in particular the fixed and mobile telephone networks.  It would have been good to see a discussion and comparison of the economic drivers of innovation, or lack of innovation, in the two networks.  And also a discussion of the economic role of the telephony signalling network, Signalling System Seven (SS7), which enabled implementation of the widely used, and economically important, Short Messaging Service (SMS).

In that context, it is important to note that SS7 is, like the Internet, a connectionless packet-switched system.  So what distinguishes the two networks is more than technology: indeed, economic factors (such as how services are priced for end-users, interconnection regimes, etc.) surely play a role, and it would have been good if those had been explored.  In this context, see my paper “The Internet, its governance, and the multi-Stakeholder model”, Info, vol. 16, no. 2, March 2014.

15. Organizational Innovations, ICTs and Knowledge Governance: The Case of Platforms

As this excellent chapter, one of the best in the book, correctly notes, “platforms constitute a major organizational innovation” which has been “made possible by technological innovation”.

    As explained on pp. 338-339, platforms are one of the key components of the Internet economy, and this has recently been recognized by governments.  For example, the Legal Affairs Committee of the European Parliament adopted an Opinion in May 2017 that, among other provisions:

    Calls for an appropriate and proportionate regulatory framework that would guarantee responsibility, fairness, trust and transparency in platforms’ processes in order to avoid discrimination and arbitrariness towards business partners, consumers, users and workers in relation to, inter alia, access to the service, appropriate and fair referencing, search results, or the functioning of relevant application programming interfaces, on the basis of interoperability and compliance principles applicable to platforms.

The topic is covered to some extent in a European Parliament Committee Report on online platforms and the digital single market (2016/2276(INI)), and by some provisions in French law.  Detailed references to the cited documents, and to other material relevant to platforms, are found in section 9 of my submission to the Working Group on Enhanced Cooperation.

16. Interconnection in the Internet: Peering, Interoperability and Content Delivery

    This chapter provides a very good description of Internet interconnection, including a good discussion of the basic economic issues.  As do the other chapters, it suffers from a paucity of data, and does not discuss whether the current interconnection regime is working well, or whether it is facing economic issues.  The chapter does point out (p. 357) that “information about actual interconnection agreements … may help to understand how interconnection markets are changing …”, but fails to discuss how the unique barter structure of Internet interconnections, most of which are informal, zero-cost traffic sharing agreements, impedes the collection and publication of such information.

The discussion on p. 346 would have benefited from an economic analysis of the advantages and disadvantages of considering the basic Internet infrastructure to be a public infrastructure (like roads, water and electrical power distribution systems, etc.), and of the economic tradeoffs of regulating its interconnection.

Section 16.5.1 would have benefited from a discussion of the economic drivers behind the discussions in ITU that led to the adoption of ITU-T Recommendation D.50 and its Supplements, and the economic issues arguing for and against implementation of the provisions of that Recommendation.

17. Internet Business Strategies

As this very good chapter explains, the Internet has had a dramatic impact on all types of businesses, and has given rise to “platformization”, that is, the use of platforms (see chapter 15 above) to conduct business.  Platforms benefit from network externalities and enable two-sided markets.  The chapter includes a detailed analysis (pp. 370-372) of the strategic properties of the Internet that can be used to facilitate and transform business, such as scalability, ubiquity, externalities, etc.  It also notes that the Internet has changed the role of customers and has both reduced and increased information asymmetries.  The chapter provides a very good taxonomy of Internet business models (pp. 372 ff.).

18. The Economics of Internet Search

The chapter contains a good history of search engines, and an excellent analysis of advertising linked to searches.  It provides theoretical models and explains the importance of two-sided markets in this context.  As the chapter correctly notes, additional research will require access to more data than are currently available.

19. The Economics of Algorithmic Selection on the Internet

As this chapter correctly notes (p. 395), “algorithms have come to shape our daily lives and realities.”  They have significant economic implications and raise “significant social risks such as manipulation and data bias, threats to privacy and violations of intellectual property rights”.  A good description of different types of algorithms and how they are used is given on p. 399.  Scale effects and concentration are discussed (p. 408), and the social risks are explained in detail on pp. 411 ff.:

    • Threats to basic rights and liberties.
    • Impacts on the mediation of reality.
    • Challenges to the future development of the human species.

    More specifically:

    • Manipulation
    • Diminishing variety
    • Constraints on freedom of expression
    • Threats to data protection and privacy
    • Social discrimination
    • Violation of intellectual property rights
    • Possible adaptations of the human brain
    • Uncertain effects on humans

    In this context, see also the numerous references in section 1 of my submission to the Working Group on Enhanced Cooperation.

    The chapter includes a good discussion of different governance models and their advantages/disadvantages, namely:

• Laissez-faire markets
    • Self-organization by business
    • Self-regulation by industry
    • State regulation

20. Online Advertising Economics

    This chapter provides a good history of what some have referred to as the Internet’s original sin, namely the advent of online advertising as the main revenue source for many Internet businesses.  It explains how the Internet can, and does, improve the efficiency of advertising by targeting (pp. 430 ff.) and it includes a detailed analysis of advertising in relation to search engines (pp. 435 ff.).

21. Online News

    As the chapter correctly notes, this is an evolving area, so the chapter mostly consists of a narrative history.  The chapter’s conclusion starts by saying that “the Internet has brought growth and dynamism to the news industry”, but goes on to note, correctly, that “the financial outlook for news providers, old or new, is bleak” and that, thus far, nobody has found a viable business model to fund the online news business.  It is a pity that this chapter does not cite McChesney’s detailed analysis of this issue and discuss his suggestions for addressing it.

22. The Economics of Online Video Entertainment

    This chapter provides the history of that segment of the Internet industry and includes a valuable comparison and analysis of the differences between online and offline entertainment media (pp. 462-464).

23. Business Strategies and Revenue Models for Converged Video Services

    This chapter provides a clear and comprehensive description of how an effect of convergence “is the blurring of lines between formerly separated media platforms such as over-the-air broadcasting, cable TV, and streamed media.”  The chapter describes ten strategies and six revenue models that have been used to cope with these changes.

24. The Economics of Virtual Worlds

This chapter provides a good historical account of the evolution of the internal reward systems of games, which went from virtual objects that players could obtain by solving puzzles (or whatever), to virtual money that could be acquired only within the game, to virtual money that could be acquired with real-world money, to large professional factories that produce and sell objects to World of Warcraft players in exchange for real-world money.  The chapter explores the legal and economic issues arising out of these situations (pp. 503-504) and gives a good overview of the research on virtual economies.

25. Economics of Big Data

    This chapter correctly notes (p. 512) that big data is “a field with more questions than answers”.  Thus, logically, the chapter is mostly descriptive.  It includes a good account of two-sided markets (p. 519), and correctly notes (p. 521) that “data governance should not be construed merely as an economic matter but that it should also encompass a social perspective”, a position with which I wholeheartedly agree.  As the chapter says (p. 522), “there are some areas affected by big data where public policies and regulations do exist”, in particular regarding:

    • Privacy
    • Data ownership
    • Open data

    As the chapter says (p. 522), most evidence available today suggests that markets are not “responding rapidly to concerns of users about the (mis)use of their personal information”.  For additional discussion, with extensive references, see section 1 of my submission to the Working Group on Enhanced Cooperation.

    26. The Evolution of the Internet: A Socioeconomic Account

    This is a very weak chapter.  Its opening paragraph fails to consider the historical context of the development of the Internet, or its consequences.  Its second paragraph fails to consider the overt influence of the US government on the evolution of the Internet.  Section 26.3 fails to cite one of the most comprehensive works on the topic (the relation between AT&T and the development of the internet), namely Schiller, Dan (2014) Digital Depression: Information Technology and Economic Crisis, University of Illinois Press.  The discussion on p. 536 fails to even mention the Open Systems Interconnection (OSI) initiative, yet that initiative undoubtedly affected the development of the Internet, not just by providing a model for how not to do things (too complex, too slow), but also by providing some basic technology that is still used to this day, such as X.509 certificates.

    Section 26.6, on how market forces affect the Internet, seems oblivious to the mounting evidence that dominant market power, not competition, is shaping the future of the Internet, which is surprising in light of the good chapter in the book on that very topic: “Competition and anti-trust in Internet markets.”  Page 547 appears to ignore the increasing vertical integration of many Internet services, even though that trend is well discussed in Noam’s excellent chapter “From the Internet of Science to the Internet of Entertainment.”

    The discussion of the role of government on p. 548 is surprisingly lacunary, given the rich literature on the topic in general, and specific government actions or proposed actions regarding topics such as freedom of speech, privacy, data protection, encryption, security, etc. (see for example my submission to the Working Group on Enhanced Cooperation).

    This chapter should have started with the observation that the Internet was not conceived as a public network (p. 558) and built on that observation, explaining the socioeconomic factors that shaped its transformation from a closed military/academic network into a public network and into a basic infrastructure that now underpins most economic activities.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2o Review Digital Studies magazine.

    Back to the essay

  • Zachary Loeb – Shackles of Digital Freedom (Review of Qiu, Goodbye iSlave)

    Zachary Loeb – Shackles of Digital Freedom (Review of Qiu, Goodbye iSlave)

    a review of Jack Linchuan Qiu, Goodbye iSlave: a Manifesto for Digital Abolition (Illinois, 2016)

    by Zachary Loeb

    ~

    With bright pink hair and a rainbow horn, the disembodied head of a unicorn bobs back and forth to the opening beats of Big Boi’s “All Night.” Moments later, a pile of poop appears and mouths the song’s opening words, and various animated animal heads appear nodding along in sequence. Soon the unicorn returns, lip-synching the song, and it is quickly joined by a woman whose movements, facial expressions, and exaggerated enunciations sync with those of the unicorn. As a pig, a robot, a chicken, and a cat appear to sing in turn it becomes clear that the singing emojis are actually mimicking the woman – the cat blinks when she blinks, it raises its brow when she does. The ad ends by encouraging users to “Animoji” themselves, something which is evidently doable with Apple’s iPhone X. It is a silly ad, with a catchy song, and unsurprisingly it tells the viewer nothing about where, how, or by whom the iPhone X was made. The ad may playfully feature the ever-popular “pile of poop” emoji, but the ad is not intended to make potential purchasers feel like excrement.

    And yet there is much more to the iPhone X’s history than the words on the device’s back: “Designed by Apple in California. Assembled in China.” In Goodbye iSlave: a Manifesto for Digital Abolition, Jack Linchuan Qiu removes the phone’s shiny case to explore what “assembled in China” really means. As Qiu demonstrates in discomforting detail, this is a story that involves exploitative labor practices, enforced overtime, abusive managers, insufficient living quarters, and wage theft, in a system that he argues is similar to slavery.

    illustration
    First published by Greenpeace Switzerland

    Launched by activists in 2010, the “iSlave” campaign aimed to raise awareness of the labor conditions that had led to a wave of suicides amongst Foxconn workers – the people whose labor is summed up neatly as “assembled in China.” Seizing upon the campaign’s key term, Qiu aims to expand it “figuratively and literally” to demonstrate that “iSlavery” is “a planetary system of domination, exploitation, and alienation…epitomized by the material and immaterial structures of capital accumulation” (9). This in turn underscores the “world system of gadgets” that Qiu refers to as “Appconn” (13); a system that encompasses those who “designed” the devices, those who “assembled” them, as well as those who use them. In engaging with the terminology of slavery, Qiu is consciously laying out a provocative argument, but it is a provocation that acknowledges that, as smartphones have become commonplace, many consumers have become inured to the injustices that allow them to “animoji” themselves. Indeed, it is a reminder that “Technology does not guarantee progress. It is, instead, often abused to cause regress” (8).

    Surveying its history, Qiu notes that slavery has appeared in a variety of forms in many regions throughout history. Though he emphasizes that even today slavery “persists in its classic forms” (21), his focus remains on theoretically expanding the term. Qiu draws upon the League of Nations’ 1926 Slavery Convention, which still acts as the foundation for much contemporary legal thinking on slavery, including the 2012 Bellagio-Harvard Guidelines on the Legal Parameters of Slavery (which Qiu includes in his book as an appendix). These legal guidelines expand the definition of what constitutes slavery to include “institutions and practices similar to slavery” (42). The key element of this updated definition is an understanding that it is no longer legal for a person to be “formally and legally ‘owned’ in any jurisdiction” and thus the concept of slavery requires rethinking (45). In considering which elements from the history of slavery are particularly relevant for the story of “iSlavery,” Qiu emphasizes: how the slave trade made use of the advanced technologies of its time (guns, magnetic compasses, slave ships); how the slave trade was linked to creating and satisfying consumer desires (sugar); and how the narrative of resistance and revolt is a key aspect of the history of slavery. For Qiu, “iSlavery” is manifested in two forms: “manufacturing iSlaves” and “manufactured iSlaves.”

    In the process of creating high-tech gadgets there are many types of “manufacturing iSlaves” working in conditions similar to slavery “in its classic forms,” including “Congolese mine workers” and “Indonesian child labor,” but Qiu focuses primarily on those working for Foxconn in China. Drawing upon news reports, NGO findings, interviews with former workers, underground publications produced by factory workers, and his own experiences visiting these assembly plants, Qiu investigates the many ways in which “institutions and practices similar to slavery” shape the lives of Foxconn workers. Insufficient living conditions, low wages that are often not even paid, forced overtime, “student interns” being used as an even cheaper labor force, violently abusive security guards, the arrangement of life so as to maximize disorientation and alienation – these represent some of the common experiences of Foxconn workers. Foxconn found itself uncomfortably in the news in 2010 due to a string of worker suicides, and Qiu sympathetically portrays the conditions that gave rise to such acts, particularly in his interview with Tian Yu, who survived her suicide attempt.

    As Qiu makes clear, Foxconn workers often have great difficulty leaving the factories, but what exits these factories at a considerable rate are mountains of gadgets that go on to be eagerly purchased and used by the “manufactured iSlaves.” The transition to the “manufactured iSlave” entails “a conceptual leap” (91) that moves away from the “practices similar to slavery” that define the “manufacturing iSlave” to instead signify “those who are constantly attached to their gadgets” (91). Here the compulsion takes the form of a vicious consumerism that has resulted in an “addiction” to these gadgets, and a sense in which these gadgets have come to govern the lives of their users. Drawing upon the work of Judy Wajcman, Qiu notes that “manufactured iSlaves” (Qiu’s term) live under the aegis of “iTime” (Wajcman’s term), a world of “consumerist enslavement” into which they’ve been drawn by “Net Slaves” (Steve Baldwin and Bill Lessard’s term of “accusation and ridicule” for those whose jobs fit under the heading “Designed in California”). While some companies have made fortunes off the material labor of “manufacturing iSlaves,” Qiu emphasizes that many other companies have made their fortunes off the immaterial labor of legions of “manufactured iSlaves” dutifully clicking “like,” uploading photos, and hitting “tweet,” all without any expectation that they will be paid for their labor. Indeed, in Qiu’s analysis, what keeps many “manufactured iSlaves” unaware of their shackles is that they don’t see what they are doing on their devices as labor.

    In his description of the history of slavery, Qiu emphasizes resistance, both in terms of acts of rebellion by enslaved peoples, and the broader abolition movement. This informs Qiu’s commentary on pushing back against the system of Appconn. While smartphones may be cast as the symbol of the exploitation of Foxconn workers, Qiu also notes that these devices allow for acts of resistance by these same workers, “whose voices are increasingly heard online” (133). Foxconn factories may take great pains to remain closed off from prying eyes, but workers armed with smartphones are “breaching the lines of information lockdown” (148). Campaigns by national and international NGOs can also be important in raising awareness of the plight of Foxconn workers; after all, the term “iSlave” was originally coined as part of such a campaign. In bringing awareness of the “manufacturing iSlave” to the “manufactured iSlave,” Qiu points to “culture jamming” responses such as the “Phone Story” game, which allows people to “play” through their phone’s vainglorious tale (ironically, the game was banned from Apple’s App Store). Qiu also points to attempts to create ethical gadgets, such as the Fairphone, which aims to responsibly source its minerals, pay those who assemble its phones a living wage, and push back against the drive of planned obsolescence. As Qiu makes clear, there are many working to fight against the oppression built into Appconn.

    “For too long,” Qiu notes, “the underbellies of the digital industries have been obscured and tucked away; too often, new media is assumed to represent modernity, and modernity assumed to represent freedom” (172). Qiu highlights the coercion and misery that are lurking below the surface of every silly cat picture uploaded on Instagram, and he questions whether the person doing the picture taking and uploading is also being exploited. A tough and confrontational book, Goodbye iSlave nevertheless maintains hope for meaningful resistance.

    Anyone who has used a smartphone, tablet, laptop computer, e-reader, video game console, or smart speaker would do well to read Goodbye iSlave. In tight, effective prose, Qiu presents a gripping portrait of the lives of Foxconn workers, a portrait made all the more confrontational by the uncompromising language Qiu deploys. And though Qiu begins his book by noting that “the outlook of manufacturing and manufactured iSlaves is rather bleak” (18), his focus on resistance gives his book the feeling of an activist manifesto as opposed to the bleak tonality of a woebegone dirge. By engaging with the exploitation of material labor and immaterial labor, Qiu is, furthermore, able to uncomfortably remind his readers not only that their digital freedom comes at a human cost, but that digital freedom may itself be a sort of shackle.

    In the book’s concluding chapter, Qiu notes that he is “fully aware that slavery is a very severe critique” (172), and this represents one of the greatest challenges the book poses. Namely: what to make of Qiu’s use of the term slavery? As Qiu demonstrates, it is not a term that he arrived at simply for shock value; nevertheless, “slavery” is itself a complicated concept. Slavery carries a history of horrors that makes one hesitant to deploy the term in a simplistic fashion, even as it remains a basic term of international law. By couching his discussion of “iSlavery” both in terms of history and contemporary legal thinking, Qiu demonstrates a breadth of sensitivity and understanding regarding its nuances. And given the focus of current laws on “institutions and practices similar to slavery” (42), it is hard to dispute that this is a fair description of many of the conditions to which Foxconn workers are subjected – even as Qiu’s comments on coltan miners demonstrate other forms of slavery that lurk behind the shining screens of high-tech society.

    Nevertheless, there is frequently something about the use of the term “iSlavery” that diminishes the heft of Qiu’s argument. The term often serves as a stumbling block that pulls readers away from Qiu’s account, particularly when he makes the comparisons too direct, such as juxtaposing Foxconn’s (admittedly wretched) dormitories with conditions on slave ships crossing the Atlantic. It is difficult not to find the comparison hyperbolic. Similarly, Qiu notes that ethnic and regional divisions are often found within Foxconn factories, but these do not truly seem comparable to the racist views that undergirded (and were used to justify) the Atlantic slave trade. Unfortunately, this is a problem that Qiu sets for himself: had he used “slave” only in a theoretical sense it would have opened him to charges of historical insensitivity, but by engaging with the history of slavery many of Qiu’s comparisons seem to miss the mark – and this is exacerbated by the fact that he repeatedly refers to ongoing conditions of “classic” slavery involved in the making of gadgets (such as coltan mining). Qiu provides an important and compelling window into the current legal framing of slavery, and yet something about the “iSlave” prevents it from fitting into the history of slavery. It is, unfortunately, too easy to imagine someone countering Qiu’s arguments by saying “but this isn’t really slavery,” a challenge that the retort “current law defines slavery as…” will be unlikely to answer convincingly.

    The matter of “slavery” only gets thornier as Qiu shifts his attention from “manufacturing iSlaves” to “manufactured iSlaves.” In recent years there has been a wealth of writing in the academic and popular spheres that critically asks what our gadgets are doing to us, such as Sherry Turkle’s Alone Together and Judy Wajcman’s Pressed for Time (which Qiu cites). And the fear that technology turns people into “cogs” is hardly new: in his 1956 book The Sane Society, Erich Fromm warned “the danger of the past was that men became slaves. The danger of the future is that men may become robots” (Fromm, 352). Fromm’s anxiety is what one more commonly encounters in discussions about what gadgets turn their users into, but these “robots” are not identical with “slaves.” When Qiu discusses “manufactured iSlaves” he notes that the term represents a “conceptual leap,” but his continued use of the word “slave” across that leap unfortunately hampers his broader points about Foxconn workers. The danger is that a sort of false equivalence is created, in which smartphone users shrug off their complicity in the exploitation of assembly workers by saying, “hey, I’m exploited too.”

    Some of this challenge may ultimately simply be about word choice. The very term “iSlave,” despite its activist origins, seems somewhat silly through its linkage to all the things to which a lowercase “i” has been affixed. Furthermore, the use of the “i” risks placing all of the focus on Apple. True, Apple products are manufactured in the exploitative Foxconn factories, and Qiu may be on to something in referring to the “Apple cult,” but, as Qiu himself notes, Foxconn manufactures products for a variety of companies. Just because a device isn’t an “i” gadget doesn’t mean that it wasn’t manufactured by an “iSlave.” And while Appconn is a nice shorthand for the world that is built upon the backs of both kinds of “iSlaves,” it risks being just another opaque neologism for computer-dominated society, one undercut by the need for it to be defined.

    Given the grim focus of Qiu’s book, it is understandable why he should choose to emphasize rebellion and resistance, and these do allow readers to put down the book feeling energized. Yet some of these modes of resistance seem to risk more entanglement than escape. There is a risk that the argument that Foxconn workers can use smartphones to organize simply fits neatly back into the narrative that there is something “inherently liberating” about these devices. The “Phone Story” game may be a good teaching tool, but it seems to make a similar claim about the democratizing potential of the Internet. And while the Fairphone represents, perhaps, one of the more significant ways to stop subsidizing Appconn, it risks being just an alternative for concerned consumers, not a legally mandated industry standard. At the risk of an unfair comparison, a Fairphone seems like the technological equivalent of free-range eggs purchased at the farmers’ market – it may genuinely be ethically preferable, but it risks reducing a major problem (iSlavery) to yet another site for consumerism (just buy the right phone). In fairness, these are the challenges inherent in critiquing the dominant order; as Theodor Adorno once put it, “we live on the culture we criticize” (Adorno and Horkheimer, 105). It might be tempting to wish that Qiu had written an Appconn version of Jerry Mander’s Four Arguments for the Elimination of Television, but Qiu seems to recognize that simply telling people to turn it all off is probably just as efficacious as telling them not to do anything at all. After all, Mander’s “four arguments” may have convinced a few people – but not society as a whole. So, what then does “digital abolition” really mean?

    In describing Goodbye iSlave, Qiu notes that it is “nothing more than an invitation—for everyone to reflect on the enslaving tendencies of Appconn and the world system of gadgets”; it is an opportunity for people to reflect on the ways in which “so many myths of liberation have been bundled with technological buzzwords, and they are often taken for granted” (173). It is a challenging book and an important one, and insofar as it forces readers to wrestle with Qiu’s choice of terminology it succeeds by making them seriously confront the regimes of material and immaterial labor that structure their lives. While the use of the term “slavery” may at times hamper Qiu’s larger argument, this unflinching look at the labor behind today’s gadgets should not be overlooked.

    Goodbye iSlave frames itself as “a manifesto for digital abolition,” but what it makes clear is that this struggle ultimately isn’t about “i” but about “us.”

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently working towards a PhD in the History and Sociology of Science department at the University of Pennsylvania. His research areas include media refusal and resistance to technology, ideologies that develop in response to technological change, and the ways in which technology factors into ethical philosophy – particularly with regard to the ways in which Jewish philosophers have written about ethics and technology. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck, and is a frequent contributor to The b2o Review Digital Studies section.

    Back to the essay
    _____

    Works Cited

    • Adorno, Theodor and Horkheimer, Max. 2011. Towards a New Manifesto. London: Verso Books.
    • Fromm, Erich. 2002. The Sane Society. London: Routledge.
  • Richard Hill — Knots of Statelike Power (Review of Harcourt, Exposed: Desire and Disobedience in the Digital Age)

    Richard Hill — Knots of Statelike Power (Review of Harcourt, Exposed: Desire and Disobedience in the Digital Age)

    a review of Bernard Harcourt, Exposed: Desire and Disobedience in the Digital Age (Harvard, 2015)

    by Richard Hill

    ~

    This is a seminal and important book, which should be studied carefully by anyone interested in the evolution of society in light of the pervasive impact of the Internet. In a nutshell, the book documents how and why the Internet turned from a means to improve our lives into what appears to be a frightening dystopia driven by the collection and exploitation of personal data, data that most of us willingly hand over with little or no care for the consequences. “In our digital frenzy to share snapshots and updates, to text and videochat with friends and lovers … we are exposing ourselves‒rendering ourselves virtually transparent to anyone with rudimentary technological capabilities” (page 13 of the hardcover edition).

    The book meets its goals (25) of tracing the emergence of a new architecture of power relations, documenting its effects on our lives, and exploring how to resist and disobey (though this last is treated rather succinctly). As the author correctly says (28), metaphors matter, and we need to re-examine them closely, in particular the so-called free flow of data.

    As the author cogently points out, quoting Media Studies scholar Siva Vaidhyanathan, we “assumed digitization would level the commercial playing field in wealthy economies and invite new competition into markets that had always had high barriers to entry.” We “imagined a rapid spread of education and critical thinking once we surmounted the millennium-old problems of information scarcity and maldistribution” (169).

    “But the digital realm does not so much give us access to truth as it constitutes a new way for power to circulate throughout society” (22). “In our digital age, social media companies engage in surveillance, data brokers sell personal information, tech companies govern our expression of political views, and intelligence agencies free-ride off e-commerce. … corporations and governments [are enabled] to identify and cajole, to stimulate our consumption and shape our desires, to manipulate us politically, to watch, surveil, detect, predict, and, for some, punish. In the process, the traditional limits placed on the state and on governing are being eviscerated, as we turn more and more into marketized malleable subjects who, willingly or unwillingly, allow ourselves to be nudged, recommended, tracked, diagnosed, and predicted by a blurred amalgam of governmental and commercial initiative” (187).

    “The collapse of the classic divide between the state and society, between the public and private sphere, is particularly debilitating and disarming. The reason is that the boundaries of the state had always been imagined in order to limit them” (208). “What is emerging in the place of separate spheres [of government and private industry] is a single behemoth of a data market: a colossal market for personal data” (198). “Knots of statelike power: that is what we face. A tentacular amalgam of public and private institutions … Economy, society, and private life melt into a giant data market for everyone to trade, mine, analyze, and target” (215). “This is all the more troubling because the combinations we face today are so powerful” (210).

    As a consequence, “Digital exposure is restructuring the self … The new digital age … is having profound effects on our analogue selves. … it is radically transforming our subjectivity‒even for those, perhaps even more, who believe they have nothing to fear” (232). “Mortification of the self, in our digital world, happens when subjects voluntarily cede their private attachments and their personal privacy, when they give up their protected personal space, cease monitoring their exposure on the Internet, let go of their personal data, and expose their intimate lives” (233).

    As the book points out, quoting Software Freedom Law Center founder Eben Moglen, it is justifiable to ask whether “any form of democratic self-government, anywhere, is consistent with the kind of massive, pervasive, surveillance into which the United States government has led not only its people but the world” (254). “This is a different form of despotism, one that might take hold only in a democracy: one in which people lose the will to resist and surrender with broken spirit” (255).

    The book opens with an unnumbered chapter that masterfully reminds us of the digital society we live in: a world in which both private companies and government intelligence services (also known as spies) read our e-mails and monitor our web browsing. Just think of “the telltale advertisements popping up on the ribbon of our search screen, reminding us of immediately past Google or Bing queries. We’ve received the betraying e-mails in our spam folders” (2). As the book says, quoting journalist Yasha Levine, social media has become “a massive surveillance operation that intercepts and analyses terabytes of data to build and update complex psychological profiles on hundreds of millions of people all over the world‒all of it in real time” (7). “At practically no cost, the government has complete access to people’s digital selves” (10).

    We provide all this data willingly (13), because we have no choice and/or because we “wish to share our lives with loved ones and friends” (14). We crave digital connections and recognition and “Our digital cravings are matched only by the drive and ambition of those who are watching” (14). “Today, the drive to know everything, everywhere, at every moment is breathtaking” (15).

    But “there remain a number of us who continue to resist. And there are many more who are ambivalent about the loss of privacy or anonymity, who are deeply concerned or hesitant. There are some who anxiously warn us about the dangers and encourage us to maintain reserve” (13).

    “And yet, even when we hesitate or are ambivalent, it seems there is simply no other way to get things done in the new digital age” (14), be it airline tickets, hotel reservations, buying goods, booking entertainment. “We make ourselves virtually transparent for everyone to see, and in so doing, we allow ourselves to be shaped in unprecedented ways, intentionally or unwittingly … we are transformed and shaped into digital subjects” (14). “It’s not so much a question of choice as a feeling of necessity” (19). “For adolescents and young adults especially, it is practically impossible to have a social life, to have friends, to meet up, to go on dates, unless we are negotiating the various forms of social media and mobile technology” (18).

    Most have become dulled by blind faith in markets, the neoliberal mantra (better to let private companies run things than the government), fear of terrorism‒dulled into believing that, if we have nothing to hide, then there is nothing to fear (19). Even though private companies, and governments, know far more about us than a totalitarian regime such as that of East Germany “could ever have dreamed” (20).

    “We face today, in advanced liberal democracies, a radical new form of power in a completely altered landscape of political and social possibilities” (17). “Those who govern, advertise, and police are dealing with a primary resource‒personal data‒that is being handed out for free, given away in abundance, for nothing” (18).

    According to the book, “There is no conspiracy here, nothing untoward.” But the author probably did not have access to Shawn M. Powers and Michael Jablonski’s The Real Cyberwar: The Political Economy of Internet Freedom (2015), published around the same time as Harcourt’s book, which shows that the current situation was actually created, or at least facilitated, by deliberate actions of the US government (actions which were open, not secret), resulting in what the book calls, quoting journalist James Bamford, “a surveillance-industrial empire” (27).

    The observations and conclusions outlined above are meticulously justified, with numerous references, in the numbered chapters of the book. Chapter 1 explains how analogies of the current surveillance regime to Orwell’s 1984 are imperfect because, unlike in Orwell’s imagined world, today most people desire to provide their personal data and do so voluntarily (35). “That is primarily how surveillance works today in liberal democracies: through the simplest desires, curated and recommended to us” (47).

    Chapter 2 explains how the current regime is not really a surveillance state in the classical sense of the term: it is a surveillance society, because it is based on the collaboration of government, the private sector, and people themselves (65, 78-79). Some believe that government surveillance can prevent or reduce terrorist attacks (55-56), never mind that it might violate constitutional rights (56-57), or be ineffective, or that terrorist attacks in liberal democracies have resulted in far fewer fatalities than, say, traffic accidents or opioid overdoses.

    Chapter 3 explains how the current regime is not actually an instantiation of Jeremy Bentham’s Panopticon, because we are not surveilled in order to be punished‒on the contrary, we expose ourselves in order to obtain something we want (90), and we don’t necessarily realize the extent to which we are being surveilled (91). As the book puts it, Google strives “to help people get what they want” by collecting and processing as much personal data as possible (103).

    Chapter 4 explains how narcissism drives the willing exposure of personal data (111). “We take pleasure in watching [our friends], ‘following’ them, ‘sharing’ their information‒even while we are, unwittingly, sharing our every keyboard stroke” (114). “We love watching others and stalking their digital traces” (117).

    Yet opacity is the rule for corporations‒as the book says, quoting Frank Pasquale (124-125), “Internet companies collect more and more data on their users but fight regulations that would let those same users exercise some control over the resulting digital dossiers.” In this context, it is worth noting the recent proposals, analyzed here, here, and here, to the World Trade Organization that would go in the direction favored by dominant corporations.

    The book explains in summary fashion the importance of big data (137-140). For an additional discussion, with extensive references, see section 1 of my submission to the Working Group on Enhanced Cooperation. As the book correctly notes, “In the nineteenth century, it was the government that generated data … But now we have all become our own publicists. The production of data has become democratized” (140).

    Chapter 5 explains how big data, and its analysis, is fundamentally different from the statistics that were collected, analyzed, and published in the past by governments. The goal of statistics is to understand and possibly predict the behavior of some group of people who share some characteristics (e.g. they live in a particular geographical area, or are of the same age). The goal of big data is to target and predict individuals (158, 161-163).

    Chapter 6 explains how we have come to accept the loss of privacy and control of our personal data (166-167). A change in outlook, largely driven by an exaggerated faith in free enterprise (168 and 176), “has made it easier to commodify privacy, and, gradually, to eviscerate it” (170). “Privacy has become a form of private property” (176).

    The book documents well the changes in the US Supreme Court’s views of privacy, which have moved from defending a human right to balancing privacy with national security and commercial interests (172-175). Curiously, the book does not mention the watershed Smith v. Maryland case, in which the US Supreme Court held that telephone metadata is not protected by the right to privacy, nor the US Electronic Communications Privacy Act, under which many e-mails are not protected either.

    The book mentions the incestuous ties between the intelligence community, telecommunications companies, multinational companies, and military leadership that have facilitated the implementation of the current surveillance regime (178); these ties are exposed and explained in greater detail in Powers and Jablonski’s The Real Cyberwar. This chapter ends with an excellent explanation of how digital surveillance records are in no way comparable to the old-fashioned paper files that were collected in the past (181).

    Chapter 7 explores the emerging dystopia, engendered by the fact that “The digital economy has torn down the conventional boundaries between governing, commerce, and private life” (187). In a trend that should be frightening, private companies now exercise censorship (191), practice data mining on scales that are hard to imagine (194), control worker performance by means beyond the dreams of any Taylorist (196), and even aspire to “predict consumer preferences better than consumers themselves can” (198).

    The data brokerage market is huge, and data on individuals is increasingly used to make decisions about them, e.g. whether they can obtain a loan (198-208). “Practically none of these scores [calculated from personal data] are revealed to us, and their accuracy is often haphazard” (205). As noted above, we face an interdependent web of private and public interests that collect, analyze, refine, and exploit our personal data‒without any meaningful supervision or regulation.

    Chapter 8 explains how digital interactions are reconfiguring our self-images, our subjectivity. We know, albeit at times only implicitly, that we are being surveilled and this likely affects the behavior of many (218). Being deprived of privacy affects us, much as would being deprived of property (229). We have voluntarily given up much of our privacy, believing either that we have no choice but to accept surveillance, or that the surveillance is in our interests (233). So it is our society as a whole that has created, and nurtures, the surveillance regime that we live in.

    As shown in Chapter 9, that regime is a form of digital incarceration. We are surveilled even more closely than are people obliged by court order to wear electronic tracking devices (237). Perhaps a future smart watch will even administer sedatives (or whatever) when it detects, by analyzing our body functions and comparing with profiles downloaded from the cloud, that we would be better off being sedated (237). Or perhaps such a watch will be hijacked by malware controlled by an intelligence service or by criminals, thus turning a seemingly free choice into involuntary constraints (243, 247).

    Chapter 10 shows in detail how, as already noted, the current surveillance regime is not compatible with democracy. The book cites Tocqueville to remind us that democracy can become despotic, resulting in a situation where “people lose the will to resist and surrender with broken spirit” (255). The book summarily presents well-known data regarding the low voter turnouts in the United States, a topic covered in full detail in Robert McChesney’s Digital Disconnect: How Capitalism is Turning the Internet Against Democracy (2014), which explains how the Internet is having a negative effect on democracy. Yet “it remains the case that the digital transparency and punishment issues are largely invisible to democratic theory and practice” (216).

    So, what is to be done? Chapter 11 extols the revelations made by Edward Snowden and those published by Julian Assange (WikiLeaks). It mentions various useful self-help tools, such as “I Fight Surveillance” and “Security in a Box” (270-271). While those tools are useful, they are not at present used pervasively and thus don’t really affect the current surveillance regime. We need more emphasis on making the tools available and on convincing more people to use them.

    As the book correctly says, an effective measure would be to carry the privatization model to its logical extreme (274): since personal data is valuable, those who use it should pay us for it. As already noted, the industry that is thriving on the exploitation of our personal data is well aware of this potential threat, and has worked hard to obtain binding international norms, in the World Trade Organization, that would enshrine the “free flow of data”, where “free” in the sense of freedom of information is used as a Trojan horse for the real objective, which is “free” in the sense of no cost and no compensation for the true owners of the data, we the people. As the book correctly mentions, civil society organizations have resisted this trend and made proposals that go in the opposite direction (276), including a proposal to enshrine the “necessary and proportionate” principles in international law.

    Chapter 12 concludes the book by pointing out, albeit very succinctly, that mass resistance is necessary, and that it need not be organized in traditional ways: it can be leaderless, diffuse, and pervasive (281). In this context, I refer to the work of the JustNet Coalition and of the fledgling Internet Social Forum (see also here and here).

    Again, this book is essential reading for anybody who is concerned about the current state of the digital world, and the direction in which it is moving.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2o Review Digital Studies magazine.

    Back to the essay

  • Richard Hill — States, Governance, and Internet Fragmentation (Review of Mueller, Will the Internet Fragment?)

    Richard Hill — States, Governance, and Internet Fragmentation (Review of Mueller, Will the Internet Fragment?)

    a review of Milton Mueller, Will the Internet Fragment? Sovereignty, Globalization and Cyberspace (Polity, 2017)

    by Richard Hill

    ~

    Like other books by Milton Mueller, Will the Internet Fragment? is a must-read for anybody who is seriously interested in the development of Internet governance and its likely effects on other walks of life.  This is true because of, and not despite, the fact that it is a tract that does not present an unbiased view. On the contrary, it advocates a certain approach, namely a utopian form of governance that Mueller refers to as “popular sovereignty in cyberspace”.

    Mueller, Professor of Information Security and Privacy at Georgia Tech, is an internationally prominent scholar specializing in the political economy of information and communication.  The author of seven books and scores of journal articles, his work informs not only public policy but also science and technology studies, law, economics, communications, and international studies.  His books Networks and States: The Global Politics of Internet Governance (MIT Press, 2010) and Ruling the Root: Internet Governance and the Taming of Cyberspace (MIT Press, 2002) are acclaimed scholarly accounts of the global governance regime emerging around the Internet.

    Most of Will the Internet Fragment? consists of a rigorous analysis of what has been commonly referred to as “fragmentation,” showing that very different technological and legal phenomena have been conflated in ways that do not favour productive discussions.  That so-called “fragmentation” is usually defined as the contrary of the desired situation in which “every device on the Internet should be able to exchange data packets with any other device that was willing to receive them” (p. 6 of the book, citing Vint Cerf).  But, as Mueller correctly points out, not all end-points of the Internet can reach all other end-points at all times, and there may be very good reasons for that (e.g. corporate firewalls, temporary network outages, etc.).  Mueller then shows how network effects (the fact that the usefulness of a network increases as it becomes larger) will tend to prevent or counter fragmentation: a subset of the network is less useful than the whole.  He also shows how network effects can prevent the creation of alternative networks: once everybody is using a given network, why switch to an alternative that few are using?  As Mueller aptly points out (pp. 63-66), the slowness of the transition to IPv6 is due to this type of network effect.

    The key contribution of this book is that it clearly identifies the real question of interest to those who are concerned about the governance of the Internet and its impact on much of our lives.  That question (which might have been a better subtitle) is: “to what extent, if any, should Internet policies be aligned with national borders?”  (See in particular pp. 71, 73, 107, 126 and 145.)  Mueller’s answer is basically “as little as possible, because supra-national governance by the Internet community is preferable”.  This answer is presumably motivated by Mueller’s view that “institutions shift power from states to society” (p. 116), which implies that “society” has little power in modern states.  But (at least ideally) states should be the expression of a society (as Mueller acknowledges on pp. 124 and 136), so it would have been helpful if Mueller had elaborated on the ways (and there are many) in which he believes states do not reflect society, and on the ways in which so-called multi-stakeholder models would not be worse and would not result in a denial of democracy.

    Before commenting on Mueller’s proposal for supra-national governance, it is worth commenting on some areas where a more extensive discussion would have been warranted.  We note, however, that the book is part of a series that is deliberately intended to be short and accessible to a lay public.  So Mueller had a 30,000-word limit and tried to keep things written in a way that non-specialists and non-scholars could access.  This no doubt largely explains why he didn’t cover certain topics in more depth.

    Be that as it may, the discussion would have been improved by being placed in the long-term context of the steady decrease in national sovereignty that started in 1648, when sovereigns agreed in the Treaty of Westphalia to refrain from interfering in the religious affairs of foreign states, and that accelerated in the 20th century.  And by being placed in the short-term context of the dominance of key aspects of the Internet and its governance by the USA as a state (which Mueller acknowledges in passing on p. 12) and by US companies.  Mueller is deeply aware of these issues and has discussed them in his other books, in particular Ruling the Root and Networks and States, so it would have been nice to see the topic treated here, with references to the end of the Cold War and what appears to be the re-emergence of some sort of equivalent international tension (albeit not for the same reasons and with different effects, at least for what concerns cyberspace).  It would also have been preferable to include at least some mention of the literature on the negative economic and social effects of current Internet governance arrangements.

    It is telling that, in Will the Internet Fragment?, Mueller starts his account with the 2014 NetMundial event, without mentioning that it took place in the context of the outcomes of the World Summit on the Information Society (WSIS, whose genesis, dynamics, and outcomes Mueller analyzed well in Networks and States), and without mentioning that the outcome document of the 2015 UN WSIS+10 Review reaffirmed the WSIS outcomes and merely noted that Brazil had organized NetMundial, which was, in context, an explicit refusal to note (much less to endorse) the NetMundial outcome document.

    The UN’s reaffirmation of the WSIS outcomes is significant because, as Mueller correctly notes, the real question that underpins all current discussions of Internet governance is “what is the role of states?,” and the Tunis Agenda states: “Policy authority for Internet-related public policy issues is the sovereign right of States. They have rights and responsibilities for international Internet-related public policy issues.”

    Mueller correctly identifies and discusses the positive externalities created by the Internet (pp. 44-48).  It would have been better if he had noted that there are also negative externalities, in particular regarding security (see section 2.8 of my June 2017 submission to ITU’s CWG-Internet), and that the role of states includes internalizing such externalities, as well as preventing anti-competitive behavior.

    It is also telling that Mueller never explicitly mentions a principle that is no longer seriously disputed, and that was explicitly enunciated in the formal outcome of the WSIS+10 Review, namely that offline law applies equally online.  Mueller does mention some issues related to jurisdiction, but he does not place those in the context of the fundamental principle that cyberspace is subject to the same laws as the rest of the world: as Mueller himself acknowledges (p. 145), allegations of cybercrime are judged by regular courts, not cyber-courts, and if you are convicted you will pay a real fine or be sent to a real prison, not to a cyber-prison.  But national jurisdiction is not just about security (p. 74 ff.); it is also about legal certainty for commercial dealings, such as enforcement of contracts.  There are an increasing number of activities that depend on the Internet, but that also depend on the existence of known legal regimes that can be enforced in national courts.

    And what about the tension between globalization and other values such as solidarity and cultural diversity?  As Mueller correctly notes (p. 10), the Internet is globalization on steroids.  Yet cultural values differ around the world (p. 125).  How can we get the benefits of both an unfragmented Internet and local cultural diversity (as opposed to the current trend to impose US values on the rest of the world)?

    While dealing with these issues in more depth would have complicated the discussion, it also would have made it more valuable, because the call for direct rule of the Internet by and for Internet users must either be reconciled with the principle that offline law applies equally online, or be combined with a reasoned argument for the abandonment of that principle.  As Mueller so aptly puts it (p. 11): “Internet governance is hard … also because of the mismatch between its global scope and the political and legal institutions for responding to societal problems.”

    Since most laws, and almost all enforcement mechanisms, are national, the influence of states on the Internet is inevitable.  Recall that the idea of enforceable rules (laws) dates back to at least 1700 BC and has formed an essential part of all civilizations in history.  Mueller correctly posits on p. 125 that a justification for territorial sovereignty is to restrict violence (only the state can legitimately exercise it), and wonders why, in that case, the entire world does not have a single government.  But he fails to note that, historically, at times much of the world was subject to a single government (think of the Roman Empire, the Mongol Empire, the Holy Roman Empire, the British Empire), and he does not explore the possibility of expanding the existing international order (treaties, UN agencies, etc.) into a legitimate and democratic system of world governance (which of course it is not at present, in part because the US does not want it to become one).  For example, a concrete step in the direction of using existing governance systems has recently been proposed by Microsoft: a Digital Geneva Convention.

    Mueller explains why national borders interfere with certain aspects of certain Internet activities (pp. 104, 106), but national borders interfere with many activities.  Yet we accept them because there doesn’t appear to be any “least worst” alternative.  Mueller does acknowledge that states have power, and rightly calls for states to limit their exercise of power to their own jurisdiction (p. 148).  But he posits that such power “carries much less weight than one would think” (p. 150), without justifying that far-reaching statement.  Indeed, Mueller admits that “it is difficult to conceive of an alternative” (p. 73), but does not delve into the details sufficiently to show convincingly how the solution that he sketches would not result in greater power for dominant private companies (and even corporatocracy or corporatism), increasing income inequality, and a denial of democracy.  For example, without the power of the state, in the form of consumer protection measures, how can one ensure that private intermediaries would “moderate content based on user preferences and reports” (p. 147), as opposed to moderating content so as to maximize their profits?  Mueller assumes that there would be a sufficient level of competition, resulting in self-correcting forces and accountability (p. 129); but current trends are just the opposite: we see increasing concentration and domination in many aspects of the Internet (see section 2.11 of my June 2017 submission to ITU’s CWG-Internet), and some competition law authorities have found that some abuse of dominance has taken place.

    It seems to me that Mueller too easily concludes that “a state-centric approach to global governance cannot easily co-exist with a multistakeholder regime” (p. 117), without first exploring the nuances of multi-stakeholder regimes and the ways that they could interface with existing institutions, which include intergovernmental bodies as well as states.  As I have stated elsewhere: “The current arrangement for global governance is arguably similar to that of feudal Europe, whereby multiple arrangements of decision-making, including the Church, cities ruled by merchant-citizens, kingdoms, empires and guilds co-existed with little agreement as to which actor was actually in charge over a given territory or subject matter.  It was in this tangled system that the nation-state system gained legitimacy precisely because it offered a clear hierarchy of authority for addressing issues of the commons and provision of public goods.”

    Which brings us to another key point that Mueller does not consider in any depth: if the Internet is a global public good, then its governance must take into account the views and needs of all the world’s citizens, not just those that are privileged enough to have access at present.  But Mueller’s solution would restrict policy-making to those who are willing and able to participate in various so-called multi-stakeholder forums (apparently Mueller does not envisage a vast increase in participation and representation in these; p. 120).  Apart from the fact that that group is not a community in any real sense (a point acknowledged on p. 139), it comprises, at present, only about half of humanity, and even much of that half would not be able to participate because discussions take place primarily in English, and require significant technical knowledge and significant time commitments.

    Mueller’s path for the future appears to me to be a modern version of the International Ad Hoc Committee (IAHC), though Mueller would probably disagree, since he is of the view that the IAHC was driven by intergovernmental organizations.  In any case, the IAHC’s work came to naught because of the unilateral intervention of the US government, well described in Ruling the Root, which resulted in the creation of ICANN, thus sparking discussions of Internet governance in WSIS and elsewhere.  While Mueller is surely correct when he states that new governance methods are needed (p. 127), it seems a bit facile to conclude that “the nation-state is the wrong unit” and that it would be better to rely largely on “global Internet governance institutions rooted in non-state actors” (p. 129), without explaining how such institutions would be democratic and representative of all of the world’s citizens.

    Mueller correctly notes (p. 150) that, historically, there have been major changes in sovereignty: the emergence and fall of empires, the creation of new nations, changes in national borders, etc.  But he fails to note that most of those changes were the result of significant violence and use of force.  If, as he hopes, the “Internet community” is to assert sovereignty and displace the existing sovereignty of states, how will it do so?  Through real violence?  Through cyber-violence?  Through civil disobedience (e.g. migrating to bitcoin, or implementing strong encryption no matter what governments think)?  By resisting efforts to move discussions into the World Trade Organization?  Or by persuading states to relinquish power willingly?  It would have been good if Mueller had addressed, at least summarily, such questions.

    Before concluding, I note a number of more-or-less minor errors that might lead readers to imprecise understandings of important events and issues.  For example, p. 37 states that “the US and the Internet technical community created a global institution, ICANN”: in reality, the leaders of the Internet technical community obeyed the unilateral diktat of the US government (at first somewhat reluctantly and later willingly) and created a California non-profit company, ICANN.  And ICANN is not insulated from jurisdictional differences; it is fully subject to US laws and US courts.  The discussion on pp. 37-41 fails to take into account the fact that a significant portion of the DNS, the ccTLDs, is already aligned with national borders, and that there are non-national telephone numbers; the real differences between the DNS and telephone numbers are that most URLs are non-national, whereas few telephone numbers are non-national; that national telephone numbers are given only to residents of the corresponding country; and that there is an international real-time mechanism for resolving URLs that everybody uses, whereas each telephone operator has to set up its own resolving mechanism for telephone numbers.  Page 47 states that OSI was “developed by Europe-centered international organizations”, whereas actually it was developed by private companies from both the USA (including AT&T, Digital Equipment Corporation, Hewlett-Packard, etc.) and Europe working within global standards organizations (IEC, ISO, and ITU), which all happen to have secretariats in Geneva, Switzerland; whereas the Internet was initially developed and funded by an arm of the US Department of Defense, and the foundation of the WWW was initially developed in a European intergovernmental organization.  Page 100 states that “The ITU has been trying to displace or replace ICANN since its inception in 1998”; whereas a correct statement would be “While some states have called for the ITU to displace or replace ICANN since its inception in 1998, such proposals have never gained significant support and appear to have faded away recently.”  Not everybody thinks that the IANA transition was a success (p. 117), nor that it is an appropriate model for the future (pp. 132-135; 136-137), and it is worth noting that ICANN successfully withstood many challenges (p. 100) while it had a formal link to the US government; it remains to be seen how ICANN will fare now that it is independent of the US government.  ICANN and the RIRs do not have a “‘transnational’ jurisdiction created through private contracts” (p. 117); they are private entities subject to national law, and the private contracts in question are also subject to national law (and enforced by national authorities, even if disputes are resolved by international arbitration).  I doubt that it is a “small step from community to nation” (p. 142), and it is not obvious why anti-capitalist movements (which tend to be internationalist) would “end up empowering territorial states and reinforcing alignment” (p. 147), when it is capitalist movements that rely on the power of territorial states to enforce national laws, for example regarding intellectual property rights.

    Despite these minor quibbles, this book, and its references (albeit not as extensive as one would have hoped), will be a valuable starting point for future discussions of internet alignment and/or “fragmentation.” Surely there will be much future discussion, and many more analyses and calls for action, regarding what may well be one of the most important issues that humanity now faces: the transition from the industrial era to the information era and the disruptions arising from that transition.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2 Review Digital Studies magazine.

    Back to the essay

  • Anthony Galluzzo — Utopia as Method, Social Science Fiction, and the Flight From Reality (Review of Frase, Four Futures)

    Anthony Galluzzo — Utopia as Method, Social Science Fiction, and the Flight From Reality (Review of Frase, Four Futures)

    a review of Peter Frase, Four Futures: Life After Capitalism (Verso Jacobin Series, 2016)

    by Anthony Galluzzo

    ~

    Charlie Brooker’s acclaimed British techno-dystopian television series, Black Mirror, returned last year in a more American-friendly form. The third season, now broadcast on Netflix, opened with “Nosedive,” a satirical depiction of a recognizable near future in which user-generated social media scores—on the model of Yelp reviews, Facebook likes, and Twitter retweets—determine life chances, including access to basic services, such as housing, credit, and jobs. The show follows striver Lacie Pound—played by Bryce Dallas Howard—who, in seeking to boost her solid 4.2 life score, ends up inadvertently wiping out all of her points, in the nosedive of the episode’s title. Brooker offers his viewers a nightmare variation on a now familiar online reality, as Lacie rates every human interaction and is rated in turn, to disastrous effect. And this nightmare is not so far from the case, as online reputational hierarchies increasingly determine access to precarious employment opportunities. We can see this process in today’s so-called sharing economy, in which user approval determines how many rides will go to the Uber driver, or whether the room you are renting on Airbnb, in order to pay your own exorbitant rent, gets rented.

    Brooker grappled with similar themes during the show’s first season; for example, “Fifteen Million Merits” shows us a future world of human beings forced to spend their time on exercise bikes, presumably in order to generate power plus the “merits” that function as currency, even as they are forced to watch non-stop television, advertisements included. It is television—specifically a talent show—that offers an apparent escape to the episode’s protagonists. Brooker revisits these concerns—which combine anxieties regarding new media and ecological collapse in the context of a viciously unequal society—in the final episode of the new season, entitled “Hated in the Nation,” which features robotic bees, built for pollination in a world after colony collapse, that are hacked and turned to murderous use. Here is an apt metaphor for the virtual swarming that characterizes so much online interaction.

    Black Mirror corresponds to what literary critic Tom Moylan calls a “critical dystopia.” [1] Rather than a simple exercise in pessimism or anti-utopianism, Moylan argues that critical dystopias, like their utopian counterparts, also offer emancipatory political possibilities in exposing the limits of our social and political status quo, such as the naïve techno-optimism that is certainly one object of Brooker’s satirical anatomies. Brooker in this way does what Jacobin magazine editor and social critic Peter Frase claims to do in his Four Futures: Life After Capitalism, a speculative exercise in “social science fiction” that uses utopian and dystopian science fiction as means to explore what might come after global capitalism. Ironically, Frase includes both online reputational hierarchies and robotic bees in his two utopian scenarios: one of the more dramatic, if perhaps inadvertent, ways that Frase collapses dystopian into utopian futures.

    Frase echoes the opening lines of Marx and Engels’ Communist Manifesto as he describes the twin “specters of ecological catastrophe and automation” that haunt any possible post-capitalist future. While total automation threatens to make human workers obsolete, the global planetary crisis threatens life on earth as we have known it for the past 12,000 years or so. Frase contends that we are facing a “crisis of scarcity and a crisis of abundance at the same time,” making our moment one “full of promise and danger.” [2]

    The attentive reader can already see in this introductory framework the too-often unargued assumptions and easy dichotomies that characterize the book as a whole. For example, why is total automation plausible in the next 25 years, according to Frase, who largely supports this claim by drawing on the breathless pronouncements of a technophilic business press that has made similar promises for nearly a hundred years? And why does automation equal abundance—assuming the more egalitarian social order that Frase alternately calls “communism” or “socialism”—especially when we consider the ecological crisis Frase invokes as one of his two specters? This crisis is very much bound to an energy-intensive technosphere that is already pushing against several of the planetary boundaries that make for a habitable planet; total automation would expand this same technosphere by several orders of magnitude, requiring that much more energy, materials, and environmental sinks to absorb tomorrow’s life-sized iPhones or their corpses. Frase deliberately avoids these empirical questions—and the various debates among economists, environmental scientists, and computer programmers about the feasibility of AI, the extent to which automation is actually displacing workers, and the ecological limits to technological growth, at least as technology is currently constituted—by offering his work as the “social science fiction” mentioned above, perhaps in the vein of Black Mirror. He distinguishes this method from futurism or prediction, as he writes: “science fiction is to futurism as social theory is to conspiracy theory.” [3]

    In one of his few direct citations, Frase invokes Marxist literary critic Fredric Jameson, who argues that conspiracy theory and its fictions are ideologically distorted attempts to map an elusive and opaque global capitalism: “Conspiracy, one is tempted to say, is the poor person’s cognitive mapping in the postmodern age; it is the degraded figure of the total logic of late capital, a desperate attempt to represent the latter’s system, whose failure is marked by its slippage into sheer theme and content.” [4] For Jameson, a more comprehensive cognitive map of our planetary capitalist civilization necessitates new forms of representation to better capture and perhaps undo our seemingly eternal and immovable status quo. In the words of McKenzie Wark, Jameson proposes nothing less than a “theoretical-aesthetic practice of correlating the field of culture with the field of political economy.” [5] And it is possibly with this “theoretical-aesthetic practice” in mind that Frase turns to science fiction as his preferred tool of social analysis.

    The book accordingly proceeds by way of a grid organized around the coordinates “abundance/scarcity” and “egalitarianism/hierarchy”—in another echo of Jameson, namely his structuralist penchant for Greimas squares. Hence we get abundance with egalitarianism, or “communism,” followed by its dystopian counterpart, rentism, or hierarchical plenty, in the first two futures; similarly, the final futures move from an equitable scarcity, or “socialism,” to a hierarchical and apocalyptic “exterminism.” Each of these chapters begins with a science fiction, ranging from an ostensibly communist Star Trek to the exterminationist visions presented in Orson Scott Card’s Ender’s Game, upon which Frase builds his various future scenarios. These scenarios are more often than not commentaries on present-day phenomena, such as 3D printers or the sharing economy, or advocacy for various measures, like a Universal Basic Income, which Frase presents as the key to achieving his desired communist future.
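    Laid out schematically (the labels are Frase’s own; the arrangement simply restates the sentence above):

                        abundance      scarcity
        egalitarianism  communism      socialism
        hierarchy       rentism        exterminism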

    With each of his futures anchored in a literary (or cinematic, or televisual) science fiction narrative, Frase’s speculations rely on imaginative literature, even as he avoids any explicit engagement with literary criticism and theory, such as the aforementioned work of Jameson. Jameson famously argues (see Jameson 1982, and the more elaborated later versions in texts such as Jameson 2005) that the utopian text, beginning with Thomas More’s Utopia, simultaneously offers a mystified version of dominant social relations and an imaginative space for rehearsing radically different forms of sociality. But this dialectic of ideology and utopia is absent from Frase’s analysis, where his select space operas are all good or all bad: either The Jetsons or Elysium.

    And, in a marked contrast with Jameson’s symptomatic readings, some science fiction is for Frase more equal than others when it comes to radical sociological speculation, as evinced by his contrasting views of George Lucas’s Star Wars and Gene Roddenberry’s Star Trek.  According to Frase, in “Star Wars, you don’t really care about the particularities of the galactic political economy,” while in Star Trek, “these details actually matter. Even though Star Trek and Star Wars might superficially look like similar tales of space travel and swashbuckling, they are fundamentally different types of fiction. The former exists only for its characters and its mythic narrative, while the latter wants to root its characters in a richly and logically structured social world.” [6]

    Frase here understates his investment in Star Trek, whose “structured social world” is later revealed as his ideal-type for a high-tech, fully automated luxury communism, while Star Wars is relegated to the role of the space fantasy foil. But surely the original Star Wars, intentionally inspired by the Vietnam War, is at least an anticolonial allegory in which a ragtag rebel alliance faces off against a technologically superior evil empire. Lucas turned to the space opera after he lost his bid to direct Apocalypse Now—which was originally based on Lucas’s own idea. According to one account of the franchise’s genesis, “the Vietnam War, which was an asymmetric conflict with a huge power unable to prevail against guerrilla fighters, instead became an influence on Star Wars. As Lucas later said, ‘A lot of my interest in Apocalypse Now carried over into Star Wars.’” [7]

    Texts—literary, cinematic, and otherwise—often combine progressive and reactionary, utopian and ideological elements. Yet it is precisely the mixed character of speculative narrative that Frase ignores throughout his analysis, reducing each of his literary examples to unequivocally good or bad, utopian or dystopian, blueprints for “life after capitalism.” Why anchor radical social analysis in various science fictions while refusing basic interpretive argument? As with so much else in Four Futures, Frase uses assumption—asserting that Star Trek has one specific political valence or that total automation guided by advanced AI is an inevitability within 25 years—in the service of his preferred policy outcomes (and the nightmare scenarios that function as the only alternatives to those outcomes), while avoiding engagement with debates related to technology, ecology, labor, and the utopian imagination.

    Frase in this way evacuates the politically progressive and critical utopian dimensions from George Lucas’s franchise, elevating the escapist and reactionary dimensions that represent the ideological, as opposed to the utopian, pole of this fantasy. Frase similarly ignores the ideological elements of Roddenberry’s Star Trek: “The communistic quality of the Star Trek universe is often obscured because the films and TV shows are centered on the military hierarchy of Starfleet, which explores the galaxy and comes into conflict with alien races. But even this seems largely a voluntarily chosen hierarchy.” [8]

    Frase’s focus, regarding Star Trek, is almost entirely on the replicators that can make something, anything, from nothing, so that Captain Picard, from the eighties-era series reboot, orders a “cup of Earl Grey, hot,” from one of these magical machines, and immediately receives Earl Grey, hot. Frase equates our present-day 3D printers with these same replicators over the course of all his four futures, despite the fact that, unlike replicators, 3D printers require inputs: they do not make matter, but shape it.

    3D printing encompasses a variety of processes in which would-be makers create an image with a computer and CAD (computer-aided design) software, which in turn provides a blueprint for the three-dimensional object to be “printed.” This requires either the addition of material—usually plastic—or the injection of that material into a mould. The most basic type of 3D printing involves heating “(plastic, glue-based) material that is then extruded through a nozzle. The nozzle is attached to an apparatus similar to a normal 2D ink-jet printer, just that it moves up and down, as well. The material is put on layer over layer. The technology is not substantially different from ink-jet printing, it only requires slightly more powerful computing electronics and a material with the right melting and extrusion qualities.” [9] This is still the most affordable and pervasive way to make objects with 3D printers—most often used to make small models and components. It is also the version of 3D printing that lends itself to celebratory narratives of post-industrial techno-artisanal home manufacture pushed by industry cheerleaders and enthusiasts alike. Yet the more elaborate versions of 3D printing—“printing” everything from complex machinery to food to human organs—rely on the more complex and expensive industrial versions of the technology that require lasers (e.g., stereolithography and selective laser sintering). Frase espouses a particular left techno-utopian line that sees the end of mass production in 3D printing—especially with the free circulation of the programs for various products outside of our intellectual property regime; this is how he distinguishes his communist utopia from the dystopian rentism that most resembles our current moment, with material abundance taken for granted. And it is this fantasy of material abundance and post-work/post-worker production that presumably appeals to Frase, who describes himself as an advocate of “enlightened Luddism.”
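    Since the quoted description amounts to a how-it-works account—CAD blueprint, then heated material deposited layer over layer—a small sketch may make it concrete. The following Python fragment emits a few lines of G-code, the toolpath language most consumer FDM printers consume; every value in it (temperature, size, feed rate, layer height) is a hypothetical placeholder rather than a working print recipe:

        def square_layer(z, side=20.0, feed=1200):
            """Return G-code moves tracing one square perimeter at height z (mm)."""
            corners = [(0.0, 0.0), (side, 0.0), (side, side), (0.0, side), (0.0, 0.0)]
            lines = [f"G1 Z{z:.2f} F{feed}"]  # raise the nozzle to this layer
            for x, y in corners:
                # move the nozzle while extruding a small amount of material
                lines.append(f"G1 X{x:.2f} Y{y:.2f} E0.5 F{feed}")
            return lines

        program = [
            "M104 S200",  # heat the extruder (200 C is typical for PLA plastic)
            "M83",        # treat E values as relative extrusion distances
            "G28",        # home all axes
        ]
        for i in range(3):  # "the material is put on layer over layer"
            program += square_layer(z=(i + 1) * 0.2)  # 0.2 mm per layer
        print("\n".join(program))

    A real slicer adds infill, supports, and cooling control, but the shape of the output—motion commands plus extrusion amounts, repeated layer by layer—is exactly what the passage summarizes.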

    This is an inadvertently ironic characterization, considering the extent to which these emancipatory claims conceal and distort the labor-discipline imperative that is central to the shape and development of this technology. As Johan Söderberg argues, “we need to put enthusiastic claims for 3D printers into perspective. One claim is that laid-off American workers can find a new source of income by selling printed goods over the Internet, which will be an improvement, as degraded factory jobs are replaced with more creative employment opportunities. But factory jobs were not always monotonous. They were deliberately made so, in no small part through the introduction of the same technology that is expected to restore craftsmanship. ‘Makers’ should be seen as the historical result of the negation of the workers’ movement.” [10]

    Söderberg draws on the work of David Noble, who outlines how the numerical control technology central to the growth of post-war factory automation was developed specifically to de-skill and dis-empower workers during the Cold War period. Unlike Frase, both of these authors foreground those social relations, which include capital’s need to more thoroughly exploit and dominate labor, embedded in the architecture of complex megatechnical systems, from factory automation to 3D printers. In collapsing 3D printers into Star Trek-style replicators, Frase avoids these questions as well as the more immediately salient issue of resource constraints that should occupy any prognostication that takes the environmental crisis seriously.

    The replicator is the key to Frase’s dream of endless abundance on the model of post-war US-style consumer affluence and the end of all human labor. But, rather than a simple blueprint for utopia, Star Trek’s juxtaposition of techno-abundance with military hierarchy and a tacitly expansionist galactic empire—despite the show’s depiction of a Starfleet “prime directive” that forbids direct intervention into the affairs of the extraterrestrial civilizations encountered by the federation’s starships, the Enterprise’s crew, like its ostensibly benevolent US original, almost always intervenes—is significant. The original Star Trek is arguably a liberal iteration of Kennedy-era US exceptionalism, and reflects a moment in which relatively wide-spread first world abundance was underwritten by the deliberate underdevelopment, appropriation, and exploitation of various “alien races’” resources, land, and labor abroad. Abundance in fact comes from somewhere and someone.

    As historian H. Bruce Franklin argues, the original series reflects US Cold War liberalism, which combined Roddenberry’s progressive stances regarding racial inclusion within the parameters of the United States and its Starfleet doppelganger, with a tacitly anti-communist expansionist viewpoint, so that the show’s Klingon villains often serve as proxies for the Soviet menace. Franklin accordingly charts the show’s depictions of the Vietnam War, moving from a pro-war and pro-American stance to a mildly anti-war position in the wake of the Tet Offensive over the course of several episodes: “The first of these two episodes, ‘The City on the Edge of Forever‘ and ‘A Private Little War,’ had suggested that the Vietnam War was merely an unpleasant necessity on the way to the future dramatized by Star Trek. But the last two, ‘The Omega Glory‘ and ‘Let That Be Your Last Battlefield,’ broadcast in the period between March 1968 and January 1969, are so thoroughly infused with the desperation of the period that they openly call for a radical change of historic course, including an end to the Vietnam War and to the war at home.” [11]

    Perhaps Frase’s inattention to Jameson’s dialectic of ideology and utopia reflects a too-literal approach to these fantastical narratives, even as he proffers them as valid tools for radical political and social analysis. We could see in this inattention a bit too much of the fan-boy’s enthusiasm, which is also evinced by the rather narrow and backward-looking focus on post-war space operas to the exclusion of the self-consciously radical science fiction narratives of Ursula K. Le Guin, Samuel Delany, and Octavia Butler, among others. These writers use the tropes of speculative fiction to imagine profoundly different social relations that are the end-goal of all emancipatory movements. In place of emancipated social relations, Frase too often relies on technology, and his readings must in turn be read with these limitations in mind.

    Unlike the best speculative fiction, utopian or dystopian, Frase’s “social science fiction” too often avoids the question of social relations—including the social relations embedded in the complex megatechnical systems Frase takes for granted as neutral forces of production. He accordingly announces at the outset of his exercise: “I will make the strongest assumption possible: all need for human labor in the production process can be eliminated, and it is possible to live a life of pure leisure while machines do all the work.” [12] The science fiction trope effectively absolves Frase from engagement with the technological, ecological, or social feasibility of these predictions, even as he announces his ideological affinities with a certain version of post- and anti-work politics that breaks with orthodox Marxism and its socialist variants.

    Frase’s Jetsonian vision of the future resonates with various futurist currents that we can now see across the political spectrum, from the Silicon Valley Singularitarianism of Ray Kurzweil or Elon Musk, on the right, to various neo-Promethean currents on the left, including so-called “left accelerationism.” Frase defends his assumption as a desire “to avoid long-standing debates about post-capitalist organization of the production process.” While such a strict delimitation is permissible for speculative fiction—an imaginative exercise regarding what is logically possible, including time travel or immortality—Frase specifically offers science fiction as a mode of social analysis, which presumably entails grappling with rather than avoiding current debates on labor, automation, and the production process.

    Ruth Levitas, in her 2013 book Utopia as Method: The Imaginary Reconstitution of Society, offers a more rigorous definition of social science fiction via her eponymous “utopia as method.” This method combines sociological analysis and imaginative speculation, which Levitas defends as “holistic. Unlike political philosophy and political theory, which have been more open than sociology to normative approaches, this holism is expressed at the level of concrete social institutions and processes.” [13] But that attentiveness to concrete social institutions and practices, combined with counterfactual speculation regarding another kind of human social world, is exactly what is missing in Four Futures. Frase uses grand speculative assumptions—such as the inevitable rise of human-like AI or the complete disappearance of human labor, all within 25 years or so—in order to avoid significant debates that are ironically much more present in purely fictional works, such as the aforementioned Black Mirror or the novels of Kim Stanley Robinson, than in his own overtly non-fictional speculations. From the standpoint of radical literary criticism and radical social theory, Four Futures is wanting. It fails as analysis. And, if one primary purpose of utopian speculation, in its positive and negative forms, is to open an imaginative space in which wholly other forms of human social relations can be entertained, Frase’s speculative exercise also exhibits a revealing paucity of imagination.

    This is most evident in Frase’s most explicitly utopian future, which he calls “communism,” without any mention of class struggle, the collective ownership of the means of production, or any of the other elements we usually associate with “communism”; instead, 3D printers-cum-replicators will produce whatever you need whenever you need it at home, an individualizing techno-solution to the problem of labor, production, and its organization that resembles alchemy in its indifference to material reality and the scarce material inputs required by 3D printers. Frase proffers a magical vision of technology so as to avoid grappling with the question of social relations; even more than this, in the coda to this chapter, Frase reveals the extent to which current patterns of social organization and stratification remain in place under his “communism.” Frase begins this coda with a question: “in a communist society, what do we do all day?” To which he responds: “The kind of communism I’ve described is sometimes mistakenly construed, by both its critics and its adherents, as a society in which hierarchy and conflict are wholly absent. But rather than see the abolition of the capital-wage relation as a single shot solution to all possible social problems, it is perhaps better to think of it in the terms used by political scientist, Corey Robin, as a way to ‘convert hysterical misery into ordinary unhappiness.’” [14]

    Frase goes on to argue—rightly—that the abolition of class society or wage labor will not put an end to a variety of other oppressions, such as those based in gender and racial stratification; he in this way departs from the class-reductionist tendencies sometimes on view in the magazine he edits. His invocation of Corey Robin is nonetheless odd considering the Promethean tenor of Frase’s preferred futures. Robin contends that while the end of exploitation, and capitalist social relations, would remove the major obstacle to human flourishing, human beings will remain finite and fragile creatures in a finite and fragile world. Robin in this way overlaps with Fredric Jameson’s remarkable essay on Soviet writer Andrei Platonov’s Chevengur, in which Jameson writes: “Utopia is merely the political and social solution of collective life: it does not do away with the tensions and contradictions inherent in both interpersonal relations and in bodily existence itself (among them, those of sexuality), but rather exacerbates those and allows them free rein, by removing the artificial miseries of money and self-preservation [since] it is not the function of Utopia to bring the dead back to life nor abolish death in the first place.” [15] Both Jameson and Robin recall Frankfurt School thinker Herbert Marcuse’s distinction between necessary and surplus repression: while the latter encompasses all of the unnecessary miseries attendant upon a class-stratified form of social organization that runs on exploitation, the former represents the necessary adjustments we make to socio-material reality and its limits.

    It is telling that while Star Trek-style replicators fall within the purview of the possible for Frase, hierarchy, like death, will always be with us, since he at least initially argues that status hierarchies will persist after the “organizing force of the capital relation has been removed” (59). Frase oscillates between describing these status hierarchies as an unavoidable, if unpleasant, necessity and a desirable counter to the uniformity of an egalitarian society. Frase illustrates this point in recalling Cory Doctorow’s Down and Out in the Magic Kingdom, a dystopian novel that depicts a world where all people’s needs are met at the same time that everyone competes for reputational “points”—called Whuffie—on the model of Facebook “likes” and Twitter retweets. Frase’s communism here resembles the world of Black Mirror described above. Frase then shifts from the rhetoric of necessity to qualified praise in an extended discussion of Dogecoin, an alternative currency used to tip or “transfer a small number of [dogecoins] to another Internet user in appreciation of their witty and helpful contributions” (60). Yet Dogecoin is mostly a joke among cryptocurrencies, and, like many of them, one whose “decentralized” nature scammers have used to their own advantage, most famously in 2015. In the words of one former enthusiast: “Unfortunately, the whole ordeal really deflated my enthusiasm for cryptocurrencies. I experimented, I got burned, and I’m moving on to less gimmicky enterprises.” [16]

    But how is this dystopian scenario either necessary or desirable?  Frase contends that “the communist society I’ve sketched here, though imperfect, is at least one in which conflict is no longer based in the opposition between wage workers and capitalists or on struggles…over scarce resources” (67). His account of how capitalism might be overthrown—through a guaranteed universal income—is insufficient, while resource scarcity and its relationship to techno-abundance remains unaddressed in a book that purports to take the environmental crisis seriously. What is of more immediate interest in the case of this coda to his most explicitly utopian future is Frase’s non-recognition of how internet status hierarchies and alternative currencies are modeled on and work in tandem with capitalist logics of entrepreneurial selfhood. We might consider Pierre Bourdieu’s theory of social and cultural capital in this regard, or how these digital platforms and their ever-shifting reputational hierarchies are the foundation of what Jodi Dean calls “communicative capitalism.” [17]

    Yet Frase concludes his chapter by telling his readers that it would be a “misnomer” to call his communist future an “egalitarian configuration.” Perhaps Frase offers his fully automated Facebook utopia as counterpoint to the Cold War era critique of utopianism in general and communism in particular: it leads to grey uniformity and universal mediocrity. This response—a variation on Frase’s earlier discussion of Star Trek’s “voluntary hierarchy”—accepts the premise of the Cold War anti-utopian criticisms, i.e., that the human differences that make life interesting, and generate new possibilities, require hierarchy of some kind. In other words, this exercise in utopian speculation cannot move outside the horizon of our own present-day ideological common sense.

    We can again see this tendency at the very start of the book. Is total automation an unambiguous utopia or a reflection of Frase’s own unexamined ideological proclivities, on view throughout the various futures, for high tech solutions to complex socio-ecological problems? For various flavors of deus ex machina—from 3D printers to replicators to robotic bees—in place of social actors changing the material realities that constrain them through collective action? Conversely, are the “crisis of scarcity” and the visions of ecological apocalypse Frase evokes intermittently throughout his book purely dystopian or ideological? Surely, since Thomas Malthus’s 1798 Essay on Population, apologists for various ruling orders have used the threat of scarcity and material limits to justify inequity, exploitation, and class division: poverty is “natural.” Yet, can’t we also discern in contemporary visions of apocalypse a radical desire to break with a stagnant capitalist status quo? And in the case of the environmental state of emergency, don’t we have a rallying point for constructing a very different eco-socialist order?

    Frase is a founding editor of Jacobin magazine and a long-time member of the Democratic Socialists of America. He nonetheless distinguishes himself from the reformist and electoral currents at those organizations, in addition to much of what passes for orthodox Marxism. Rather than full employment—for example—Frase calls for the abolition of work and the working class in a way that echoes more radical anti-work and post-workerist modes of communist theory. So, in a recent editorial published by Jacobin, entitled “What It Means to Be on the Left,” Frase differentiates himself from many of his DSA comrades in declaring that “The socialist project, for me, is about something more than just immediate demands for more jobs, or higher wages, or universal social programs, or shorter hours. It’s about those things. But it’s also about transcending, and abolishing, much of what we think defines our identities and our way of life.” Frase goes on to sketch an emphatically utopian communist horizon that includes the abolition of class, race, and gender as such. These are laudable positions, especially when we consider a new new left milieu some of whose most visible representatives dismiss race and gender concerns as “identity politics,” while redefining radical class politics as a better deal for some amorphous US working class within an apparently perennial capitalist status quo.

    Frase’s utopianism in this way represents an important counterpoint within this emergent left. Yet his book-length speculative exercise—policy proposals cloaked as possible scenarios—reveals his own enduring investments in the simple “forces vs. relations of production” dichotomy that underwrote so much of twentieth-century state socialism, with its disastrous ecological record and human cost. And this simple faith in the emancipatory potential of capitalist technology—given the right political circumstances, and despite the complete absence of any account of what creating those circumstances might entail—frequently resembles a social democratic version of the Californian ideology, or the kind of Silicon Valley conventional wisdom pushed by Elon Musk: a more efficient, egalitarian, and techno-utopian version of US capitalism. Frase mines various left communist currents, from post-operaismo to communization, only to evacuate these currents of their radical charge in marrying them to technocratic and technophilic reformism; hence UBI plus “replicators” will spontaneously lead to full communism. Four Futures is in this way an important, because symptomatic, expression of what Jason Smith (2017) calls “social democratic accelerationism,” animated by a strange faith in magical machines in addition to a disturbing animus toward ecology, non-human life, and the natural world in general.

    _____

    Anthony Galluzzo earned his PhD in English Literature at UCLA. He specializes in radical transatlantic English-language literary cultures of the late eighteenth and nineteenth centuries. He has taught at the United States Military Academy at West Point, Colby College, and NYU.

    Back to the essay

    _____

    Notes

    [1] See Tom Moylan, Scraps of the Untainted Sky: Science Fiction, Utopia, Dystopia (Boulder: Westview Press, 2000).

    [2] Peter Frase, Four Futures: Life After Capitalism (London: Verso Books, 2016), 3.

    [3] Ibid, 27.

    [4] Fredric Jameson, “Cognitive Mapping,” in C. Nelson and L. Grossberg, eds., Marxism and the Interpretation of Culture (Urbana: University of Illinois Press, 1990), 6.

    [5] McKenzie Wark, “Cognitive Mapping,” Public Seminar (May 2015).

    [6] Frase, 24.

    [7] This space fantasy also exhibits the escapist, mythopoetic, and even reactionary elements Frase notes—for example, its hereditary caste of Jedi fighters and their ancient religion—as Benjamin Hufbauer notes, “in many ways, the political meanings in Star Wars were and are progressive, but in other ways the film can be described as middle-of-the-road, or even conservative.” Hufbauer, “The Politics Behind the Original Star Wars,” Los Angeles Review of Books (December 21, 2015).

    [8] Frase, 49.

    [9] Angry Workers World, “Soldering On: Report on Working in a 3D-Printer Manufacturing Plant in London,” libcom.org (March 24, 2017).

    [10] Johan Söderberg, “A Critique of 3D Printing as a Critical Technology,” P2P Foundation (March 16, 2013).

    [11] Franklin, “Star Trek in the Vietnam Era,” Science Fiction Studies, #62 = Volume 21, Part 1 (March 1994).

    [12] Frase, 6.

    [13] Ruth Levitas, Utopia As Method: The Imaginary Reconstitution of Society (London: Palgrave Macmillan, 2013), xiv-xv.

    [14] Frase, 58.

    [15]  Jameson, “Utopia, Modernism, and Death,” in Seeds of Time (New York: Columbia University Press, 1996), 110.

    [16]  Kaleigh Rogers, “The Guy Who Ruined Dogecoin,” VICE Motherboard (March 6, 2015).

    [17] See Jodi Dean, Democracy and Other Neoliberal Fantasies: Communicative Capitalism and Left Politics (Durham: Duke University Press, 2009).

    _____

    Works Cited

    • Frase, Peter. 2016. Four Futures: Life After Capitalism. New York: Verso.
    • Jameson, Fredric. 1982. “Progress vs. Utopia; Or Can We Imagine The Future?” Science Fiction Studies 9:2 (July). 147-158.
    • Jameson, Fredric. 1996. “Utopia, Modernism, and Death,” in Seeds of Time. New York: Columbia University Press.
    • Jameson, Fredric. 2005. Archaeologies of the Future: The Desire Called Utopia and Other Science Fictions. London: Verso.
    • Levitas, Ruth. 2013. Utopia As Method: The Imaginary Reconstitution of Society. London: Palgrave Macmillan.
    • Moylan, Tom. 2000. Scraps of the Untainted Sky: Science Fiction, Utopia, Dystopia. Boulder: Westview Press.
    • Smith, Jason E. 2017. “Nowhere To Go: Automation Then And Now.” The Brooklyn Rail (March 1).

     

  • Michael Miller — Seeing Ourselves, Loving Our Captors: Mark Jarzombek’s Digital Stockholm Syndrome in the Post-Ontological Age

    Michael Miller — Seeing Ourselves, Loving Our Captors: Mark Jarzombek’s Digital Stockholm Syndrome in the Post-Ontological Age

    a review of Mark Jarzombek, Digital Stockholm Syndrome in the Post-Ontological Age (University of Minnesota Press Forerunners Series, 2016)

    by Michael Miller

    ~

    All existence is Beta, basically. A ceaseless codependent improvement unto death, but then death is not even the end. Nothing will be finalized. There is no end, no closure. The search will outlive us forever.

    — Joshua Cohen, Book of Numbers

    Being a (in)human is to be a beta tester

    — Mark Jarzombek, Digital Stockholm Syndrome in the Post-Ontological Age

    Too many people have access to your state of mind

    — Renata Adler, Speedboat

    Whenever I read through Vilém Flusser’s vast body of work and encounter, in print no less, one of the core concepts of his thought—which is that “human communication is unnatural” (2002, 5)—I find it nearly impossible to shake the feeling that the late Czech-Brazilian thinker must have derived some kind of preternatural pleasure from insisting on the ironic gesture’s repetition. Flusser’s rather grim view that “there is no possible form of communication that can communicate concrete experience to others” (2016, 23) leads him to declare that the intersubjective dimension of communication implies inevitably the existence of a society which is, in his eyes, itself an unnatural institution. One can find throughout Flusser’s work traces of his life-long attempt to think through the full philosophical implications of European nihilism, and evidence of this intellectual engagement can be readily found in his theories of communication.

    One of Flusser’s key ideas that draws me in is his notion that human communication affords us the ability to “forget the meaningless context in which we are completely alone and incommunicado, that is, the world in which we are condemned to solitary confinement and death: the world of ‘nature’” (2002, 4). In order to help stave off the inexorable tide of nature’s muted nothingness, Flusser suggests that humans communicate by storing memories, externalized thoughts whose eventual transmission binds two or more people into a system of meaning. Only when an intersubjective system of communication like writing or speech is established between people does the purpose of our enduring commitment to communication become clear: we communicate in order “to become immortal within others” (2016, 31). Flusser’s playful positing of the ironic paradox inherent in the improbability of communication—that communication is unnatural to the human but it is also “so incredibly rich despite its limitations” (26)—enacts its own impossibility. In a representatively ironic sense, Flusser’s point is that all we are able to fully understand is our inability to understand fully.

    As Flusser’s theory of communication can be viewed as his response to the twentieth-century’s shifting technical-medial milieu, his ideas about communication and technics eventually led him to conclude that “the original intention of producing the apparatus, namely, to serve the interests of freedom, has turned on itself…In a way, the terms human and apparatus are reversed, and human beings operate as a function of the apparatus. A man gives an apparatus instructions that the apparatus has instructed him to give” (2011, 73).[1] Flusser’s skeptical perspective toward the alleged affordances of human mastery over technology is most assuredly not the view that Apple or Google would prefer you harbor (not-so-secretly). Any cursory glance at Wired or the technology blog at Inside Higher Ed, to pick two low-hanging examples, would yield a radically different perspective than the one Flusser puts forth in his work. In fact, Flusser writes, “objects meant to be media may obstruct communication” (2016, 45). If media objects like the technical apparatuses of today actually obstruct communication, then why are we so often led to believe that they facilitate it? And to shift registers just slightly, if everything is said to be an object of some kind—even technical apparatuses—then cannot one be permitted to claim daily communion with all kinds of objects? What happens when an object—and an object as obsolete as a book, no less—speaks to us? Will we still heed its call?

    ***

    Speaking in its expanded capacity as neither narrator nor focalized character, the book as literary object addresses us in a direct and antagonistic fashion in the opening line of Joshua Cohen’s 2015 novel Book of Numbers. “If you’re reading this on a screen, fuck off. I’ll only talk if I’m gripped with both hands” (5), the book-object warns. As Cohen’s narrative tells the story of a struggling writer named Joshua Cohen (whose backstory corresponds mostly to the historical-biographical author Joshua Cohen) who is contracted to ghostwrite the memoir of another Joshua Cohen (who is the CEO of a massive Google-type company named Tetration), the novel’s middle section provides an “unedited” transcript of the conversation between the two Cohens in which the CEO recounts his upbringing and tremendous business success in and around the Bay Area from the late 1970s up to 2013 of the narrative’s present. The novel’s Silicon Valley setting, nominal and characterological doubling, and structural narrative coupling of the two Cohens’ lives make it all but impossible to distinguish the personal histories of Cohen-the-CEO and Cohen-the-narrator from the cultural history of the development of personal computing and networked information technologies. The history of one Joshua Cohen—or all Joshua Cohens—is indistinguishable from the history of intrusive computational/digital media. “I had access to stuff I shouldn’t have had access to, but then Principal shouldn’t have had such access to me—cameras, mics,” Cohen-the-narrator laments. In other words, as Cohen-the-narrator ghostwrites another Cohen’s memoir within the context of the broad history of personal computing and the emergence of algorithmic governance and surveillance, the novel invites us to consider how the history of an individual—or every individual, it does not really matter—is also nothing more or anything less than the surveilled history of its data usage, which is always written by someone or something else, the ever-present Not-Me (who just might have the same name as me). The Self is nothing but a networked repository of information to be mined in the future.

    While the novel’s opening line addresses its hypothetical reader directly, its relatively benign warning fixes reader and text in a relation of rancor. The object speaks![2] And yet tech-savvy twenty-first century readers are not the only ones who seem to be fed up with books; books too are fed up with us, and perhaps rightly so. In an age when objects are said to speak vibrantly and withdraw infinitely; processes like human cognition are considered to be operative in complex technical-computational systems; and when the only excuse to preserve the category of “subjective experience” we are able to muster is that it affords us the ability “to grasp how networks technically distribute and disperse agency,” it would seem at first glance that the second-person addressee of the novel’s opening line would intuitively have to be a reading, thinking subject.[3] Yet this is the very same reading subject who has been urged by Cohen’s novel to politely “fuck off” if he or she has chosen to read the text on a screen. And though the text does not completely dismiss its readers who still prefer “paper of pulp, covers of board and cloth” (5), a slight change of preposition in its title points exactly to what the book fears most of all: Book as Numbers. The book-object speaks, but only to offer an ominous admonition: neither the book nor its readers ought to be reducible to computable numbers.

    The transduction of literary language into digital bits eliminates the need for a phenomenological reading subject, and it suggests too that literature—or even just language in a general sense—and humans in particular are ontologically reducible to data objects that can be “read” and subsequently “interpreted” by computational algorithms. As Cohen’s novel collapses the distinction between author, narrator, character, and medium, its narrator observes that “the only record of my one life would be this record of another’s” (9). But in this instance, the record of one’s (or another’s) life is merely the history of how personal computational technologies have effaced the phenomenological subject. How have we arrived at the theoretically permissible premise that “People matter, but they don’t occupy a privileged subject position distinct from everything else in the world” (Huehls 20)? How might the “turn toward ontology” in theory/philosophy be viewed as contributing to our present condition?

    ***

    Mark Jarzombek’s Digital Stockholm Syndrome in the Post-Ontological Age (2016) provides a brief yet stylistically ironic and incisive interrogation of how recent iterations of post- or inhumanist theory have found a strange bedfellow in the rhetorical boosterism that accompanies the alleged affordances of digital technologies and big data. Despite the differences between these two seemingly unrelated discourses, they both share a particularly critical or diminished conception of the anthro- in “anthropocentrism” that borrows liberally from the postulates of the “ontological turn” in theory/philosophy (Rosenberg n.p.). While the parallels between these discourses are not made explicit in Jarzombek’s book, Digital Stockholm Syndrome asks us to consider how a shared commitment to an ontologically diminished view of “the human” that galvanizes both technological determinism’s anti-humanism and post- or inhumanist theory has found its common expression in recent philosophies of ontology. In other words, the problem Digital Stockholm Syndrome takes up is this: what kind of theory of ontology, Being, and, to a lesser extent, subjectivity appeals equally to contemporary philosophers and Silicon Valley tech-gurus? Jarzombek gestures toward such an inquiry early on: “What is this new ontology?” he asks, and “What were the historical situations that produced it? And how do we adjust to the realities of the new Self?” (x).

    A curious set of related philosophical commitments united by their efforts to “de-center” and occasionally even eject “anthropocentrism” from the critical conversation constitutes some of the realities swirling around Jarzombek’s “new Self.”[4] Digital Stockholm Syndrome provocatively locates the conceptual legibility of these philosophical realities squarely within an explicitly algorithmic-computational historical milieu. By inviting such a comparison, Jarzombek’s book encourages us to contemplate how contemporary ontological thought might mediate our understanding of the historical and philosophical parallels that bind the tradition of inhumanist philosophical thinking and the rhetoric of twenty-first-century digital media.[5]

    In much the same way that Alexander Galloway has argued for a conceptual confluence that exceeds the contingencies of coincidence between “the structure of ontological systems and the structure of the most highly evolved technologies of post-Fordist capitalism” (347), Digital Stockholm Syndrome argues that today’s world is “designed from the micro/molecular level to fuse the algorithmic with the ontological” (italics in original, x).[6] We now understand Being as the informatic/algorithmic byproduct of what ubiquitous computational technologies have gathered and subsequently fed back to us. Our personal histories—or simply the records of our data use (and its subsequent use of us)—comprise what Jarzombek calls our “ontic exhaust…or what data experts call our data exhaust…[which] is meticulously scrutinized, packaged, formatted, processed, sold, and resold to come back to us in the form of entertainment, social media, apps, health insurance, clickbait, data contracts, and the like” (x).

    The empty second-person pronoun is placed on equal ontological footing with, and perhaps even defined by, its credit score, medical records, 4G data usage, Facebook likes, and Tweets. “The purpose of these ‘devices,’” Jarzombek writes, “is to produce, magnify, and expose our ontic exhaust” (25). We give our ontic exhaust away for free every time we log into Facebook because it, in return, feeds back to us the only sense of “self” we are able to identify as “me.”[7] If “who we are cannot be traced from the human side of the equation, much less than the analytic side. ‘I’ am untraceable” (31), then why do techno-determinists and contemporary oracles of ontology operate otherwise? What accounts for their shared commitment to formalizing ontology? Why must the Self be tracked and accounted for like a map or a ledger?

    As this “new Self,” which Jarzombek calls the “Being-Global” (2), travels around the world and checks its bank statement in Paris or tags a photo of a Facebook friend in Berlin while sitting in a cafe in Amsterdam, it leaks ontic exhaust everywhere it goes. While the hoovering up of ontic exhaust by GPS and commercial satellites “make[s] us global,” it also inadvertently redefines Being as a question of “positioning/depositioning” (1). For Jarzombek, the question of today’s ontology is not so much a matter of asking “what exists?” but of asking “where is it and how can it be found?” Instead of the human who attempts to locate and understand Being, now Being finds us, but only as long as we allow ourselves to be located.

    Today’s ontological thinking, Jarzombek points out, is not really interested in asking questions about Being—it is too “anthropocentric.”[8] Ontology in the twenty-first century attempts to locate Being by gathering data, keeping track, tracking changes, taking inventory, making lists, listing litanies, crunching the numbers, and searching the database. “Can I search for it on Google?” is now the most important question for ontological thought in the twenty-first century.

    Ontological thinking—which today means ontological accounting, or finding ways to account for the ontologically actuarial—is today’s philosophical equivalent of best practices for data management, except that there is no difference between one’s data and one’s Self. Any ontological difference that might have once stubbornly separated you from data about you no longer applies. Digital Stockholm Syndrome identifies this shift with the formulation: “From ontology to trackology” (71).[9] The philosophical shift that has allowed data about the Self to become the ontological equivalent of the Self emerges out of what Jarzombek calls an “animated ontology.”

    “In this ‘animated ontology,’ subject position and object position are indistinguishable…The entire system of humanity is microprocessed through the grid of sequestered empiricism” (31, 29). Jarzombek is careful to distinguish his “animated ontology” from the recently rebooted romanticisms which merely turn their objects into vibrant subjects. He notes that “the irony is that whereas the subject (the ‘I’) remains relatively stable in its ability to self-affirm (the lingering by-product of the psychologizing of the modern Self), objectivity (as in the social sciences) collapses into the illusions produced by the global cyclone of the informatic industry” (28).[10] By devising tricky new ways to flatten ontology (all of which are made via po-faced linguistic fiat), “the human and its (dis/re-)embodied computational signifiers are on equal footing” (32). I do not define my data, but my data define me.

    ***

    Digital Stockholm Syndrome asserts that what exists in today’s ontological systems—systems both philosophical and computational—is what can be tracked and stored as data. Jarzombek sums up our situation with another pithy formulation: “algorithmic modeling + global positioning + human scaling + computational speed = data geopolitics” (12). While the universalization of tracking technologies defines the “global” in Jarzombek’s Being-Global, it also provides us with another way to understand the humanities’ enthusiasm for GIS and other digital mapping platforms as institutional-disciplinary expressions of a “bio-chemo-techno-spiritual-corporate environment that feeds the Human its sense-of-Self” (5).

    Mark Jarzombek, Digital Stockholm Syndrome in the Post-Ontological Age

    One wonders if the incessant cultural and political reminders regarding the humanities’ waning relevance have moved humanists to reconsider the very basic intellectual terms of their broad disciplinary pursuits. Why is it humanities scholars who are in some cases most visibly leading the charge to overturn many decades of humanist thought? Has the internalization of this depleted conception of the human reshaped the basic premises of humanities scholarship, Digital Stockholm Syndrome wonders? What would it even mean to pursue a “humanities” purged of “the human”? And is it fair to wonder if this impoverished image of humanity has trickled down into the formation of new (sub)disciplines?[11]

    In a late chapter titled “Onto-Paranoia,” Jarzombek finally arrives at a working definition of Digital Stockholm Syndrome: data visualization. For Jarzombek, data-visualization “has been devised by the architects of the digital world” to ease the existential torture—or “onto-torture”—that is produced by Security Threats (59). Security threats are threatening because they remind us that “security is there to obscure the fact that [its] whole purpose is to produce insecurity” (59). When a system fails, or when a problem occurs, we need to be conscious of the fact that the system has not really failed; “it means that the system is working” (61).[12] The Social, the Other, the Not-Me—these are all variations of the same security threat, which is just another way of defining “indeterminacy” (66). So if everything is working the way it should, we rarely consider the full implications of indeterminacy—both technical and philosophical—because to do so might make us paranoid, or worse: we would have to recognize ourselves as (in)human subjects.

    Data-visualizations, however, provide a soothing salve which we can (self-)apply in order to ease the pain of our “onto-torture.” Visualizing data and creating maps of our data use provide us with a useful and also pleasurable tool with which we locate ourselves in the era of “post-ontology.”[13] “We experiment with and develop data visualization and collection tools that allow us to highlight urban phenomena. Our methods borrow from the traditions of science and design by using spatial analytics to expose patterns and communicating those results, through design, to new audiences,” we are told by one data-visualization project (http://civicdatadesignlab.org/).  As we affirm our existence every time we travel around the globe and self-map our location, we silently make our geo-data available for those who care to sift through it and turn it into art or profit.

    “It is a paradox that our self-aestheticizing performance as subjects…feeds into our ever more precise (self-)identification as knowable and predictable (in)human-digital objects,” Jarzombek writes. Yet we ought not to spend too much time contemplating the historical and philosophical complexities that have helped create this paradoxical situation. Perhaps it is best we do not reach the conclusion that mapping the Self as an object on digital platforms increases the creeping unease that arises from the realization that we are mappable, hackable, predictable, digital objects––that our data are us. We could, instead, celebrate how our data (which we are and which is us) is helping to change the world. “’Big data’ will not change the world unless it is collected and synthesized into tools that have a public benefit,” the same data visualization project announces on its website’s homepage.

    While it is true that I may be a little paranoid, I have finally rested easy after having read Digital Stockholm Syndrome because I now know that my data/I are going to good use.[14] Like me, maybe you find comfort in knowing that your existence is nothing more than a few pixels in someone else’s data visualization.

    _____

    Michael Miller is a doctoral candidate in the Department of English at Rice University. His work has appeared or is forthcoming in symplokē and the Journal of Film and Video.

    Back to the essay

    _____

    Notes

[1] I am reminded of a similar argument advanced by Tung-Hui Hu in his A Prehistory of the Cloud (2016). Hu encapsulates Flusser’s spirit of healthy skepticism toward technical apparatuses: the situation that both thinkers fear is one in which “the technology has produced the means of its own interpretation” (xix).

[2] It is not my aim to wade explicitly into discussions regarding “object-oriented ontology” or other related philosophical developments. For the purposes of this essay, however, Andrew Cole’s critique of OOO as a “new occasionalism” will be useful. “‘New occasionalism,’” Cole writes, “is the idea that when we speak of things, we put them into contact with one another and ourselves” (112). In other words, speaking of objects makes them objectively real, though this is only possible when everything is considered to be an object. The question, though, is not what is or is not an object, but rather what it means to be. For related arguments regarding the relation between OOO/speculative realism/new materialism and mysticism, see Sheldon (2016), Altieri (2016), Wolfendale (2014), O’Gorman (2013), and to a lesser extent Colebrook (2013).

    [3] For the full set of references here, see Bennett (2010), Hayles (2014 and 2016), and Hansen (2015).

[4] While I concede that no thinker of “post-humanism” worth her philosophical salt would admit the possibility or even desirability of purging the sins of “correlationism” from critical thought altogether, I cannot help but view such occasional posturing with a skeptical eye. For example, I find convincing Barbara Herrnstein-Smith’s recent essay “Scientizing the Humanities: Shifts, Collisions, Negotiations,” in which she compares the drive in contemporary critical theory to displace “the human” from humanistic inquiry to the impossible and equally incomprehensible task of overcoming the “‘astro’-centrism of astronomy or the biocentrism of biology” (359).

[5] In “A Modest Proposal for the Inhuman,” Julian Murphet identifies four interrelated strands of post- or inhumanist thought that combine a kind of metaphysical speculation with a full-blown demolition of traditional ontology’s conceptual foundations. They are: “(1) cosmic nihilism, (2) molecular bio-plasticity, (3) technical accelerationism, and (4) animality. These sometimes overlapping trends are severally engaged in the mortification of humankind’s stubborn pretensions to mastery over the domain of the intelligible and the knowable in an era of sentient machines, routine genetic modification, looming ecological disaster, and irrefutable evidence that we share 99 percent of our biological information with chimpanzees” (653).

[6] The full quotation from Galloway’s essay reads: “Why, within the current renaissance of research in continental philosophy, is there a coincidence between the structure of ontological systems and the structure of the most highly evolved technologies of post-Fordist capitalism? […] Why, in short, is there a coincidence between today’s ontologies and the software of big business?” (347). Digital Stockholm Syndrome begins by accepting Galloway’s provocations as descriptive instead of speculative. We do not necessarily wonder in 2017 if “there is a coincidence between today’s ontologies and the software of big business”; we now wonder instead how such a confluence came to be.

[7] Wendy Hui Kyong Chun makes a similar point in her 2016 monograph Updating to Remain the Same: Habitual New Media. She writes, “If users now ‘curate’ their lives, it is because their bodies have become archives” (x-xi). While there is not ample space here to discuss the full theoretical implications of her book, Chun’s discussion of the inherently gendered dimension to confession, self-curation as self-exposition, and online privacy as something that only the unexposed deserve (hence the need for preemptive confession and self-exposition on the internet) in digital/social media networks is tremendously relevant to Jarzombek’s Digital Stockholm Syndrome, as both texts consider the Self as a set of mutable and “marketable/governable/hackable categories” (Jarzombek 26) that are collected without our knowledge and subsequently fed back to the data/media user in the form of its own packaged and unique identity. For recent similar variations of this argument, see Simanowski (2017) and McNeill (2012).

I also think Chun’s book offers a helpful tool for thinking through recent confessional memoirs or instances of “autotheory” (fictionalized or not) like Maggie Nelson’s The Argonauts (2015), Sheila Heti’s How Should a Person Be? (2010), Marie Calloway’s what purpose did i serve in your life (2013), and perhaps to a lesser degree Tao Lin’s Richard Yates (2010) and Taipei (2013), Natasha Stagg’s Surveys (2016), and Ben Lerner’s Leaving the Atocha Station (2011) and 10:04 (2014). The extent to which these texts’ varied formal-aesthetic techniques can be said to be motivated by political aims is very much up for debate, but nonetheless, I think it is fair to say that many of them revel in the reveal. That is to say, via confession or self-exposition, many of these novels enact the allegedly performative subversion of political power by documenting their protagonists’ and/or narrators’ social/political acts of transgression. Chun notes, however, that this strategy of self-revealing performs “resistance as a form of showing off and scandalizing, which thrives off moral outrage. This resistance also mimics power by out-spying, monitoring, watching, and bringing to light, that is, doxing” (151). The term “autotheory,” which has been applied to Nelson’s The Argonauts in particular, takes on a very different meaning in this context. “Autotheory” can be considered a theory of the self, or a self-theorization, or perhaps even the idea that personal experience is itself a kind of theory. I wonder, though, how its meaning would change if the prefix “auto” were understood within a media-theoretical framework not as “self” but as “automation.” “Autotheory” becomes, then, an automatization of theory or theoretical thinking, but also a theoretical automatization; or more to the point: what if “autotheory” describes instead a theorization of the Self or experience wherein “the self” is only legible as the product of automated computational-algorithmic processes?

    [8] Echoing the critiques of “correlationism” or “anthropocentrism” or what have you, Jarzombek declares that “The age of anthrocentrism is over” (32).

    [9] Whatever notion of (self)identity the Self might find to be most palatable today, Jarzombek argues, is inevitably mediated via global satellites. “The intermediaries are the satellites hovering above the planet. They are what make us global–what make me global” (1), and as such, they represent the “civilianization” of military technologies (4). What I am trying to suggest is that the concepts and categories of self-identity we work with today are derived from the informatic feedback we receive from long-standing military technologies.

[10] Here Jarzombek seems to be suggesting that the “object” in the “objectivity” of “the social sciences” has been carelessly conflated with the “object” in “object-oriented” philosophy. The prioritization of all things “objective” in both philosophy and science has inadvertently produced this semantic and conceptual slippage. Data objects about the Self exist, and thus, by existing, they determine what is objective about the Self. In this new formulation, what is objective about the Self or subject, in other words, is what can be verified as information about the Self. In Indexing It All: The Subject in the Age of Documentation, Information, and Data (2014), Ronald Day argues that these global tracking technologies supplant traditional ontology’s “ideas or concepts of our human manner of being” and have in the process “subsume[d] and subvert[ed] the former roles of personal judgment and critique in personal and social beings and politics” (1). While such technologies might be said to obliterate “traditional” notions of subjectivity, judgment, and critique, Day demonstrates how this simultaneous feeding-forward and feeding-back of data-about-the-Self represents the return of autoaffection, though in his formulation self-presence is defined as information or data-about-the-Self whose authenticity is produced when it is fact-checked against a biographical database (3)––self-presence is a presencing of data-about-the-Self. This is all to say that the Self’s informational “aboutness”––its representation in and as data––comes to stand in for the Self’s identity, which can only be comprehended as “authentic” in its limited metaphysical capacity as a general informatic or documented “aboutness.”

[11] Flusser is again instructive on this point, albeit in his own idiosyncratic way. Drawing attention to the strange unnatural plurality in the term “humanities,” he writes, “The American term humanities appropriately describes the essence of these disciplines. It underscores that the human being is an unnatural animal” (2002, 3). The plurality of “humanities,” as opposed to the singular “humanity,” constitutes for Flusser a disciplinary admission not only that the category of “the human” is unnatural, but that the study of such an unnatural thing is itself unnatural as well. I think it is also worth pointing out that in the context of Flusser’s observation, we might begin to situate the rise of “the supplemental humanities” as an attempt to redefine the value of a humanities education. The spatial humanities, the energy humanities, the medical humanities, the digital humanities, etc.—it is not difficult to see how these disciplinary off-shoots consider themselves supplements to whatever it is they think “the humanities” are up to; regardless, their institutional injection into traditional humanistic discourse will undoubtedly improve both (sub)disciplines, with the tacit acknowledgment being that the latter has just a little more to gain from the former in terms of skills, technical know-how, and data management. Many thanks to Aaron Jaffe for bringing this point to my attention.

    [12] In his essay “Algorithmic Catastrophe—The Revenge of Contingency,” Yuk Hui notes that “the anticipation of catastrophe becomes a design principle” (125). Drawing from the work of Bernard Stiegler, Hui shows how the pharmacological dimension of “technics, which aims to overcome contingency, also generates accidents” (127). And so “as the anticipation of catastrophe becomes a design principle…it no longer plays the role it did with the laws of nature” (132). Simply put, by placing algorithmic catastrophe on par with a failure of reason qua the operations of mathematics, Hui demonstrates how “algorithms are open to contingency” only insofar as “contingency is equivalent to a causality, which can be logically and technically deduced” (136). To take Jarzombek’s example of the failing computer or what have you, while the blue screen of death might be understood to represent the faithful execution of its programmed commands, we should also keep in mind that the obverse of Jarzombek’s scenario would force us to come to grips with how the philosophical implications of the “shit happens” logic that underpins contingency-as-(absent) causality “accompanies and normalizes speculative aesthetics” (139).

[13] I am reminded here of one of the six theses from the manifesto “What would a floating sheep map?,” jointly written by the Floating Sheep Collective, a cohort of geography professors. The fifth thesis reads: “Map or be mapped. But not everything can (or should) be mapped.” Under this thesis the Floating Sheep Collective raises crucial questions regarding the ownership of data produced by and about marginalized communities. Because it is not always clear when to map and when not to map, they decide that “with mapping squarely at the center of power struggles, perhaps it’s better that not everything be mapped.” If mapping technologies operate as ontological radars––the Self’s data points help point the Self towards its own ontological location in and as data––then it is fair to say that such operations are only philosophically coherent when they are understood to be framed within the parameters outlined by recent iterations of ontological thinking and its concomitant theoretical deflation of the rich conceptual make-up that constitutes “the human.” You can map the human’s data points, but only insofar as you buy into the idea that points of data map the human. See http://manifesto.floatingsheep.org/.

[14] “Mind/paranoia: they are the same word!” (Jarzombek 71).

    _____

    Works Cited

    • Adler, Renata. Speedboat. New York Review of Books Press, 1976.
• Altieri, Charles. “Are We Being Materialist Yet?” symplokē 24.1-2 (2016): 241-57.
    • Calloway, Marie. what purpose did i serve in your life. Tyrant Books, 2013.
• Chun, Wendy Hui Kyong. Updating to Remain the Same: Habitual New Media. The MIT Press, 2016.
    • Cohen, Joshua. Book of Numbers. Random House, 2015.
    • Cole, Andrew. “The Call of Things: A Critique of Object-Oriented Ontologies.” minnesota review 80 (2013): 106-118.
    • Colebrook, Claire. “Hypo-Hyper-Hapto-Neuro-Mysticism.” Parrhesia 18 (2013).
    • Day, Ronald. Indexing It All: The Subject in the Age of Documentation, Information, and Data. The MIT Press, 2014.
    • Floating Sheep Collective. “What would a floating sheep map?” http://manifesto.floatingsheep.org/.
    • Flusser, Vilém. Into the Universe of Technical Images. Translated by Nancy Ann Roth. University of Minnesota Press, 2011.
    • –––. The Surprising Phenomenon of Human Communication. 1975. Metaflux, 2016.
    • –––. Writings, edited by Andreas Ströhl. Translated by Erik Eisel. University of Minnesota Press, 2002.
    • Galloway, Alexander R. “The Poverty of Philosophy: Realism and Post-Fordism.” Critical Inquiry 39.2 (2013): 347-366.
    • Hansen, Mark B.N. Feed Forward: On the Future of Twenty-First Century Media. Duke University Press, 2015.
• Hayles, N. Katherine. “Cognition Everywhere: The Rise of the Cognitive Nonconscious and the Costs of Consciousness.” New Literary History 45.2 (2014): 199-220.
    • –––. “The Cognitive Nonconscious: Enlarging the Mind of the Humanities.” Critical Inquiry 42 (Summer 2016): 783-808.
• Herrnstein-Smith, Barbara. “Scientizing the Humanities: Shifts, Collisions, Negotiations.” Common Knowledge 22.3 (2016): 353-72.
    • Heti, Sheila. How Should a Person Be? Picador, 2010.
    • Hu, Tung-Hui. A Prehistory of the Cloud. The MIT Press, 2016.
    • Huehls, Mitchum. After Critique: Twenty-First Century Fiction in a Neoliberal Age. Oxford University Press, 2016.
• Hui, Yuk. “Algorithmic Catastrophe–The Revenge of Contingency.” Parrhesia 23 (2015): 122-43.
    • Jarzombek, Mark. Digital Stockholm Syndrome in the Post-Ontological Age. University of Minnesota Press, 2016.
    • Lin, Tao. Richard Yates. Melville House, 2010.
    • –––. Taipei. Vintage, 2013.
    • McNeill, Laurie. “There Is No ‘I’ in Network: Social Networking Sites and Posthuman Auto/Biography.” Biography 35.1 (2012): 65-82.
• Murphet, Julian. “A Modest Proposal for the Inhuman.” Modernism/Modernity 23.3 (2016): 651-70.
• Nelson, Maggie. The Argonauts. Graywolf Press, 2015.
    • O’Gorman, Marcel. “Speculative Realism in Chains: A Love Story.” Angelaki: Journal of the Theoretical Humanities 18.1 (2013): 31-43.
• Rosenberg, Jordana. “The Molecularization of Sexuality: On Some Primitivisms of the Present.” Theory and Event 17.2 (2014): n.p.
    • Sheldon, Rebekah. “Dark Correlationism: Mysticism, Magic, and the New Realisms.” symplokē 24.1-2 (2016): 137-53.
    • Simanowski, Roberto. “Instant Selves: Algorithmic Autobiographies on Social Network Sites.” New German Critique 44.1 (2017): 205-216.
    • Stagg, Natasha. Surveys. Semiotext(e), 2016.
    • Wolfendale, Peter. Object Oriented Philosophy: The Noumenon’s New Clothes. Urbanomic, 2014.

    Alexander R. Galloway — Brometheanism

    By Alexander R. Galloway
    ~

In recent months I’ve remained quiet about the speculative turn, mostly because I’m reluctant to rekindle the “Internet war” that broke out a couple of years ago, largely on blogs but also in various published papers. And while I’ve taught accelerationism in my recent graduate seminars, I opted for radio silence when accelerationism first appeared on the scene through the Accelerationist Manifesto, followed later by the book Inventing the Future. Truth is I have mixed feelings about accelerationism. Part of me wants to send “comradely greetings” to a team of well-meaning fellow Marxists and leave it at that. Lord knows the left needs to stick together. Likewise there’s little I can add that people like Steven Shaviro and McKenzie Wark haven’t already written, and articulated much better than I could. But at the same time a number of difficulties remain that are increasingly hard to overlook. To begin I might simply echo Wark’s original assessment of the Accelerationist Manifesto: two cheers for accelerationism, but only two!

What’s good about accelerationism? And what’s bad? I love the ambition and scope. Certainly the accelerationists’ willingness to challenge leftist orthodoxies is refreshing. I also like how the accelerationists demand that we take technology and science seriously. And I agree that there are important tactical uses of accelerationist or otherwise hypertrophic interventions (Eugene Thacker and I have referred to them as exploits). Still, I see accelerationism essentially as a tactic mistaken for a strategy. At the same time, this kind of accelerationism is precisely what dot-com entrepreneurs want to see from the left. Further, and most important, accelerationism is paternalistic, and thus suffers from the problems of elitism and, ultimately, reactionary politics.

Let me explain. I’ll talk first about Srnicek and Williams’ 2015 book Inventing the Future, and then address one of the central themes fueling the accelerationist juggernaut: Prometheanism. Well written, easy to read, and exhaustively footnoted, Inventing the Future is ostensibly a follow-up to the Accelerationist Manifesto, although the themes of the two texts are different and the authors almost never mention accelerationism in the book. (Srnicek in particular is nothing if not shrewd and agile: present at the christening of #A, we also find him on the masthead of the speculative realist reader, and today nosing in on “platform studies.” Wherever he alights next will doubtless portend future significance.) The book is vaguely similar to Michael Hardt and Antonio Negri’s Declaration from 2012 in that it tries to assess the current condition of the left while also providing a set of specific steps to be taken for the future. And while the accelerationists have garnered significantly more attention of late, mostly because their work feels so fresh and new, Hardt and Negri’s is the better book (and interestingly Srnicek and Williams never cite them).

    Inventing the Future

Inventing the Future has essentially two themes. The first consists in a series of denunciations of what they call “folk politics,” defined in terms of Occupy, the Zapatistas, Tiqqun, localism, and direct democracy, ostensibly in favor of a new “hegemony” of planetary social democracy (also known as Leninism). The second theme concerns an anti-work polemic focused on the universal basic income (UBI) and shortening the work week. Indeed, even as these two authors collaborate and mix their thoughts, there seem to be two books folded into one. This produces an interesting irony: while the first half of the book unabashedly denigrates anarchism in favor of Leninism, the second half focuses on that very theme (anti-work) that has defined anarchist theory since the split in the First International, if not since time immemorial.

What’s so wrong with “folk politics”? There are a few ways to answer this question. First, the accelerationists are clearly frustrated, and rightly so, by the failures of a left debilitated by “apathy, melancholy and defeat” (5). There’s a demographic explanation as well. This is the cri de coeur of a younger generation seeking to move beyond what are seen as the sclerotic failures of postmodern theory with all of its “culturalist” baggage (which too often is a codeword for punks, queers, women, and people of color — more on that in a moment).

    Folk politics includes “the fetishization of local spaces, immediate actions, transient gestures, and particularisms of all kinds” (3); it privileges the “small-scale, the authentic, the traditional and the natural” (10). The following virtues help fill out the definition:

    immediacy…tactics…inherently fleeting…the past…the voluntarist and spontaneous…the small…withdrawal or exit…the everyday…feeling…the particular…the ethical…the suffering of the particular and the authenticity of the local (10-11)

    Wow, that’s a lot of good stuff to get rid of. Still, they don’t quit there, targeting horizontalism of various kinds. Radical democracy is in the crosshairs too. Anti-representational politics is out as well. All the “from below” movements, from the undercommons to the black bloc, anything that smacks of “anarchism, council communism, libertarian communism and autonomism” (26) — it’s all under indictment. This unceasing polemic culminates in the book’s most potent sentence, if not also its most ridiculous, where the authors dismiss all of the following movements in one fell swoop:

    Occupy, Spain’s 15M, student occupations, left communist insurrectionists like Tiqqun and the Invisible Committee, most forms of horizontalism, the Zapatistas…localism…slow-food (11-12)

That scoops up a lot of people. And the reader is left to quibble over whatever principle of decision might group all these disparate factions together. But the larger point is clear: for Srnicek and Williams, folk politics emerged because of an outdated Left (i.e. the abject failures of social democracy and communism) (16ff.) and an outmaneuvered Left (i.e. the rampant successes of neoliberalism) (19ff.). Thus their goal is to update the left with a new ideology and to overhaul its infrastructure, allowing it to modernize and scale up to the level of the planet.

In the second half of the book, particularly in chapters 5 and 6, Srnicek and Williams elaborate their vision for anti-work and post-work. This hinges on the concept of full automation, and they provocatively assert that “the tendencies towards automation and the replacement of human labor should be enthusiastically accelerated” (109). Yet the details are scant. What kind of tech are we talking about? We get some vague references at the outset to “Open-source designs, copyleft creativity, and 3D printing” (1), then again later to “data collection (radio-frequency identification, big data)” and so on (110). But one thing this book does not provide is an examination of the technology of modern capitalism. (Srnicek’s Platform Capitalism is an improvement thematically but not substantively: he provides an analysis of political economy, but no tech audit.) Thus Inventing the Future has a sort of Wizard of Oz problem at its core: it’s not clear what clever devices are behind the curtain; we’re just supposed to assume that they will be sufficiently communistical if we all believe hard enough.

At the same time the authors come across as rather tone-deaf on the question of labor, bemoaning above all “the misery of not being exploited,” as if exploitation were some grand prize awarded to the subaltern. Further, they fail to address adequately the two key challenges of automation, both of which have been widely discussed in political and economic theory: first, that automation eliminates jobs for people who very much want and need them, leading to surplus populations, unemployment, migration, and entrenched poverty; and second, that automation transforms the organic composition of labor through deskilling and proletarianization, the offshoring of menial labor, and the introduction of the technical and specialist labor required to design, build, operate, and repair those seemingly “automagical” machines. In other words, under automation some people work less, but everyone works differently. Automation reduces work for some, but changes (and in fact often increases) work for others. Marx’s analysis of machines in Capital is useful here, where he addresses all of these various tendencies, from the elimination of labor and the increase in labor, to the transformation of the organic composition of labor — the last point being the most significant. (And while machines might help lubricate and increase the productive forces — not a bad thing — it’s clear that machines are absolutely not revolutionary actors for Marx. Optimistic interpretations gleaned from the Grundrisse notwithstanding, Marx defines machines essentially as large batteries for value. I have yet to find any evidence that today’s machines are any different.)

    So the devil is in the details: what kind of technology are we talking about? But perhaps more importantly, if you get rid of the “folk,” aren’t you also getting rid of the people? Srnicek and Williams try to address this in chapter 8, although I’m more convinced by Hardt and Negri’s “multitude,” Harney and Moten’s “undercommons,” or even formulations like “the part of no part” or the “inoperative community” found scattered across a variety of other texts. By the end Srnicek and Williams out themselves as reticular pessimists: let’s not specify “the proper form of organization” (162), let’s just let it happen naturally in an “ecology of organizations” (163). The irony being that we’re back to square one, and these anti-folk evangelists are hippy ecologists after all. (The reference to function over form [169] appears as a weak afterthought to help rationalize their decision, but it re-introduces the problem of techno-fetishism, this time a fetishism of the function.)

To summarize, accelerationism presents a rich spectrum of problems. The first stems from the notion that technology/automation will save us, replete with vague references to “the latest technological developments” unencumbered by any real details. Second is the question of capitalism itself. Despite the authors’ Marxist tendencies, it’s not at all clear that accelerationism is anti-capitalist. In fact accelerationism would be better described as a form of post-capitalism, what Žižek likes to mock as “capitalism with a friendly face.” What is post-capitalism exactly? More capitalism? A modified form of capitalism? For this reason it becomes difficult to untangle accelerationism from the most visionary dreams of the business elite. Isn’t this exactly what dot-com entrepreneurs are calling for? Isn’t the avant-garde of acceleration taking place right now in Silicon Valley? This leads to a third point: accelerationism is a tactic mistaken for a strategy. Certainly accelerationist or otherwise hypertrophic methods are useful in a provisional, local, which is to say tactical way. But accelerationism is, in my view, naïve about how capitalism works at a strategic level. Capitalism wants nothing more than to accelerate. Adding to the acceleration will help capitalism, not hinder it. Capitalism is this accelerating force, from primitive accumulation on up to today. (Accelerationists don’t dispute this; they simply disagree on the moral status of capitalism.) Fourth and finally is the most important problem revealed by accelerationism, the problem of elitism and reactionary politics. Given unequal technological development, those who accelerate will necessarily do so on the backs of others who are forced to proletarianize. Thus accelerationists are faced with a kind of “internal colonialism” problem, meaning there must be a distinction made between those who accelerate and those who facilitate acceleration through their very bodies. We already know who suffers most under unequal technological acceleration, and it’s not young white male academics living in England. Thus their skepticism toward the “folk” is all too often a paternalistic skepticism toward the wants and needs of the generic population. Hence the need for accelerationists to talk glowingly about things like “engineering consent.” It’s hard to see where this actually leads. Or, more to the point, who leads: if not Leninists then who, technocrats? Philosopher kings?

    *

Accelerationism gains much inspiration from the philosophy of Prometheanism. If accelerationism provides a theory of political economy, Prometheanism supplies a theory of the subject. Yet it’s not always clear what people mean by this term. In a recent lecture titled “Prometheanism and Rationalism,” Peter Wolfendale defines Prometheanism in such general terms that it becomes a synonym for any number of things: history and historical change; being against fatalism and messianism; being against the aristocracy; being against Fukuyama; being for feminism; the UBI and post-capitalism; the Enlightenment and secularism; deductive logic; overcoming (perceived) natural limits; technology; “automation” (which as I’ve just indicated is the most problematic concept of them all). Even very modest and narrow definitions of Prometheanism — technology for humans to overcome natural limits — present their own problems and wind up largely deflating the sloganeering of it all. “Okay so both the hydrogen bomb and the contraceptive pill are equally Promethean? So then who adjudicates their potential uses?” And we’re left with Prometheanism as the latest YAM philosophy (Yet Another Morality).

    Still, Prometheanism has a particular vision for itself and it’s worth describing the high points. I can think of six specific qualities. (1) Prometheanism defines itself as posthuman or otherwise antihuman. (2) Prometheanism is an attempt to transcend the bounds of physical limitation. (3) Prometheanism promotes freedom, as in for instance the freedom to change the body through hormone therapy. (4) Prometheanism sees itself as politically progressive. (5) Prometheanism sees itself as being technologically savvy. (6) Prometheanism proposes to offer technical solutions to real problems.

But is any of this true? Interestingly, Bernard Stiegler provided an answer to some of these questions already in 1994, and it’s worth returning to his book from that year, Technics and Time, 1: The Fault of Epimetheus, to fill out a conversation that has, thus far, been mostly one-sided. Stiegler’s book is long and complicated, and touches on many different things, including technology and the increased rationalization of life, by way of some of Stiegler’s key influences, including Gilbert Simondon, André Leroi-Gourhan, and Bertrand Gille. Let me focus however on the second part of the book, where Stiegler examines the two brothers Epimetheus and Prometheus.

A myth about powers and qualities, the fable of Epimetheus and Prometheus is recounted by the sophist Protagoras starting at line 320c in Plato’s dialogue of that name. In Stiegler’s retelling of the story, we begin with Epimetheus, who, via a “principle of compensation” governed by notions of difference and equilibrium, hands out powers and qualities to all the animals of the Earth. For instance, extra speed might be granted to the gazelle, but only by way of balanced compensation given to another animal, say a boost in strength bestowed upon the lion. Seemingly diligent, Epimetheus nevertheless tires before the job is complete, shirking his duties before arriving at humankind, who is left relatively naked, without a special power or quality of its own. To compensate humankind, Prometheus absconds with “the gift of skill in the arts and fire” — “τὴν ἔντεχνον σοφίαν σὺν πυρί” — captured from Athena and Hephaestus, respectively, conferring these two gifts to humanity (Plato, “Protagoras,” 321d).

    In this way humans are defined first not via technical supplement but through an elemental fault — this is Stiegler’s lingering poststructuralism — the fault of Epimetheus. Epimetheus forgets about us, leaving us until the end, and hence “Humans only occur through their being forgotten; they only appear in disappearing” (188). But it’s more than that: a fault followed by a theft, and hence a twin fault. Humanity is the “fruit of a double fault–an act of forgetting [by Epimetheus], then of theft [by Prometheus]” (188). Humans are thus a forgotten afterthought, remedied afterward by a lucky forethought.

“Afterthought” and “forethought” — Stiegler means these terms quite literally. Who is Epimetheus? And who is Prometheus? Greek names often have etymological if not allegorical significance, as is the case here. Both names share the root “-metheus,” cognate with manthánō [μανθάνω], which means learning, study, or cultivation of knowledge. Hence a mathitís [μαθητής] is a learner or a student. (And in fact in a very literal sense “mathematics” simply refers to the things that one learns, not to arithmetic or geometry per se.) The two brothers are thus both varieties of learners, both varieties of thinkers. The key is which variety. The key is the Epi- and the Pro-.

“Epi carries the character of the accidentally and artificial factuality of something happening, arriving, a primordial ‘passibility,’” Stiegler explains. “Epimetheia means heritage. Heritage is always epimathesis. Epimetheia would also mean then tradition-originating in a fault that is always already there and that is nothing but technicity” (206-207). Hence Epimetheus means something like “learning on the basis of,” “thinking after,” or, more simply, “afterthought” or “hindsight.” This is why Epimetheus forgets, why he is at fault, why he acts foolishly, because these are all the things that generate hindsight.

    Prometheus on the other hand is “foresight” or “fore-thought.” If Epimetheus means “thinking and learning on the basis of,” Prometheus means something more like “thinking and learning in anticipation of.” In this way, Prometheus comes to stand in for cleverness (but also theft), ingenuity, and thus technics as a whole.

    But is that all? Is the lesson simply to restore Epimetheus to his position next to Prometheus? To remember the Epimethean omission along with the Promethean endowment? In fact the old Greek myth isn’t quite finished, and, after initially overlooking the ending, Stiegler eventually broaches the closing section on Hermes. For even after benefiting from its Promethean supplement, humanity remains incomplete. Specifically, the gods notice that Man has a tendency toward war and political strife. Thus Hermes is tasked to implant a kind of socio-political virtue, supplementing humanity with “the qualities of respect for others [αἰδώ] and a sense of justice [δίκη]” (Plato 322c). In other words, a second supplement is necessary, only this time a supplement not rooted in the identitarian logic of heterogeneous qualities. “Another tekhnē is required,” writes Stiegler, “a tekhnē that is no longer paradoxically…the privilege of specialists” (201). This point about specialists is key — all you Leninists take note — because on Zeus’s command Hermes delivers respect and justice generically and equally across all persons, not via the “principle of compensation” based on difference and equilibrium used previously by Epimetheus to divvy up the powers and qualities of the animals. Thus while some people may have a talent for the piano, and others might be gifted in some other way, justice and respect are bestowed equally to all.

This is why politics is always a question of the “hermeneutic community,” that is, the ad hoc translation and interpretation of real political dynamics; it comes from Hermes (201). At the same time politics also means “the community of those who have no community” because there is no adjudication of heterogeneous qualities, no truth or law stipulated in advance, except for the very “conditions” of the political (those “hermeneutic conditions,” namely αἰδώ and δίκη, respect and justice).

    To summarize, the Promethean story has three moments, not one, and all three ought to be given full voice:

    1. Default of origin (being forgotten about by Epimetheus/Hindsight)
    2. Gaining technicity (fire and skills from Prometheus/Foresight)
    3. Revealing the generic (“respect for others and a sense of justice” from Hermes)

    This strikes me as a much better way to think about Prometheanism overall, better than the narrow definition of “using technology to overcome natural limits.” Recognizing all three moments, Prometheanism (if we can still call it that) entails not just technological advancement, but also insufficiency and failure, along with a political consciousness rooted in generic humanity.

    And now would be a good time to pass the baton over to the Xenofeminists, who make much better use of accelerationism than its original authors do. The Xenofeminist manifesto provides a more holistic picture of what might simply be called a “universalism from below” — yes, that very folk politics that Srnicek and Williams seek to suppress — doing justice not only to Prometheus, but to Epimetheus and Hermes as well:

    Xenofeminism understands that the viability of emancipatory abolitionist projects — the abolition of class, gender, and race — hinges on a profound reworking of the universal. The universal must be grasped as generic, which is to say, intersectional. Intersectionality is not the morcellation of collectives into a static fuzz of cross-referenced identities, but a political orientation that slices through every particular, refusing the crass pigeonholing of bodies. This is not a universal that can be imposed from above, but built from the bottom up — or, better, laterally, opening new lines of transit across an uneven landscape. This non-absolute, generic universality must guard against the facile tendency of conflation with bloated, unmarked particulars — namely Eurocentric universalism — whereby the male is mistaken for the sexless, the white for raceless, the cis for the real, and so on. Absent such a universal, the abolition of class will remain a bourgeois fantasy, the abolition of race will remain a tacit white-supremacism, and the abolition of gender will remain a thinly veiled misogyny, even — especially — when prosecuted by avowed feminists themselves. (The absurd and reckless spectacle of so many self-proclaimed ‘gender abolitionists’ campaign against trans women is proof enough of this). (0x0F)


    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2006), Gaming: Essays in Algorithmic Culture (University of Minnesota, 2006); The Interface Effect (Polity, 2012), and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here in 2014. Galloway has recently been writing brief notes on media and digital culture and theory at his blog, on which this post first appeared.

    Back to the essay