b2o: boundary 2 online

Tag: digital culture

  • Men (Still) Explain Technology to Me: Gender and Education Technology

    By Audrey Watters
    ~

    Late last year, I gave a similarly titled talk—“Men Explain Technology to Me”—at the University of Mary Washington. (I should note here that the slides for that talk were based on a couple of blog posts by Mallory Ortberg that I found particularly funny, “Women Listening to Men in Art History” and “Western Art History: 500 Years of Women Ignoring Men.” I wanted to do something similar with my slides today: find historical photos of men explaining computers to women. Mostly I found pictures of men or women working separately, working in isolation. Mostly pictures of men and computers.)

    Men Explain Technology

    So that University of Mary Washington talk: It was the last talk I delivered in 2014, and I did so with a sigh of relief, but also more than a twinge of frightened nausea—nausea that wasn’t nerves from speaking in public. I’d had more than a year full of public speaking under my belt—exhausting enough as I always try to write new talks for each event, but a year that had become complicated quite frighteningly in part by an ongoing campaign of harassment against women on the Internet, particularly those who worked in video game development.

    Known as “GamerGate,” this campaign had reached a crescendo of sorts in the lead-up to my talk at UMW, some of its hate aimed at me because I’d written about the subject, demanding that those in ed-tech pay attention and speak out. So no surprise, all this colored how I shaped that talk about gender and education technology, because, of course, my gender shapes how I experience working in and working with education technology. As I discussed then at the University of Mary Washington, I have been on the receiving end of threats and harassment for stories I’ve written about ed-tech—almost all the women I know who have a significant online profile have in some form or another experienced something similar. According to a Pew Research survey last year, one in 5 Internet users reports being harassed online. But GamerGate felt—feels—particularly unhinged. The death threats to Anita Sarkeesian, Zoe Quinn, Brianna Wu, and others were—are—particularly real.

    I don’t really want to rehash all of that here today, particularly my experiences being on the receiving end of the harassment; I really don’t. You can read a copy of that talk from last November on my website. I will say this: GamerGate supporters continue to argue that their efforts are really about “ethics in journalism” not about misogyny, but it’s quite apparent that they have sought to terrorize feminists and chase women game developers out of the industry. Insisting that video games and video game culture retain a certain puerile machismo, GamerGate supporters often chastise those who seek to change the content of video games, to change the culture to reflect the actual demographics of video game players. After all, a recent industry survey found women 18 and older represent a significantly greater portion of the game-playing population (36%) than boys age 18 or younger (17%). Just over half of all gamers are men (52%); that means just under half are women. Yet those who want video games to reflect these demographics are dismissed by GamerGate as “social justice warriors.” Dismissed. Harassed. Shouted down. Chased out.

    And yes, more mildly perhaps, the verb that grew out of Rebecca Solnit’s wonderful essay “Men Explain Things to Me” and the inspiration for the title to this talk, mansplained.

    Solnit first wrote that essay back in 2008 to describe her experiences as an author—and as such, an expert on certain subjects—whereby men would presume she was in need of their enlightenment and information—in her words “in some sort of obscene impregnation metaphor, an empty vessel to be filled with their wisdom and knowledge.” She related several incidents in which men explained to her topics on which she’d published books. She knew things, but the presumption was that she was uninformed. Since her essay was first published the term “mansplaining” has become quite ubiquitous, used to describe the particular online version of this—of men explaining things to women.

    I experience this a lot. And while the threats and harassment in my case are rare but debilitating, the mansplaining is more insidious. It is overpowering in a different way. “Mansplaining” is a micro-aggression, a practice of undermining women’s intelligence, their contributions, their voice, their experiences, their knowledge, their expertise; and frankly once these pile up, these mansplaining micro-aggressions, they undermine women’s feelings of self-worth. Women begin to doubt what they know, doubt what they’ve experienced. And then, in turn, women decide not to say anything, not to speak.

    I speak from experience. On Twitter, I have almost 28,000 followers, most of whom follow me, I’d wager, because from time to time I say smart things about education technology. Yet regularly, men—strangers, typically, but not always—jump into my “@-mentions” to explain education technology to me. To explain open source licenses or open data or open education or MOOCs to me. Men explain learning management systems to me. Men explain the history of education technology to me. Men explain privacy and education data to me. Men explain venture capital funding of education startups to me. Men explain the business of education technology to me. Men explain blogging and journalism and writing to me. Men explain online harassment to me.

    The problem isn’t just that men explain technology to me. It isn’t just that a handful of men explain technology to the rest of us. It’s that this explanation tends to foreclose questions we might have about the shape of things. We can’t ask because if we show the slightest intellectual vulnerability, our questions—we ourselves—lose a sort of validity.

    Yet we are living in a moment, I would contend, when we must ask better questions of technology. We neglect to do so at our own peril.

    Last year when I gave my talk on gender and education technology, I was particularly frustrated by the mansplaining to be sure, but I was also frustrated that those of us who work in the field had remained silent about GamerGate, and more broadly about all sorts of issues relating to equity and social justice. Of course, I do know firsthand that it can be difficult if not dangerous to speak out, to talk critically and write critically about GamerGate, for example. But refusing to look at some of the most egregious acts often means ignoring some of the more subtle ways in which marginalized voices are made to feel uncomfortable, unwelcome online. Because GamerGate is really just one manifestation of deeper issues—structural issues—with society, culture, technology. It’s wrong to focus on just a few individual bad actors or on a terrible Twitter hashtag and ignore the systemic problems. We must consider who else is being chased out and silenced, not simply from the video game industry but from the technology industry and a technological world writ large.

    I know I have to come right out and say it, because very few people in education technology will: there is a problem with computers. Culturally. Ideologically. There’s a problem with the internet. Largely designed by men from the developed world, it is built for men of the developed world. Men of science. Men of industry. Military men. Venture capitalists. Despite all the hype and hope about revolution and access and opportunity that these new technologies will provide us, they do not negate hierarchy, history, privilege, power. They reflect those. They channel it. They concentrate it, in new ways and in old.

    I want us to consider these bodies, their ideologies and how all of this shapes not only how we experience technology but how it gets designed and developed as well.

    There’s that very famous New Yorker cartoon: “On the internet, nobody knows you’re a dog.” The cartoon was first published in 1993, and it demonstrates this sense that we have long had that the Internet offers privacy and anonymity, that we can experiment with identities online in ways that are severed from our bodies, from our material selves and that, potentially at least, the internet can allow online participation for those denied it offline.

    Perhaps, yes.

    But sometimes when folks on the internet discover “you’re a dog,” they do everything in their power to put you back in your place, to remind you of your body. To punish you for being there. To hurt you. To threaten you. To destroy you. Online and offline.

    Neither the internet nor computer technology writ large is a place where we can escape the materiality of our physical worlds—bodies, institutions, systems—as much as that New Yorker cartoon joked that we might. In fact, I want to argue quite the opposite: that computer and Internet technologies actually re-inscribe our material bodies, the power and the ideology of gender and race and sexual identity and national identity. They purport to be ideology-free and identity-less, but they are not. If identity is unmarked it’s because there’s a presumption of maleness, whiteness, and perhaps even a certain California-ness. As my friend Tressie McMillan Cottom writes, in ed-tech we’re all supposed to be “roaming autodidacts”: happy with school, happy with learning, happy and capable and motivated and well-networked, with functioning computers and WiFi that works.

    By and large, all of this reflects who is driving the conversation about, if not the development of, these technologies. Who is seen as building technologies. Who some think should build them; who some think have always built them.

    And that right there is already a process of erasure, a different sort of mansplaining one might say.

    Last year, when Walter Isaacson was doing the publicity circuit for his latest book, The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution (2014), he’d often relate how his teenage daughter had written an essay about Ada Lovelace, a figure whom Isaacson admitted he’d never heard of before. Sure, he’d written biographies of Steve Jobs and Albert Einstein and Benjamin Franklin and other important male figures in science and technology, but the name and the contributions of this woman were entirely unknown to him. Ada Lovelace, daughter of Lord Byron and the woman whose notes on Charles Babbage’s proto-computer, the Analytical Engine, are now recognized as making her the world’s first computer programmer. Ada Lovelace, the author of the world’s first computer algorithm. Ada Lovelace, the person at the very beginning of the field of computer science.

    Ada Lovelace
    Augusta Ada King, Countess of Lovelace, now popularly known as Ada Lovelace, in a painting by Alfred Edward Chalon (image source: Wikipedia)

    “Ada Lovelace defined the digital age,” Isaacson said in an interview with The New York Times. “Yet she, along with all these other women, was ignored or forgotten.” (Actually, the world has been celebrating Ada Lovelace Day since 2009.)

    Isaacson’s book describes Lovelace like this: “Ada was never the great mathematician that her canonizers claim…” and “Ada believed she possessed special, even supernatural abilities, what she called ‘an intuitive perception of hidden things.’ Her exalted view of her talents led her to pursue aspirations that were unusual for an aristocratic woman and mother in the early Victorian age.” The implication: she was a bit of an interloper.

    A few other women populate Isaacson’s The Innovators: Grace Hopper, who invented the first computer compiler and whose work led to the development of the programming language COBOL. Isaacson describes her as “spunky,” not an adjective that I imagine would be applied to a male engineer. He also talks about the six women who helped program the ENIAC computer, the first electronic general-purpose computer. Their names, because we need to say these things out loud more often: Jean Jennings, Marilyn Wescoff, Ruth Lichterman, Betty Snyder, Frances Bilas, Kay McNulty. (I say that having visited Bletchley Park, where civilian women’s contributions have been erased: because their work remained classified, they were forbidden from talking about their involvement in the cryptography and computing efforts there.)

    In the end, it’s hard to read Isaacson’s book without coming away thinking that, a few notable exceptions aside, the history of computing is the history of men, white men. The book mentions the educator Seymour Papert in passing, for example, but assigns the development of Logo, a programming language for children, to him alone. No mention of the others involved: Daniel Bobrow, Wally Feurzeig, and Cynthia Solomon.

    Even a book that purports to reintroduce the contributions of those forgotten “innovators,” that says it wants to complicate the story of a few male inventors of technology by looking at collaborators and groups, still in the end tells a story that ignores if not undermines women. Men explain the history of computing, if you will. As such it tells a story too that depicts and reflects a culture that doesn’t simply forget but systematically alienates women. Women are a rediscovery project, always having to be reintroduced, found, rescued. There’s been very little reflection upon that fact—in Isaacson’s book or in the tech industry writ large.

    This matters not just for the history of technology but for technology today. And it matters for ed-tech as well. (Unless otherwise noted, the following data comes from diversity self-reports issued by the companies in 2014.)

    • Currently, fewer than 20% of computer science degrees in the US are awarded to women. (I don’t know if it’s different in the UK.) It’s a number that’s actually fallen over the past few decades from a high in 1983 of 37%. Computer science is the only field in science, engineering, and mathematics in which the number of women receiving bachelor’s degrees has fallen in recent years. And when it comes to the employment not just the education of women in the tech sector, the statistics are not much better. (source: NPR)
    • 70% of Google employees are male. 61% are white and 30% are Asian. Of Google’s “technical” employees, 83% are male. 60% of those are white and 34% are Asian.
    • 70% of Apple employees are male. 55% are white and 15% are Asian. 80% of Apple’s “technical” employees are male.
    • 69% of Facebook employees are male. 57% are white and 34% are Asian. 85% of Facebook’s “technical” employees are male.
    • 70% of Twitter employees are male. 59% are white and 29% are Asian. 90% of Twitter’s “technical” employees are male.
    • Only 2.7% of startups that received venture capital funding between 2011 and 2013 had women CEOs, according to one survey.
    • And of course, Silicon Valley was recently embroiled in a sexual discrimination trial involving the storied VC firm Kleiner Perkins Caufield & Byers, filed by former executive Ellen Pao, who claimed that men at the firm were paid more and promoted more easily than women. With women welcome neither as investors nor entrepreneurs nor engineers, it’s hardly a surprise that, as The Los Angeles Times recently reported, women are leaving the tech industry “in droves.”

    This doesn’t just matter because computer science leads to “good jobs” or because tech startups lead to “good money.” It matters because the tech sector has an increasingly powerful reach in how we live and work and communicate and learn. It matters ideologically. If the tech sector drives out women, if it excludes people of color, that matters for jobs, sure. But it matters in terms of the projects undertaken, the problems tackled, the “solutions” designed and developed.

    So it’s probably worth asking what the demographics look like for education technology companies. What percentage of those building ed-tech software are men, for example? What percentage are white? What percentage of ed-tech startup engineers are men? Across the field, what percentage of education technologists—instructional designers, campus IT, sysadmins, CTOs, CIOs—are men? What percentage of “education technology leaders” are men? What percentage of education technology consultants? What percentage of those on the education technology speaking circuit? What percentage of those developing not just implementing these tools?

    And how do these bodies shape what gets built? How do they shape how the “problem” of education gets “fixed”? How do privileges, ideologies, expectations, values get hard-coded into ed-tech? I’d argue that they do in ways that are both subtle and overt.

    That word “privilege,” for example, has an interesting dual meaning. We use it to refer to the advantages that are afforded to some people and not to others: male privilege, white privilege. But when it comes to tech, we make that advantage explicit. We actually embed that status into the software’s processes. “Privileges” in tech refer to who gets to use or control certain features of a piece of software. Administrator privileges. Teacher privileges. (Students rarely have privileges in ed-tech. Food for thought.)
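
    To make this concrete, here is a minimal sketch, in Python, of how such privileges get hard-coded. The role names and permissions are hypothetical, not drawn from any particular LMS; the point is the pattern, a role-to-capability table written by developers long before any student logs in.

    ```python
    # A minimal, hypothetical sketch of role-based privileges in an LMS-like system.
    # Role names and permissions are illustrative, not taken from any real product.

    ROLE_PRIVILEGES = {
        "administrator": {"create_course", "delete_course", "grade", "post", "read", "export_data"},
        "teacher": {"grade", "post", "read", "moderate_forum"},
        "student": {"read", "post"},  # note how little ends up here by default
    }

    def can(role: str, action: str) -> bool:
        """Return True if the given role has been granted the given privilege."""
        return action in ROLE_PRIVILEGES.get(role, set())

    if __name__ == "__main__":
        print(can("teacher", "grade"))        # True
        print(can("student", "grade"))        # False
        print(can("student", "export_data"))  # False: students rarely get privileges
    ```

    Whoever writes that table decides, in advance and in code, who gets to do what.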

    Or take how discussion forums operate. Discussion forums, now quite common in ed-tech tools—in learning management systems (VLEs as you call them), in MOOCs, for example—often trace their history back to the earliest Internet bulletin boards. But even before then, education technologies like PLATO, a programmed instruction system developed at the University of Illinois, offered chat and messaging functionality in the 1970s. (How education technology’s contributions to tech are erased from tech history is, alas, a different talk.)

    One of the new features that many discussion forums boast: the ability to vote topics up or down. Ostensibly this means that “the best” ideas surface to the top—the best ideas, the best questions, the best answers. What it means in practice is often something else entirely. In part this is because the voting power on these sites is concentrated in the hands of the few, the most active, the most engaged. And no surprise, “the few” here is overwhelmingly male. Reddit, which calls itself “the front page of the Internet” and is the model for this sort of voting process, is roughly 84% male. I’m not sure that MOOCs, which have adopted Reddit’s model of voting on comments, can boast a much better ratio of male to female participation.
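
    It is worth noting how little machinery is involved: the ranking logic simply counts votes and knows nothing about who is casting them, which is precisely why a small, unrepresentative group of voters ends up defining “the best.” The sketch below, with invented posts and vote counts, is meant only to illustrate that point.

    ```python
    # Illustrative only: a bare-bones "top posts" ranking of the kind many forums use.
    # The algorithm just tallies votes; it has no notion of whose votes they are.

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        upvotes: int = 0
        downvotes: int = 0

        @property
        def score(self) -> int:
            return self.upvotes - self.downvotes

    def front_page(posts: list[Post]) -> list[Post]:
        """Surface posts by net score: whoever actually votes decides what is 'best'."""
        return sorted(posts, key=lambda p: p.score, reverse=True)

    if __name__ == "__main__":
        posts = [
            Post("Question from a frequent, well-networked participant", upvotes=40, downvotes=2),
            Post("Question from a newcomer who posted once", upvotes=3, downvotes=1),
        ]
        for p in front_page(posts):
            print(p.score, p.text)
    ```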

    What happens when the most important topics—based on up-voting—are decided by a small group? As D. A. Banks has written about this issue,

    Sites like Reddit will remain structurally incapable of producing non-hegemonic content because the “crowd” is still subject to structural oppression. You might choose to stay within the safe confines of your familiar subreddit, but the site as a whole will never feel like yours. The site promotes mundanity and repetition over experimentation and diversity by presenting the user with a too-accurate picture of what appeals to the entrenched user base. As long as the “wisdom of the crowds” is treated as colorblind and gender neutral, the white guy is always going to be the loudest.

    How much does education technology treat its users similarly? Whose questions surface to the top of discussion forums in the LMS (the VLE), in the MOOC? Who is the loudest? Who is explaining things in MOOC forums?

    Ironically (bitterly ironically, I’d say), many pieces of software today increasingly promise “personalization,” but in reality they present us with a very restricted, restrictive set of choices of who we “can be” and how we can interact, both with our own data and content and with other people. Gender, for example, is often a drop-down menu where one can choose either “male” or “female.” Software might ask for a first and last name, something that is complicated if you have multiple family names (as some Spanish-speaking people do) or if your family name comes first (as names in China are ordered). Your name is presented however the software engineers and designers deemed fit: sometimes first name, sometimes title and last name, typically with a profile picture. Changing your username—after marriage or divorce, for example—is often incredibly challenging, if not impossible.
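
    A hypothetical sketch of what such a signup “template” looks like once it is written as code may be useful here. The field names and the two-option gender list below are illustrative assumptions rather than any specific product’s schema, but they show how the restrictions described above get baked in before any user ever arrives.

    ```python
    # A hypothetical signup "template" of the restrictive kind described above.
    # Field names and choices are illustrative; they are not from any specific product.

    GENDER_CHOICES = ("male", "female")  # a drop-down with exactly two options

    def register_user(first_name: str, last_name: str, gender: str) -> dict:
        """Validate a signup against the template's built-in assumptions."""
        if gender not in GENDER_CHOICES:
            raise ValueError("gender must be 'male' or 'female'")
        if not first_name or not last_name:
            raise ValueError("both a first and a last name are required")
        # Display order is decided by the engineers, not by the user.
        return {"display_name": f"{first_name} {last_name}", "gender": gender}

    if __name__ == "__main__":
        print(register_user("Ada", "Lovelace", "female"))
        # A single family name, a differently ordered name, or any other gender
        # simply does not fit the template and raises an error.
    ```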

    You get to interact with others, similarly, based on the processes that the engineers have determined and designed. On Twitter, for example, you cannot send a direct message to people who do not follow you. All interactions must be 140 characters or less.

    This restriction of the presentation and performance of one’s identity online is what “cyborg anthropologist” Amber Case calls the “templated self.” She defines this as “a self or identity that is produced through various participation architectures, the act of producing a virtual or digital representation of self by filling out a user interface with personal information.”

    Case provides some examples of templated selves:

    Facebook and Twitter are examples of the templated self. The shape of a space affects how one can move, what one does and how one interacts with someone else. It also defines how influential and what constraints there are to that identity. A more flexible, but still templated space is WordPress. A hand-built site is much less templated, as one is free to fully create their digital self in any way possible. Those in Second Life play with and modify templated selves into increasingly unique online identities. MySpace pages are templates, but the lack of constraints can lead to spaces that are considered irritating to others.

    As we—all of us, but particularly teachers and students—move to spend more and more time and effort performing our identities online, being forced to use preordained templates constrains us, rather than—as we have often been told about the Internet—lets us be anyone or say anything online. On the Internet no one knows you’re a dog unless the signup process demanded you give proof of your breed. This seems particularly important to keep in mind when we think about students’ identity development. How are their identities being templated?

    While Case’s examples point to mostly “social” technologies, education technologies are also “participation architectures.” Similarly they produce and restrict a digital representation of the learner’s self.

    Who is building the template? Who is engineering the template? Who is there to demand the template be cracked open? What will the template look like if we’ve chased women and people of color out of programming?

    It’s far too simplistic to say “everyone learn to code” is the best response to the questions I’ve raised here. “Change the ratio.” “Fix the leaky pipeline.” Nonetheless, I’m speaking to a group of educators here. I’m probably supposed to say something about what we can do, right, to make ed-tech more just, not just condemn the narratives that lead us down a path that makes ed-tech less so. What can we do to resist all this hard-coding? What can we do to subvert that hard-coding? What can we do to make technologies that our students—all our students, all of us—can wield? What can we do to make sure that when we say “your assignment involves the Internet” we haven’t triggered half the class with fears of abuse, harassment, exposure, rape, death? What can we do to make sure that when we ask our students to discuss things online, the very infrastructure of the technology that we use doesn’t privilege certain voices in certain ways?

    The answer can’t simply be to tell women to not use their real name online, although as someone who started her career blogging under a pseudonym, I do sometimes miss those days. But if part of the argument for participating in the open Web is that students and educators are building a digital portfolio, are building a professional network, are contributing to scholarship, then we have to really think about whether or not promoting pseudonyms is a sufficient or an equitable solution.

    The answer can’t simply be “don’t blog on the open Web.” Or “keep everything inside the ‘safety’ of the walled garden, the learning management system.” If nothing else, this presumes that what happens inside siloed, online spaces is necessarily “safe.” I know I’ve seen plenty of horrible behavior on closed forums, for example, from professors and students alike. I’ve seen heavy-handed moderation, where marginalized voices find their input deleted. I’ve seen zero moderation, where marginalized voices are mobbed. We recently learned, for example, that Walter Lewin, emeritus professor at MIT and one of the original rockstar professors of YouTube (millions have watched the demonstrations from his physics lectures), has been accused of sexually harassing women in his edX MOOC.

    The answer can’t simply be “just don’t read the comments.” I would say that it might be worth rethinking “comments” on student blogs altogether—or rather the expectation that they host them, moderate them, respond to them. See, if we give students the opportunity to “own their own domain,” to have their own websites, their own space on the Web, we really shouldn’t require them to let anyone that can create a user account into that space. It’s perfectly acceptable to say to someone who wants to comment on a blog post, “Respond on your own site. Link to me. But I am under no obligation to host your thoughts in my domain.”

    And see, that starts to hint at what I think is the answer to this question about the unpleasantness—by design—of technology. It starts to get at what any sort of “solution” or “alternative” has to look like: it has to be both social and technical. It also needs to recognize that there’s a history that might help us understand what’s done now and why. If, as I’ve argued, the current shape of education technologies has been shaped by certain ideologies and certain bodies, we should recognize that we aren’t stuck with those. We don’t have to “do” tech as it’s been done in the last few years or decades. We can design differently. We can design around. We can use differently. We can use around.

    One interesting example of this dual approach that combines the social and the technical—outside the realm of ed-tech, I recognize—is the set of tools that Twitter users have built in order to address harassment on the platform. Having grown weary of Twitter’s refusal to address the ways in which it is utilized to harass people (remember, its engineering team is 90% male), a group of feminist developers wrote The Block Bot, an application that lets you block, en masse, a large list of Twitter accounts known for being serial harassers. That list of blocked accounts is updated and maintained collaboratively. Similarly, Block Together lets users subscribe to others’ block lists, and Good Game Autoblocker blocks the “ringleaders” of GamerGate.
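
    The technical half of that dual approach is, at its core, quite small. The sketch below is a hedged illustration of the shared block list idea only: the list names and account IDs are invented, and the step of actually applying the blocks through the platform’s own API is deliberately left out.

    ```python
    # A minimal, hypothetical sketch of subscribing to collaboratively maintained block lists.
    # Account IDs and list names are invented; actually applying the blocks would go
    # through the platform's own blocking endpoint, which is omitted here.

    def merged_blocklist(subscribed_lists: dict[str, set[str]], allowlist: set[str]) -> set[str]:
        """Union all subscribed block lists, minus accounts the user explicitly allows."""
        blocked: set[str] = set()
        for accounts in subscribed_lists.values():
            blocked |= accounts
        return blocked - allowlist

    if __name__ == "__main__":
        shared_lists = {
            "collaborative_list_a": {"1001", "1002", "1003"},
            "collaborative_list_b": {"1003", "2004"},
        }
        my_exceptions = {"2004"}
        print(sorted(merged_blocklist(shared_lists, my_exceptions)))  # accounts to block
    ```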

    That gets, just a bit, at what I think we can do in order to make education technology habitable, sustainable, and healthy. We have to rethink the technology. And not simply as some nostalgia for a “Web we lost,” for example, but as a move forward to a Web we’ve yet to ever see. It isn’t simply, as Isaacson would posit it, rediscovering innovators who have been erased; it’s about rethinking how these erasures happen all throughout technology’s history and continue today—not just in storytelling, but in code.

    Educators should want ed-tech that is inclusive and equitable. Perhaps education needs reminding of this: we don’t have to adopt tools that serve business goals or administrative purposes, particularly when they are to the detriment of scholarship and/or student agency—technologies that surveil and control and restrict, for example, under the guise of a “safety” that gets trotted out from time to time but that has never ever been about students’ needs at all. We don’t have to accept that technology needs to extract value from us. We don’t have to accept that technology puts us at risk. We don’t have to accept that the architecture, the infrastructure of these tools makes it easy for harassment to occur without any consequences. We can build different and better technologies. And we can build them with and for communities, communities of scholars and communities of learners. We don’t have to be paternalistic as we do so. We don’t have to “protect students from the Internet,” and rehash all the arguments about stranger danger and predators and pedophiles. But we should recognize that if we want education to be online, if we want education to be immersed in technologies, information, and networks, we can’t really throw students out there alone. We need to be braver and more compassionate, and we need to build that into ed-tech. Like The Block Bot or Block Together, this should be a collaborative effort, one that blends our cultural values with the technology we build.

    Because here’s the thing. The answer to all of this—to harassment online, to the male domination of the technology industry, the Silicon Valley domination of ed-tech—is not silence. And the answer is not to let our concerns be explained away. That is after all, as Rebecca Solnit reminds us, one of the goals of mansplaining: to get us to cower, to hesitate, to doubt ourselves and our stories and our needs, to step back, to shut up. Now more than ever, I think we need to be louder and clearer about what we want education technology to do—for us and with us, not simply to us.
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this review first appeared, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.

  • A Dark, Warped Reflection

    a review of Charlie Brooker, writer & producer, Black Mirror (BBC/Zeppotron, 2011- )
    by Zachary Loeb
    ~

    Depending upon which sections of the newspaper one reads, it is very easy to come away with two rather conflicting views of the future. If one begins the day by reading the headlines in the “International News” or “Environment” sections it is easy to feel overwhelmed by a sense of anxiety and impending doom; however, if one instead reads the sections devoted to “Business” or “Technology” it is easy to feel confident that there are brighter days ahead. We are promised that soon we shall live in wondrous “Smart” homes where all of our devices work together tirelessly to ensure our every need is met, even while drones deliver our every desire, even as we enjoy ever more immersive entertainment experiences, with all of this providing plenty of wondrous investment opportunities…unless of course another economic collapse or climate change should spoil these fantasies. Though the juxtaposition between newspaper sections can be jarring, an element of anxiety can generally be detected from one section to the next – even within the “technology” pages. After all, our devices may have filled our hours with apps and social networking sites, but this does not necessarily mean that they have left us more fulfilled. We have been supplied with all manner of answers, but this does not necessarily mean we had first asked any questions.

    [Video: https://www.youtube.com/watch?v=pimqGkBT6Ek]

    If you could remember everything, would you want to? If a cartoon bear lampooned the pointlessness of elections, would you vote for the bear? Would you participate in psychological torture, if the person being tortured was a criminal? What lengths would you go to if you could not move on from a loved one’s death? These are the types of questions posed by the British television program Black Mirror, wherein anxiety about the technologically riddled future, be it the far future or next week, is the core concern. The paranoid pessimism of this science-fiction anthology program is not a result of a fear of the other or of panic at the prospect of nuclear annihilation – but is instead shaped by nervousness at the way we have become strangers to ourselves. There are no alien invaders, no occult phenomena, nor is there a suit-wearing narrator who makes sure that the viewers understand the moral of each story. Instead what Black Mirror presents is dread – it holds up a “black mirror” (think of the screen of any electronic device when the power is off) to society and refuses to flinch at the reflection.

    Granted, this does not mean that those viewing the program will not flinch.

    [And Now A Brief Digression]

    Before this analysis goes any further it seems worthwhile to pause and make a few things clear. Firstly, and perhaps most importantly, the intention here is not to pass a definitive judgment on the quality of Black Mirror. While there are certainly arguments that can be made regarding how “this episode was better than that one” – this is not the concern here. Nor for that matter is the goal to scoff derisively at Black Mirror and simply dismiss it – the episodes are well written, interestingly directed, and strongly acted. Indeed, that the program can lead to discussion and introspection is perhaps the highest praise that one can bestow upon a piece of widely disseminated popular culture. Secondly, and perhaps even more importantly (depending on your opinion), some of the episodes of Black Mirror rely upon twists and surprises in order to have their full impact upon the viewer. Oftentimes people find it highly frustrating to have these moments revealed to them ahead of time, and thus – in the name of fairness – let this serve as an official “spoiler warning.” The plots of each episode will not be discussed in minute detail in what follows – as the intent here is to consider broader themes and problems – but if you hate “spoilers” you should consider yourself warned.

    [Digression Ends]

    The problem posed by Black Mirror is that in building nervous narratives about the technological tomorrow the program winds up replicating many of the shortcomings of contemporary discussions around technology – shortcomings that make such an unpleasant future seem all the more plausible. While Black Mirror may resist the obvious morality plays of a show like The Twilight Zone, the morals of its episodes may be far less oppositional than they at first seem. The program draws much of its emotional heft by narrowly focusing its stories upon specific individuals, but in so doing the show may function as a sort of precognitive “usage manual,” one that advises “if a day should arrive when you can technologically remember everything…don’t be like the guy in this episode.” The episodes of Black Mirror may call upon viewers to look askance at the future it portrays, but the show also encourages the sort of droll, inured acceptance that is characteristic of the people in each episode of the program. Black Mirror is a sleek, hip piece of entertainment, another installment in the contemporary “golden age of television,” and it risks becoming just another program that can be streamed onto any of a person’s black-mirror-like screens. The program is itself very much a part of the same culture industry of the YouTube and Twitter era that the show seems to vilify – it is ready-made for “binge watching.” The program may be disturbing, but its indictments are soft – allowing viewers a distance that permits them to say aloud “I would never do that” even as they are subconsciously unsure.

    Thus, Black Mirror appears as a sort of tragic confirmation of the continuing validity of Jacques Ellul’s comment:

    “One cannot but marvel at an organization which provides the antidote as it distills the poison.” (Ellul, 378)

    For the tales that are spun out in horrifying (or at least discomforting) detail on Black Mirror may appear to be a salve for contemporary society’s technological trajectory – but the show is also a ready made product for the very age that it is critiquing. A salve that does not solve anything, a cultural shock absorber that allows viewers to endure the next wave of shocks. It is a program that demands viewers break away from their attachment to their black mirrors even as it encourages them to watch another episode of Black Mirror. This is not to claim that the show lacks value as a critique; however, the show is less a radical indictment than some may be tempted to give it credit for being. The discomfort people experience while watching the show easily becomes a masochistic penance that allows people to continue walking down the path to the futures outlined in the show. Black Mirror provides the antidote, but it also distills the poison.

    That, however, may be the point.

    [Interrogation 1: Who Bears Responsibility?]

    Technology is, of course, everywhere in Black Mirror – in many episodes it is as much a character as the humans who are trying to come to terms with what the particular device means. In some episodes (“The National Anthem” or “The Waldo Moment”) the technologies that feature prominently are those that would be quite familiar to contemporary viewers: social media platforms like YouTube, Twitter, Facebook and the like. Whilst in other episodes (“The Entire History of You,” “White Bear” and “Be Right Back”) the technologies on display are new and different: an implantable device that records (and can play back) all of one’s memories, something that can induce temporary amnesia, a company that has developed a being that is an impressive mix of robotics and cloning. The stories that are told in Black Mirror, as was mentioned earlier, focus largely on the tales of individuals – “Be Right Back” is primarily about one person’s grief – and though this is a powerful storytelling device (and lest there be any confusion – many of these are very powerfully told stories) one of the questions that lingers unanswered in the background of many of these episodes is: who is behind these technologies?

    In fairness, Black Mirror would likely lose some of its impact if it were to delve deeply into this question. If “The Entire History of You” provided a sci-fi faux-documentary foray into the company that had produced the memory-recording “grains” it would probably not have felt as disturbing as the tale of abuse, sex, violence and obsession that the episode actually presents. Similarly, the piece of science-fiction-grade technology upon which “White Bear” relies functions well in the episode precisely because the key device makes only a rather brief appearance. And yet here an interesting contrast emerges between the episodes set in, or closely around, the present and those that are set further down the timeline – for in the episodes that rely on platforms like YouTube, the viewer technically knows who the interests are behind the various platforms. The episode “The Entire History of You” may be intensely disturbing, but what company was it that developed and brought the “grains” to market? What biotechnology firm supplies the grieving spouse in “Be Right Back” with the robotic/clone of her deceased husband? Who gathers the information from these devices? Where does that information live? Who is profiting? These are important questions that go unanswered, largely because they go unasked.

    Of course, it can be simple to disregard these questions. Dwelling upon them certainly does take something away from the individual episodes, and such focus diminishes the entertainment quality of Black Mirror. This is fundamentally why it is so essential to insist that these critical questions be asked. The worlds depicted in episodes of Black Mirror did not “just happen” but are instead the result of layers upon layers of decisions and choices that have wound up shaping these characters’ lives – and it is questionable how much say any of these characters had in those decisions. This is shown in stark relief in “The National Anthem,” in which a befuddled prime minister cannot come to grips with the way that a threat uploaded to YouTube, along with shifts in public opinion as reflected on Twitter, has come to require him to commit a grotesque act; his despair at what he is being compelled to do is a reflection of the new world of politics created by social media. In some ways it is tempting to treat episodes like “The Entire History of You” and “Be Right Back” as retorts to an unflagging adoration for “innovation,” “disruption,” and “permissionless innovation” – for the episodes can be read as a warning that just because we can record and remember everything does not necessarily mean that we should. And yet the presence of such a cultural warning does not mean that such devices will not eventually be brought to market. The denizens of the worlds of Black Mirror are depicted as being at the mercy of the technological current.

    Thus, and here is where the problem truly emerges, the episodes can be treated as simple warnings that state “well, don’t be like this person.” After all, the world of “The Entire History of You” seems to be filled with people who – unlike the obsessive main character – can use the “grain” productively; on a similar note it can be easy to imagine many people pointing to “Be Right Back” and saying that the idea of a robotic/clone could be wonderful – just don’t use it to replicate the recently dead; and of course any criticism of social media in “The Waldo Moment” or “The National Anthem” can be met with a retort regarding a blossoming of free expression and the ways in which such platforms can help bolster new protest movements. And yet, similar to the sad protagonist in the film Her, the characters in the storylines of Black Mirror rarely appear as active agents in relation to technology, even when they are depicted as truly “choosing” a given device. Rather they have simply been reduced to consumers – whether they are consumers of social media, political campaigns, or an amusement park where the “show” is a person being psychologically tortured day after day.

    This is not to claim that there should be an Apple or Google logo prominently displayed on the “grain” or on the side of the stationary bikes in “Fifteen Million Merits,” nor is it to argue that the people behind these devices should be depicted as cackling corporate monsters – but it would be helpful to have at least some image of the people behind these devices. After all, there are people behind these devices. What were they thinking? Were they not aware of these potential risks? Did they not care? Who bears responsibility? In focusing on the small scale human stories Black Mirror ignores the fact that there is another all too human story behind all of these technologies. Thus what the program riskily replicates is a sort of technological determinism that seems to have nestled itself into the way that people talk about technology these days – a sentiment in which people have no choice but to accept (and buy) what technology firms are selling them. It is not so much, to borrow a line from Star Trek, that “resistance is futile” as that nobody seems to have even considered resistance to be an option in the first place. Granted, we have seen in the not too distant past that such a sentiment is simply not true – Google Glass was once presented as inevitable but public push-back helped lead to Google (at least temporarily) shelving the device. Alas, one of the most effective ways of convincing people that they are powerless to resist is by bludgeoning them with cultural products that tell them they are powerless to resist. Or better yet, convince them that they will actually like being “assimilated.”

    Therefore, the key thing to mull over after watching an episode of Black Mirror is not what is presented in the episode but what has been left out. Viewers need to ask the questions the show does not present: who is behind these technologies? What decisions have led to the societal acceptance of these technologies? Did anybody offer resistance to these new technologies? The “6 Questions to Ask of New Technology” posed by media theorist Neil Postman may be of use for these purposes, as might some of the questions posed in Riddled With Questions. The emphasis here is to point out that a danger of Black Mirror is that the viewer winds up being just like one of the characters: a person who simply accepts the technologically wrought world in which they are living without questioning those responsible and without thinking that opposition is possible.

    [Interrogation 2: Utopia Unhinged is not a Dystopia]

    “Dystopia” is a term that has become a fairly prominent feature in popular entertainment today. Bookshelves are filled with tales of doomed futures and many of these titles (particularly those aimed at the “young adult” audience) have a tendency to eventually reach the screens of the cinema. Of course, apocalyptic visions of the future are not limited to the big screen – as numerous television programs attest. For many, it is tempting to use terms such as “dystopia” when discussing the futures portrayed in Black Mirror, and yet the usage of such a term seems rather misleading. True, at least one episode (“Fifteen Million Merits”) is clearly meant to evoke a dystopian far future, but to use that term in relation to many of the other installments seems a bit hyperbolic. After all, “The Waldo Moment” could be set tomorrow and frankly “The National Anthem” could have been set yesterday. To say that Black Mirror is a dystopian show risks taking an overly simplistic stance towards technology in the present as well as towards technology in the future – if the claim is that the show is thoroughly dystopian, then how does one account for the episodes that may as well be set in the present? One can argue that the state of the present world is far less than ideal, one can cast a withering gaze in the direction of social media, one can truly believe that the current trajectory (if not altered) will lead in a negative direction…and yet one can believe all of these things and still resist the urge to label contemporary society a dystopia. Doomsaying can be an enjoyably nihilistic way to pass an afternoon, but it makes for a rather poor critique.

    It may be that what Black Mirror shows is how a dystopia can actually be a private hell instead of a societal one (which would certainly seem true of “White Bear” or “The Entire History of You”), or perhaps what Black Mirror indicates is that a derailed utopia is not automatically a dystopia. Granted, a major criticism of Black Mirror could emphasize that the show has a decidedly “industrialized world/Western world” focus – we do not see the factories where “grains” are manufactured, and the varieties of new smartphones seen in the program suggest that the e-waste must be piling up somewhere. In other words – the derailed utopia of some could still be an outright dystopia for countless others. That the characters in Black Mirror do not seem particularly concerned with who assembled their devices is, alas, a feature all too characteristic of technology users today. Nevertheless, to restate the problem, the issue is not so much the threat of dystopia as it is the continued failure of humanity to use its impressive technological ingenuity to bring about a utopia (or even something “better” than the present). In some ways this provides an echo of Lewis Mumford’s comment, in The Story of Utopias, that:

    “it would be so easy, this business of making over the world if it were only a matter of creating machinery.” (Mumford, 175)

    True, the worlds of Black Mirror, including the ones depicting the world of today, show that “creating machinery” actually is an easy way “of making over the world” – however this does not automatically push things in the utopian direction for which Mumford was pining. Instead what is on display is another installment of the deferred potential of technology.

    The term “another” is not used incidentally here, but is specifically meant to point to the fact that it is nothing new for people to see technology as a source of hope…and then to woefully recognize the way in which such hopes have been dashed time and again. Such a sentiment is visible in much of Walter Benjamin’s writing about technology – writing, as he was, after the mechanized destruction of WWI and on the eve of the technologically enhanced barbarity of WWII. In Benjamin’s essay “Eduard Fuchs, Collector and Historian,” he criticizes a strain in positivist/social democratic thinking that had emphasized that technological developments would automatically usher in a more just world, when in fact such attitudes woefully failed to appreciate the scale of the dangers. This leads Benjamin to note:

    “A prognosis was due, but failed to materialize. That failure sealed a process characteristic of the past century: the bungled reception of technology. The process has consisted of a series of energetic, constantly renewed efforts, all attempting to overcome the fact that technology serves this society only by producing commodities.” (Benjamin, 266)

    The century about which Benjamin was writing was not the twenty-first century, and yet these comments about “the bungled reception of technology” and technology which “serves this society only by producing commodities” seem a rather accurate description of the worlds depicted by Black Mirror. And yes, that certainly includes the episodes that are closer to our own day. The point of pulling out this tension, however, is to emphasize not the dystopian element of Black Mirror but to point to the “bungled reception” that is so clearly on display in the program – and by extension in the present day.

    What Black Mirror shows in episode after episode (even in the clearly dystopian one) is the gloomy juxtaposition between what humanity can possibly achieve and what it actually achieves. The tools that could widen democratic participation can be used to allow a cartoon bear to run as a stunt candidate, the devices that allow us to remember the past can ruin the present by keeping us constantly replaying our memories of yesterday, the things that can allow us to connect can make it so that we are unable to ever let go – “energetic, constantly renewed efforts” that all wind up simply “producing commodities.” Indeed, in a tragicomic turn, Black Mirror demonstrates that amongst the commodities we continue to produce are those that elevate the “bungled reception of technology” to the level of a widely watched and critically lauded television serial.

    The future depicted by Black Mirror may be startling, disheartening and quite depressing, but (except in the cases where the content is explicitly dystopian) it is worth bearing in mind that there is an important difference between dystopia and a world of people living amidst the continued “bungled reception of technology.” Are the people in “The National Anthem” paving the way for “White Bear” and in turn setting the stage for “Fifteen Million Merits?” It is quite possible. But this does not mean that the “reception of technology” must always be “bungled” – though changing our reception of it may require altering our attitude towards it. Here Black Mirror repeats its problematic thrust, for it does not highlight resistance but emphasizes the very attitudes that have “bungled” the reception and continue to bungle it. Though “Fifteen Million Merits” does feature a character engaging in a brave act of rebellion, this act is immediately used to strengthen the very forces against which the character is rebelling – and thus the episode repeats the refrain “don’t bother resisting, it’s too late anyway.” This is not to suggest that one should focus all one’s hopes upon a farfetched utopian notion, or put faith in a sense of “hope” that is not linked to reality, nor does it mean that one should don sackcloth and begin mourning. Dystopias are cheap these days, but so are the fake utopian dreams that promise a world in which somehow technology will solve all of our problems. And yet, it is worth bearing in mind another comment from Mumford regarding the possibility of utopia:

    “we cannot ignore our utopias. They exist in the same way that north and south exist; if we are not familiar with their classical statements we at least know them as they spring to life each day in our minds. We can never reach the points of the compass; and so no doubt we shall never live in utopia; but without the magnetic needle we should not be able to travel intelligently at all.” (Mumford, 28/29)

    Black Mirror provides a stark portrait of the fake utopian lure that can lead us to a world to which we do not want to go – a world in which the “bungled reception of technology” continues to rule – but in staring horror-struck at where we do not want to go we should not forget to ask where it is that we do want to go. The worlds of Black Mirror are steps in the wrong direction – so ask yourself: what would the steps in the right direction look like?

    [Final Interrogation – Permission to Panic]

    During “The Entire History of You” several characters enjoy a dinner party at which the topic of discussion eventually turns to the benefits and drawbacks of the memory-recording “grains.” Many attitudes towards the “grains” are voiced – ranging from individuals who cannot imagine doing without the “grain” to a woman who has had hers violently removed and who has managed to adjust. While “The Entire History of You” focuses on an obsessed individual who cannot cope with a world in which everything can be remembered, what the dinner party demonstrates is that the same world contains many people who can handle the “grains” just fine. The failed comedian who voices the cartoon bear in “The Waldo Moment” cannot understand why people are drawn to vote for the character he voices – but this does not stop many people from voting for the animated animal. Perhaps most disturbingly, the woman at the center of “White Bear” cannot understand why she is followed by crowds filming her on their smartphones while she is hunted by masked assailants – but this does not stop those filming her from playing an active role in her torture. And so on…and so on…Black Mirror shows that in these horrific worlds, there are many people who are quite content with the new status quo. But that not everybody is despairing simply attests to Theodor Adorno and Max Horkheimer’s observation that:

    “A happy life in a world of horror is ignominiously refuted by the mere existence of that world. The latter therefore becomes the essence, the former negligible.” (Adorno and Horkheimer, 93)

    Black Mirror is a complex program, made all the more difficult to consider as the anthology character of the show makes each episode quite different in terms of the issues that it dwells upon. The attitudes towards technology and society that are subtly suggested in the various episodes are in line with the despairing aura that surrounds the various protagonists and antagonists of the episodes. Yet, insofar as Black Mirror advances an ethos it is one of inured acceptance – it is a satire that is both tragedy and comedy. The first episode of the program, “The National Anthem,” is an indictment of a society that cannot tear itself away from the horrors being depicted on its screens, delivered by a television show that owes its success to keeping people transfixed by the horrors being depicted on their screens. The show holds up a “black mirror” to society but what it shows is a world in which the tables are rigged and the audience has already lost – it is a magnificently troubling cultural product that attests to the way the culture industry can (to return to Ellul) provide the antidote even as it distills the poison. Or, to quote Adorno and Horkheimer again (swapping the word “filmgoers” for “tv viewers”):

    “The permanently hopeless situations which grind down filmgoers in daily life are transformed by their reproduction, in some unknown way, into a promise that they may continue to exist. The one needs only to become aware of one’s nullity, to subscribe to one’s own defeat, and one is already a party to it. Society is made up of the desperate and thus falls prey to rackets.” (Adorno and Horkheimer, 123)

    This is the danger of Black Mirror: that it may accustom and inure its viewers to the ugly present it displays while preparing them to fall prey to the “bungled reception” of tomorrow – it inculcates the ethos of “one’s own defeat.” By showing worlds in which people are helpless to do much of anything to challenge the technological society in which they have become cogs, Black Mirror risks perpetuating the sense that the viewers are themselves cogs, that the viewers are themselves helpless. There is an uncomfortable kinship between the tv-viewing characters of “The National Anthem” and the real-world viewer of the episode “The National Anthem” – neither party can look away. Or, to put it more starkly: if you are unable to alter the future, why not simply prepare yourself for it by watching more episodes of Black Mirror? At least that way you will know which characters not to imitate.

    And yet, despite these critiques, it would be unwise to disregard the program entirely. It is easy to pull out comments from the likes of Ellul, Adorno, Horkheimer and Mumford that eviscerate a program such as Black Mirror, but it may be more important to ask: given Black Mirror’s shortcomings, what value can the show still have? Here it is useful to recall a comment from Günther Anders (whose pessimism was on par with, or exceeded, that of any of the aforementioned thinkers) – he was referring to the works of Kafka, but the comment remains useful:

    “from great warnings we should be able to learn, and they should help us to teach others.” (Anders, 98)

    This is where Black Mirror can be useful, not as a series that people sit and watch, but as a piece of culture that leads people to put forth the questions that the show jumps over. At its best, what Black Mirror provides is a space in which people can discuss their fears and anxieties about technology without worrying that somebody will, farcically, call them a “Luddite” for daring to have such concerns – and for this reason alone the show may be worthwhile. By highlighting the questions that go unanswered in Black Mirror we may be able to put forth the very queries that are rarely made about technology today. It is true that the reflections seen by staring into Black Mirror are dark, warped and unappealing – but such reflections are only worth something if they compel audiences to rethink their relationships to the black-mirrored surfaces in their lives today and those that may be in their lives tomorrow. After all, one can look into the mirror in order to see the dirt on one’s face, or one can look into the mirror out of a narcissistic urge. The program certainly has the potential to provide a useful reflection, but as with the technology depicted in the show, it is all too easy for such a potential reception to be “bungled.”

    If we are spending too much time gazing at black mirrors, is the solution really to stare at Black Mirror?

    The show may be a satire, but if all people do is watch, then the joke is on the audience.

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck. He is a frequent contributor to The b2 Review Digital Studies section.

    _____

    Works Cited

    • Adorno, Theodor and Horkheimer, Max. Dialectic of Enlightenment: Philosophical Fragments. Stanford: Stanford University Press, 2002.
    • Anders, Günther. Franz Kafka. New York: Hilary House Publishers LTD, 1960.
    • Benjamin, Walter. Walter Benjamin: Selected Writings. Volume 3, 1935-1938. Cambridge: The Belknap Press, 2002.
    • Ellul, Jacques. The Technological Society. New York: Vintage Books, 1964.
    • Mumford, Lewis. The Story of Utopias. Bibliobazaar, 2008.
  • The Automatic Teacher

    The Automatic Teacher

    By Audrey Watters
    ~

    “For a number of years the writer has had it in mind that a simple machine for automatic testing of intelligence or information was entirely within the realm of possibility. The modern objective test, with its definite systemization of procedure and objectivity of scoring, naturally suggests such a development. Further, even with the modern objective test the burden of scoring (with the present very extensive use of such tests) is nevertheless great enough to make insistent the need for labor-saving devices in such work” – Sidney Pressey, “A Simple Apparatus Which Gives Tests and Scores – And Teaches,” School and Society, 1926

    Ohio State University professor Sidney Pressey first displayed the prototype of his “automatic intelligence testing machine” at the 1924 American Psychological Association meeting. Two years later, he filed a patent application for the device and spent the next decade or so trying to market it (to manufacturers and investors, as well as to schools).

    It wasn’t Pressey’s first commercial move. In 1922 he and his wife Luella Cole published Introduction to the Use of Standard Tests, a “practical” and “non-technical” guide meant “as an introductory handbook in the use of tests” aimed to meet the needs of “the busy teacher, principal or superintendent.” By the mid–1920s, the two had over a dozen different proprietary standardized tests on the market, selling a couple of hundred thousand copies a year, along with some two million test blanks.

    Although standardized tests had become commonplace in the classroom by the 1920s, they were already placing a significant burden upon the teachers and clerks tasked with scoring them. Hoping to capitalize yet again on the test-taking industry, Pressey argued that automation could “free the teacher from much of the present-day drudgery of paper-grading drill, and information-fixing – should free her for real teaching of the inspirational.”

    [Image: Pressey’s teaching machines]

    The Automatic Teacher

    Here’s how Pressey described the machine, which he branded as the Automatic Teacher in his 1926 School and Society article:

    The apparatus is about the size of an ordinary portable typewriter – though much simpler. …The person who is using the machine finds presented to him in a little window a typewritten or mimeographed question of the ordinary selective-answer type – for instance:

    To help the poor debtors of England, James Oglethorpe founded the colony of (1) Connecticut, (2) Delaware, (3) Maryland, (4) Georgia.

    To one side of the apparatus are four keys. Suppose now that the person taking the test considers Answer 4 to be the correct answer. He then presses Key 4 and so indicates his reply to the question. The pressing of the key operates to turn up a new question, to which the subject responds in the same fashion. The apparatus counts the number of his correct responses on a little counter to the back of the machine…. All the person taking the test has to do, then, is to read each question as it appears and press a key to indicate his answer. And the labor of the person giving and scoring the test is confined simply to slipping the test sheet into the device at the beginning (this is done exactly as one slips a sheet of paper into a typewriter), and noting on the counter the total score, after the subject has finished.

    The above paragraph describes the operation of the apparatus if it is being used simply to test. If it is to be used also to teach then a little lever to the back is raised. This automatically shifts the mechanism so that a new question is not rolled up until the correct answer to the question to which the subject is responding is found. However, the counter counts all tries.

    It should be emphasized that, for most purposes, this second set is by all odds the most valuable and interesting. With this second set the device is exceptionally valuable for testing, since it is possible for the subject to make more than one mistake on a question – a feature which is, so far as the writer knows, entirely unique and which appears decidedly to increase the significance of the score. However, in the way in which it functions at the same time as an ‘automatic teacher’ the device is still more unusual. It tells the subject at once when he makes a mistake (there is no waiting several days, until a corrected paper is returned, before he knows where he is right and where wrong). It keeps each question on which he makes an error before him until he finds the right answer; he must get the correct answer to each question before he can go on to the next. When he does give the right answer, the apparatus informs him immediately to that effect. If he runs the material through the little machine again, it measures for him his progress in mastery of the topics dealt with. In short the apparatus provides in very interesting ways for efficient learning.
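
    Pressey’s two settings amount to a simple control loop, and a short sketch can make that logic concrete. The following Python fragment is only an illustrative model of the mechanism as described above – the names, the question format, and the key-press interface are invented for the example, not drawn from Pressey or Petrina.

    # Illustrative model of Pressey's Automatic Teacher (hypothetical names).
    # Each question is (prompt, [choices], index_of_correct_choice).
    QUESTIONS = [
        ("James Oglethorpe founded the colony of",
         ["Connecticut", "Delaware", "Maryland", "Georgia"], 3),
    ]

    def run_machine(questions, key_presses, teach_mode=False):
        """Simulate the apparatus.

        In test mode every key press advances to the next question and the
        counter records correct answers. In teach mode the question does not
        advance until the correct key is pressed, and the counter records
        every try (as Pressey describes).
        """
        counter = 0
        presses = iter(key_presses)
        for prompt, choices, correct in questions:
            while True:
                key = next(presses)          # the subject presses key 1-4
                if teach_mode:
                    counter += 1             # teach mode counts all tries
                    if key - 1 == correct:
                        break                # only now does a new question roll up
                else:
                    if key - 1 == correct:
                        counter += 1         # test mode counts correct answers
                    break                    # any key press advances the sheet
        return counter

    print(run_machine(QUESTIONS, [4]))              # test mode: 1 correct answer
    print(run_machine(QUESTIONS, [1, 2, 4], True))  # teach mode: 3 tries recorded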

    A video from 1964 shows Pressey demonstrating his “teaching machine,” including the “reward dial” feature that could be set to dispense a candy once a certain number of correct answers were given:

    [youtube https://www.youtube.com/watch?v=n7OfEXWuulg?rel=0]

    Market Failure

    UBC’s Stephen Petrina documents the commercial failure of the Automatic Teacher in his 2004 article “Sidney Pressey and the Automation of Education, 1924–1934.” According to Petrina, Pressey started looking for investors for his machine in December 1925, “first among publishers and manufacturers of typewriters, adding machines, and mimeograph machines, and later, in the spring of 1926, extending his search to scientific instrument makers.” He approached at least six Midwestern manufacturers in 1926, but no one was interested.

    In 1929, Pressey finally signed a contract with the W. M. Welch Manufacturing Company, a Chicago-based company that produced scientific instruments.

    Petrina writes that,

    After so many disappointments, Pressey was impatient: he offered to forgo royalties on two hundred machines if Welch could keep the price per copy at five dollars, and he himself submitted an order for thirty machines to be used in a summer course he taught school administrators. A few months later he offered to put up twelve hundred dollars to cover tooling costs. Medard W. Welch, sales manager of Welch Manufacturing, however, advised a “slower, more conservative approach.” Fifteen dollars per machine was a more realistic price, he thought, and he offered to refund Pressey fifteen dollars per machine sold until Pressey recouped his twelve-hundred-dollar investment. Drawing on nearly fifty years experience selling to schools, Welch was reluctant to rush into any project that depended on classroom reforms. He preferred to send out circulars advertising the Automatic Teacher, solicit orders, and then proceed with production if a demand materialized.

    [Image: advertisement for the Automatic Teacher]

    The demand never really materialized, and even if it had, the manufacturing process – getting the device to market – was plagued with problems, caused in part by Pressey’s constant demands to redefine and retool the machines.

    The stress from the development of the Automatic Teacher took an enormous toll on Pressey’s health, and he had a breakdown in late 1929. (He was still teaching, supervising courses, and advising graduate students at Ohio State University.)

    The devices did finally ship in April 1930, but the $15 sales price proved prohibitive: it was, as Petrina notes, “more than half the annual cost ($29.27) of educating a student in the United States in 1930.” Welch could not sell the machines and ceased production with 69 of the original run of 250 devices still in stock.

    Pressey admitted defeat. In a 1932 School and Society article, he wrote “The writer is regretfully dropping further work on these problems. But he hopes that enough has been done to stimulate other workers.”

    But Pressey didn’t really abandon the teaching machine. He continued to present his research at APA meetings. He did, however, write in a 1964 article, “Teaching Machines (And Learning Theory) Crisis,” that “Much seems very wrong about current attempts at auto-instruction.”

    Indeed.

    Automation and Individualization

    In his article “Toward the Coming ‘Industrial Revolution’ in Education” (1932), Pressey wrote that

    “Education is the one major activity in this country which is still in a crude handicraft stage. But the economic depression may here work beneficially, in that it may force the consideration of efficiency and the need for laborsaving devices in education. Education is a large-scale industry; it should use quantity production methods. This does not mean, in any unfortunate sense, the mechanization of education. It does mean freeing the teacher from the drudgeries of her work so that she may do more real teaching, giving the pupil more adequate guidance in his learning. There may well be an ‘industrial revolution’ in education. The ultimate results should be highly beneficial. Perhaps only by such means can universal education be made effective.”

    Pressey intended for his automated teaching and testing machines to individualize education. It’s an argument that’s made about teaching machines today too. These devices will allow students to move at their own pace through the curriculum. They will free up teachers’ time to work more closely with individual students.

    But as Petrina argues, “the effect of automation was control and standardization.”

    The Automatic Teacher was a technology of normalization, but it was at the same time a product of liberality. The Automatic Teacher provided for self-instruction and self-regulated, therapeutic treatment. It was designed to provide the right kind and amount of treatment for individual, scholastic deficiencies; thus, it was individualizing. Pressey articulated this liberal rationale during the 1920s and 1930s, and again in the 1950s and 1960s. Although intended as an act of freedom, the self-instruction provided by an Automatic Teacher also habituated learners to the authoritative norms underwriting self-regulation and self-governance. They not only learned to think in and about school subjects (arithmetic, geography, history), but also how to discipline themselves within this imposed structure. They were regulated not only through the knowledge and power embedded in the school subjects but also through the self-governance of their moral conduct. Both knowledge and personality were normalized in the minutiae of individualization and in the machinations of mass education. Freedom from the confines of mass education proved to be a contradictory project and, if Pressey’s case is representative, one more easily automated than commercialized.

    The massive influx of venture capital into today’s teaching machines, of course, would like to see otherwise…
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this review first appeared.


  • Artificial Intelligence as Alien Intelligence

    Artificial Intelligence as Alien Intelligence

    By Dale Carrico
    ~

    Science fiction is a genre of literature in which artifacts and techniques humans devise as exemplary expressions of our intelligence result in problems that perplex our intelligence or even bring it into existential crisis. It is scarcely surprising that a genre so preoccupied with the status and scope of intelligence would provide endless variations on the conceits of either the construction of artificial intelligences or contact with alien intelligences.

    Of course, both the making of artificial intelligence and making contact with alien intelligence are organized efforts to which many humans are actually devoted, and not simply imaginative sites in which writers spin their allegories and exhibit their symptoms. It is interesting that after generations of failure the practical efforts to construct artificial intelligence or contact alien intelligence have often shunted their adherents to the margins of scientific consensus and invested these efforts with the coloration of scientific subcultures: While computer science and the search for extraterrestrial intelligence both remain legitimate fields of research, both AI and aliens also attract subcultural enthusiasms and resonate with cultic theology, each attracts its consumer fandoms and public Cons, each has its True Believers and even its UFO cults and Robot cults at the extremities.

    Champions of artificial intelligence in particular have coped in many ways with the serial failure of their project to achieve its desired end (which is not to deny that the project has borne fruit) whatever the confidence with which generation after generation of these champions have insisted that desired end is near. Some have turned to more modest computational ambitions, making useful software or mischievous algorithms in which sad vestiges of the older dreams can still be seen to cling. Some are simply stubborn dead-enders for Good Old Fashioned AI‘s expected eventual and even imminent vindication, all appearances to the contrary notwithstanding. And still others have doubled down, distracting attention from the failures and problems bedeviling AI discourse simply by raising its pitch and stakes, no longer promising that artificial intelligence is around the corner but warning that artificial super-intelligence is coming soon to end human history.

    [Image: alien planet]

    Another strategy for coping with the failure of artificial intelligence on its conventional terms has assumed a higher profile among its champions lately, drawing support for the real plausibility of one science-fictional conceit — construction of artificial intelligence — by appealing to another science-fictional conceit, contact with alien intelligence. This rhetorical gambit has often been conjoined to the compensation of failed AI with its hyperbolic amplification into super-AI which I have already mentioned, and it is in that context that I have written about it before myself. But in a piece published a few days ago in The New York Times, “Outing A.I.: Beyond the Turing Test,” Benjamin Bratton, a professor of visual arts at U.C. San Diego and Director of a design think-tank, has elaborated a comparatively sophisticated case for treating artificial intelligence as alien intelligence with which we can productively grapple. Near the conclusion of his piece Bratton declares that “Musk, Gates and Hawking made headlines by speaking to the dangers that A.I. may pose. Their points are important, but I fear were largely misunderstood by many readers.” Of course these figures made their headlines by making the arguments about super-intelligence I have already rejected, and mentioning them seems to indicate Bratton’s sympathy with their gambit and even suggests that his argument aims to help us to understand them better on their own terms. Nevertheless, I take Bratton’s argument seriously not because of but in spite of this connection. Ultimately, Bratton makes a case for understanding AI as alien that does not depend on the deranging hyperbole and marketing of robocalypse or robo-rapture for its force.

    In the piece, Bratton claims “Our popular conception of artificial intelligence is distorted by an anthropocentric fallacy.” The point is, of course, well taken, and the litany he rehearses to illustrate it is enormously familiar by now as he proceeds to survey popular images from Kubrick’s HAL to Jonze’s Her and to document public deliberation about the significance of computation articulated through such imagery as the “rise of the machines” in the Terminator franchise or the need for Asimov’s famous fictional “Three Laws of Robotics.” It is easy — and may nonetheless be quite important — to agree with Bratton’s observation that our computational/media devices lack cruel intentions and are not susceptible to Asimovian consciences, and hence thinking about the threats and promises and meanings of these devices through such frames and figures is not particularly helpful to us even though we habitually recur to them by now. As I say, it would be easy and important to agree with such a claim, but Bratton’s proposal is in fact a somewhat different one:

    [A] mature A.I. is not necessarily a humanlike intelligence, or one that is at our disposal. If we look for A.I. in the wrong ways, it may emerge in forms that are needlessly difficult to recognize, amplifying its risks and retarding its benefits. This is not just a concern for the future. A.I. is already out of the lab and deep into the fabric of things. “Soft A.I.,” such as Apple’s Siri and Amazon recommendation engines, along with infrastructural A.I., such as high-speed algorithmic trading, smart vehicles and industrial robotics, are increasingly a part of everyday life.

    Here the serial failure of the program of artificial intelligence is redeemed simply by declaring victory. Bratton demonstrates that crying uncle does not preclude one from still crying wolf. It’s not that Siri is some sickly premonition of the AI-daydream still endlessly deferred, but that it represents the real rise of what robot cultist Hans Moravec once promised would be our “mind children” but here and now as elfen aliens with an intelligence unto themselves. It’s not that calling a dumb car a “smart” car is simply a hilarious bit of obvious marketing hyperbole, but represents the recognition of a new order of intelligent machines among us. Rather than criticize the way we may be “amplifying its risks and retarding its benefits” by reading computation through the inapt lens of intelligence at all, he proposes that we should resist holding machine intelligence to the standards that have hitherto defined it for fear of making its recognition “too difficult.”

    The kernel of legitimacy in Bratton’s inquiry is its recognition that “intelligence is notoriously difficult to define and human intelligence simply can’t exhaust the possibilities.” To deny these modest reminders is to indulge in what he calls “the pretentious folklore” of anthropocentrism. I agree that anthropocentrism in our attributions of intelligence has facilitated great violence and exploitation in the world, denying the dignity and standing of Cetaceans and Great Apes, but has also facilitated racist, sexist, xenophobic travesties by denigrating humans as beastly and unintelligent objects at the disposal of “intelligent” masters. “Some philosophers write about the possible ethical ‘rights’ of A.I. as sentient entities, but,” Bratton is quick to insist, “that’s not my point here.” Given his insistence that the “advent of robust inhuman A.I.” will force a “reality-based” “disenchantment” to “abolish the false centrality and absolute specialness of human thought and species-being,” which he blames in his concluding paragraph for providing “theological and legislative comfort to chattel slavery,” it is not entirely clear to me that emancipating artificial aliens is not finally among the stakes that move his argument, whatever his protestations to the contrary. But one can forgive him for not dwelling on such concerns: the denial of an intelligence and sensitivity provoking responsiveness and demanding responsibilities in us all to women, people of color, foreigners, children, the different, the suffering, nonhuman animals compels defensive and evasive circumlocutions that are simply not needed to deny intelligence and standing to an abacus or a desk lamp. It is one thing to warn of the anthropocentric fallacy but another to indulge in the pathetic fallacy.

    Bratton insists to the contrary that his primary concern is that anthropocentrism skews our assessment of real risks and benefits. “Unfortunately, the popular conception of A.I., at least as depicted in countless movies, games and books, still seems to assume that humanlike characteristics (anger, jealousy, confusion, avarice, pride, desire, not to mention cold alienation) are the most important ones to be on the lookout for.” And of course he is right. The champions of AI have been more than complicit in this popular conception, eager to attract attention and funds for their project among technoscientific illiterates drawn to such dramatic narratives. But we are distracted from the real risks of computation so long as we expect risks to arise from a machinic malevolence that has never been on offer nor even in the offing. Writes Bratton: “Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant. Worse than being seen as an enemy is not being seen at all.”

    But surely the inevitable question posed by Bratton’s disenchanting exposé at this point should be: Why, once we have set aside the pretentious folklore of machines with diabolical malevolence, do we not set aside as no less pretentiously folkloric the attribution of diabolical indifference to machines? Why, once we have set aside the delusive confusion of machine behavior with (actual or eventual) human intelligence, do we not set aside as no less delusive the confusion of machine behavior with intelligence altogether? There is no question that, were a gigantic bulldozer with an incapacitated driver to swerve from a construction site onto a crowded city thoroughfare, this would represent a considerable threat, but however tempting it might be in the fraught moment or reflective aftermath poetically to invest that bulldozer with either agency or intellect, it is clear that nothing would be gained in the practical comprehension of the threat it poses by so doing. It is no more helpful now in an epoch of Greenhouse storms than it was for pre-scientific storytellers to invest thunder and whirlwinds with intelligence. Although Bratton makes great play over the need to overcome folkloric anthropocentrism in our figuration of and deliberation over computation, mystifying agencies and mythical personages linger on in his accounting, however much he insists on the alienness of “their” intelligence.

    Bratton warns us about the “infrastructural A.I.” of high-speed financial trading algorithms, Google and Amazon search algorithms, “smart” vehicles (and no doubt weaponized drones and autonomous weapons systems would count among these), and corporate-military profiling programs that oppress us with surveillance and harass us with targeted ads. I share all of these concerns, of course, but personally insist that our critical engagement with infrastructural coding is profoundly undermined when it is invested with insinuations of autonomous intelligence. In “Art in the Age of Mechanical Reproducibility,” Walter Benjamin pointed out that when philosophers talk about the historical force of art they do so with the prejudices of philosophers: they tend to write about those narrative and visual forms of art that might seem argumentative in allegorical and iconic forms that appear analogous to the concentrated modes of thought demanded by philosophy itself. Benjamin proposed that perhaps the more diffuse and distracted ways we are shaped in our assumptions and aspirations by the durable affordances and constraints of the made world of architecture and agriculture might turn out to drive history as much or even more than the pet artforms of philosophers do. Lawrence Lessig made much the same point when he declared at the turn of the millennium that “Code Is Law.”

    It is well known that special interests with rich patrons shape the legislative process and sometimes even explicitly craft legislation word for word in ways that benefit them to the cost and risk of majorities. It is hard to see how our assessment of this ongoing crime and danger would be helped and not hindered by pretending legislation is an autonomous force exhibiting an alien intelligence, rather than a constellation of practices, norms, laws, institutions, ritual and material artifice, the legacy of the historical play of intelligent actors and the site for the ongoing contention of intelligent actors here and now. To figure legislation as a beast or alien with a will of its own would amount to a fetishistic displacement of intelligence away from the actual actors actually responsible for the forms that legislation actually takes. It is easy to see why such a displacement is attractive: it profitably abets the abuses of majorities by minorities while it absolves majorities from conscious complicity in the terms of their own exploitation by laws made, after all, in our names. But while these consoling fantasies have an obvious allure this hardly justifies our endorsement of them.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that the collapse of global finance in 2008 represented the working of inscrutable artificial intelligences facilitating rapid transactions and supporting novel financial instruments of what was called by Long Boom digerati the “new economy.” I wrote:

    It is not computers and programs and autonomous techno-agents who are the protagonists of the still unfolding crime of predatory plutocratic wealth-concentration and anti-democratizing austerity. The villains of this bloodsoaked epic are the bankers and auditors and captured-regulators and neoliberal ministers who employed these programs and instruments for parochial gain and who then exonerated and rationalized and still enable their crimes. Our financial markets are not so complex we no longer understand them. In fact everybody knows exactly what is going on. Everybody understands everything. Fraudsters [are] engaged in very conventional, very recognizable, very straightforward but unprecedentedly massive acts of fraud and theft under the cover of lies.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that our discomfiture in the setting of ubiquitous algorithmic mediation results from an autonomous force to which human intentions are secondary considerations. I wrote:

    [W]hat imaginary scene is being conjured up in this exculpatory rhetoric in which inadvertent cruelty is ‘coming from code’ as opposed to coming from actual persons? Aren’t coders actual persons, for example? … [O]f course I know what [is] mean[t by the insistence…] that none of this was ‘a deliberate assault.’ But it occurs to me that it requires the least imaginable measure of thought on the part of those actually responsible for this code to recognize that the cruelty of [one user’s] confrontation with their algorithm was the inevitable at least occasional result for no small number of the human beings who use Facebook and who live lives that attest to suffering, defeat, humiliation, and loss as well as to parties and promotions and vacations… What if the conspicuousness of [this] experience of algorithmic cruelty indicates less an exceptional circumstance than the clarifying exposure of a more general failure, a more ubiquitous cruelty? … We all joke about the ridiculous substitutions performed by autocorrect functions, or the laughable recommendations that follow from the odd purchase of a book from Amazon or an outing from Groupon. We should joke, but don’t, when people treat a word cloud as an analysis of a speech or an essay. We don’t joke so much when a credit score substitutes for the judgment whether a citizen deserves the chance to become a homeowner or start a small business, or when a Big Data profile substitutes for the judgment whether a citizen should become a heat signature for a drone committing extrajudicial murder in all of our names. [An] experience of algorithmic cruelty [may be] extraordinary, but that does not mean it cannot also be a window onto an experience of algorithmic cruelty that is ordinary. The question whether we might still ‘opt out’ from the ordinary cruelty of algorithmic mediation is not a design question at all, but an urgent political one.

    I have already written in the past about those who want to propose, as Bratton seems inclined to do in the present, that so-called Killer Robots are a threat that must be engaged by resisting or banning “them” in their alterity rather than by assigning moral and criminal responsibility to those who code, manufacture, fund, and deploy them. I wrote:

    Well-meaning opponents of war atrocities and engines of war would do well to think how tech companies stand to benefit from military contracts for ‘smarter’ software and bleeding-edge gizmos when terrorized and technoscientifically illiterate majorities and public officials take SillyCon Valley’s warnings seriously about our ‘complacency’ in the face of truly autonomous weapons and artificial super-intelligence that do not exist. It is crucial that necessary regulation and even banning of dangerous ‘autonomous weapons’ proceeds in a way that does not abet the mis-attribution of agency, and hence accountability, to devices. Every ‘autonomous’ weapons system expresses and mediates decisions by responsible humans usually all too eager to disavow the blood on their hands. Every legitimate fear of ‘killer robots’ is best addressed by making their coders, designers, manufacturers, officials, and operators accountable for criminal and unethical tools and uses of tools… There simply is no such thing as a smart bomb. Every bomb is stupid. There is no such thing as an autonomous weapon. Every weapon is deployed. The only killer robots that actually exist are human beings waging and profiting from war.

    “Arguably,” argues Bratton, “the Anthropocene itself is due less to technology run amok than to the humanist legacy that understands the world as having been given for our needs and created in our image. We hear this in the words of thought leaders who evangelize the superiority of a world where machines are subservient to the needs and wishes of humanity… This is the sentiment — this philosophy of technology exactly — that is the basic algorithm of the Anthropocenic predicament, and consenting to it would also foreclose adequate encounters with A.I.” The Anthropocene in this formulation names the emergence of environmental or planetary consciousness, an emergence sometimes coupled to the global circulation of the image of the fragility and interdependence of the whole earth as seen by humans from outer space. It is the recognition that the world in which we evolved to flourish might be impacted by our collective actions in ways that threaten us all. Notice, by the way, that multiculture and historical struggle are figured as just another “algorithm” here.

    I do not agree that planetary catastrophe inevitably followed from the conception of the earth as a gift besetting us to sustain us, indeed this premise understood in terms of stewardship or commonwealth would go far in correcting and preventing such careless destruction in my opinion. It is the false and facile (indeed infantile) conception of a finite world somehow equal to infinite human desires that has landed us and keeps us delusive ignoramuses lodged in this genocidal and suicidal predicament. Certainly I agree with Bratton that it would be wrong to attribute the waste and pollution and depletion of our common resources by extractive-industrial-consumer societies indifferent to ecosystemic limits to “technology run amok.” The problem of so saying is not that to do so disrespects “technology” — as presumably in his view no longer treating machines as properly “subservient to the needs and wishes of humanity” would more wholesomely respect “technology,” whatever that is supposed to mean — since of course technology does not exist in this general or abstract way to be respected or disrespected.

    The reality at hand is that humans are running amok in ways that are facilitated and mediated by certain technologies. What is demanded in this moment by our predicament is the clear-eyed assessment of the long-term costs, risks, and benefits of technoscientific interventions into finite ecosystems to the actual diversity of their stakeholders and the distribution of these costs, risks, and benefits in an equitable way. Quite a lot of unsustainable extractive and industrial production as well as mass consumption and waste would be rendered unprofitable and unappealing were its costs and risks widely recognized and equitably distributed. Such an understanding suggests that what is wanted is to insist on the culpability and situation of actually intelligent human actors, mediated and facilitated as they are in enormously complicated and demanding ways by technique and artifice. The last thing we need to do is invest technology-in-general or environmental-forces with alien intelligence or agency apart from ourselves.

    I am beginning to wonder whether the unavoidable and in many ways humbling recognition (unavoidable not least because of environmental catastrophe and global neoliberal precarization) that human agency emerges out of enormously complex and dynamic ensembles of interdependent/prostheticized actors gives rise to compensatory investments of some artifacts — especially digital networks, weapons of mass destruction, pandemic diseases, environmental forces — with the sovereign aspect of agency we no longer believe in for ourselves? It is strangely consoling to pretend our technologies in some fancied monolithic construal represent the rise of “alien intelligences,” even threatening ones, other than and apart from ourselves, not least because our own intelligence is an alienated one and prostheticized through and through. Consider the indispensability of pedagogical techniques of rote memorization, the metaphorization and narrativization of rhetoric in songs and stories and craft, the technique of the memory palace, the technologies of writing and reading, the articulation of metabolism and duration by timepieces, the shaping of both the body and its bearing by habit and by athletic training, the lifelong interplay of infrastructure and consciousness: all human intellect is already technique. All culture is prosthetic and all prostheses are culture.

    Bratton wants to narrate as a kind of progressive enlightenment the mystification he recommends that would invest computation with alien intelligence and agency while at once divesting intelligent human actors, coders, funders, users of computation of responsibility for the violations and abuses of other humans enabled and mediated by that computation. This investment with intelligence and divestment of responsibility he likens to the Copernican Revolution in which humans sustained the momentary humiliation of realizing that they were not the center of the universe but received in exchange the eventual compensation of incredible powers of prediction and control. One might wonder whether the exchange of the faith that humanity was the apple of God’s eye for a new technoscientific faith in which we aspired toward godlike powers ourselves was really so much a humiliation as the exchange of one megalomania for another. But what I want to recall by way of conclusion instead is that the trope of a Copernican humiliation of the intelligent human subject is already quite a familiar one:

    In his Introductory Lectures on Psychoanalysis Sigmund Freud notoriously proposed that

    In the course of centuries the naive self-love of men has had to submit to two major blows at the hands of science. The first was when they learnt that our earth was not the center of the universe but only a tiny fragment of a cosmic system of scarcely imaginable vastness. This is associated in our minds with the name of Copernicus… The second blow fell when biological research destroyed man’s supposedly privileged place in creation and proved his descent from the animal kingdom and his ineradicable animal nature. This revaluation has been accomplished in our own days by Darwin… though not without the most violent contemporary opposition. But human megalomania will have suffered its third and most wounding blow from the psychological research of the present time which seeks to prove to the ego that it is not even master in its own house, but must content itself with scanty information of what is going on unconsciously in the mind.

    However we may feel about psychoanalysis as a pseudo-scientific enterprise that did more therapeutic harm than good, Freud’s works considered instead as contributions to moral philosophy and cultural theory have few modern equals. The idea that human consciousness is split from the beginning as the very condition of its constitution, the creative if self-destructive result of an impulse of rational self-preservation beset by the overabundant irrationality of humanity and history, imposed a modesty incomparably more demanding than Bratton’s wan proposal in the same name. Indeed, to the extent that the irrational drives of the dynamic unconscious are often figured as a brute machinic automatism, one is tempted to suggest that Bratton’s modest proposal of alien artifactual intelligence is a fetishistic disavowal of the greater modesty demanded by the alienating recognition of the stratification of human intelligence by unconscious forces (and his moniker a symptomatic citation). What is striking about the language of psychoanalysis is the way it has been taken up to provide resources for imaginative empathy across the gulf of differences: whether in the extraordinary work of recent generations of feminist, queer, and postcolonial scholars re-orienting the project of the conspicuously sexist, heterosexist, cissexist, racist, imperialist, bourgeois thinker who was Freud to emancipatory ends, or in the stunning leaps in which Freud identified with neurotic others through psychoanalytic reading, going so far as to find in the paranoid system-building of the psychotic Dr. Schreber an exemplar of human science and civilization and a mirror in which he could see reflected both himself and psychoanalysis itself. Freud’s Copernican humiliation opened up new possibilities of responsiveness in difference out of which could be built urgently necessary responsibilities otherwise. I worry that Bratton’s Copernican modesty opens up new occasions for techno-fetishistic fables of history and disavowals of responsibility for its actual human protagonists.
    _____

    Dale Carrico is a member of the visiting faculty at the San Francisco Art Institute as well as a lecturer in the Department of Rhetoric at the University of California at Berkeley from which he received his PhD in 2005. His work focuses on the politics of science and technology, especially peer-to-peer formations and global development discourse and is informed by a commitment to democratic socialism (or social democracy, if that freaks you out less), environmental justice critique, and queer theory. He is a persistent critic of futurological discourses, especially on his Amor Mundi blog, on which an earlier version of this post first appeared.


  • Something About the Digital

    Something About the Digital

    By Alexander R. Galloway
    ~

    (This catalog essay was written in 2011 for the exhibition “Chaos as Usual,” curated by Hanne Mugaas at the Bergen Kunsthall in Norway. Artists in the exhibition included Philip Kwame Apagya, Ann Craven, Liz Deschenes, Thomas Julier [in collaboration with Cédric Eisenring and Kaspar Mueller], Olia Lialina and Dragan Espenschied, Takeshi Murata, Seth Price, and Antek Walczak.)

    There is something about the digital. Most people aren’t quite sure what it is. Or what they feel about it. But something.

    In 2001 Lev Manovich said it was a language. For Steven Shaviro, the issue is being connected. Others talk about “cyber” this and “cyber” that. Is the Internet about the search (John Battelle)? Or is it rather, even more primordially, about the information (James Gleick)? Whatever it is, something is afoot.

    What is this something? Given the times in which we live, it is ironic that this term is so rarely defined and even more rarely defined correctly. But the definition is simple: the digital means the one divides into two.

    Digital doesn’t mean machine. It doesn’t mean virtual reality. It doesn’t even mean the computer – there are analog computers after all, like grandfather clocks or slide rules. Digital means the digits: the fingers and toes. And since most of us have a discrete number of fingers and toes, the digital has come to mean, by extension, any mode of representation rooted in individually separate and distinct units. So the natural numbers (1, 2, 3, …) are aptly labeled “digital” because they are separate and distinct, but the arc of a bird in flight is not because it is smooth and continuous. A reel of celluloid film is correctly called “digital” because it contains distinct breaks between each frame, but the photographic frames themselves are not because they record continuously variable chromatic intensities.

    We must stop believing the myth, then, about the digital future versus the analog past. For the digital died its first death in the continuous calculus of Newton and Leibniz, and the curvilinear revolution of the Baroque that came with it. And the digital has suffered a thousand blows since, from the swirling vortexes of nineteenth-century thermodynamics, to the chaos theory of recent decades. The switch from analog computing to digital computing in the middle twentieth century is but a single battle in the multi-millennial skirmish within western culture between the unary and the binary, proportion and distinction, curves and jumps, integration and division – in short, over when and how the one divides into two.

    What would it mean to say that a work of art divides into two? Or to put it another way, what would art look like if it began to meditate on the one dividing into two? I think this is the only way we can truly begin to think about “digital art.” And because of this we shall leave Photoshop, and iMovie, and the Internet and all the digital tools behind us, because interrogating them will not nearly begin to address these questions. Instead look to Ann Craven’s paintings. Or look to the delightful conversation sparked here between Philip Kwame Apagya and Liz Deschenes. Or look to the work of Thomas Julier, even to a piece of his not included in the show, “Architecture Reflecting in Architecture” (2010, made with Cedric Eisenring), which depicts a rectilinear cityscape reflected inside the mirror skins of skyscrapers, just like Saul Bass’s famous title sequence in North By Northwest (1959).

    Liz Deschenes, “Green Screen 4” (2001)

    All of these works deal with the question of twoness. But it is twoness only in a very particular sense. This is not the twoness of the doppelganger of the romantic period, or the twoness of the “split mind” of the schizophrenic, and neither is it the twoness of the self/other distinction that so forcefully animated culture and philosophy during the twentieth century, particularly in cultural anthropology and then later in poststructuralism. Rather we see here a twoness of the material, a digitization at the level of the aesthetic regime itself.

    Consider the call and response heard across the works featured here by Apagya and Deschenes. At the most superficial level, one might observe that these are works about superimposition, about compositing. Apagya’s photographs exploit one of the oldest and most useful tricks of picture making: superimpose one layer on top of another layer in order to produce a picture. Painters do this all the time of course, and very early on it became a mainstay of photographic technique (even if it often remained relegated to mere “trick” photography), evident in photomontage, spirit photography, and even the side-by-side compositing techniques of the carte de visite popularized by André-Adolphe-Eugène Disdéri in the 1850s. Recall too that the cinema has made productive use of superimposition, adopting the technique with great facility from the theater and its painted scrims and moving backdrops. (Perhaps the best illustration of this comes at the end of A Night at the Opera [1935], when Harpo Marx goes on a lunatic rampage through the flyloft during the opera’s performance, raising and lowering painted backdrops to great comic effect.) So the more “modern” cinematic techniques of, first, rear screen projection, and then later chromakey (known commonly as the “green screen” or “blue screen” effect), are but a reiteration of the much longer legacy of compositing in image making.

    Deschenes’ “Green Screen #4” points to this broad aesthetic history, as it empties out the content of the image, forcing us to acknowledge the suppressed color itself – in this case green, but any color will work. Hence Deschenes gives us nothing but a pure background, a pure something.

    Allowed to curve gracefully off the wall onto the floor, the green color field resembles the “sweep wall” used commonly in portraiture or fashion photography whenever an artist wishes to erase the lines and shadows of the studio environment. “Green Screen #4” is thus the antithesis of what has remained for many years the signal art work about video chromakey, Peter Campus’ “Three Transitions” (1973). Whereas Campus attempted to draw attention to the visual and spatial paradoxes made possible by chromakey, and even in so doing was forced to hide the effect inside the jittery gaps between images, Deschenes by contrast feels no such anxiety, presenting us with the medium itself, minus any “content” necessary to fuel it, minus the powerful mise en abyme of the Campus video, and so too minus Campus’ mirthless autobiographical staging. If Campus ultimately resolves the relationship between images through a version of montage, Deschenes offers something more like a “divorced digitality” in which no two images are brought into relation at all, only the minimal substrate remains, without input or output.

    The sweep wall is evident too in Apagya’s images, only of a different sort, as the artifice of the various backgrounds – in a nod not so much to fantasy as to kitsch – both fuses with and separates from the foreground subject. Yet what might ultimately unite the works by Apagya and Deschenes is not so much the compositing technique, but a more general reference, albeit oblique but nevertheless crucial, to the fact that such techniques are today entirely quotidian, entirely usual. These are everyday folk techniques through and through. One needs only a web cam and simple software to perform chromakey compositing on a computer, just as one might go to the county fair and have one’s portrait superimposed on the body of a cartoon character.
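
    To underline just how quotidian the technique has become, here is a minimal sketch of chromakey compositing in Python with NumPy – a toy illustration only, with an invented threshold and toy arrays rather than any of the works discussed here. Wherever a foreground pixel reads as sufficiently “green,” the corresponding background pixel shows through.

    import numpy as np

    def chroma_key(foreground, background, threshold=60):
        """Naive green-screen compositing on HxWx3 uint8 RGB arrays.

        A pixel counts as 'screen' when its green channel exceeds both the
        red and blue channels by more than `threshold`; those pixels are
        replaced by the corresponding background pixels.
        """
        fg = foreground.astype(np.int16)             # avoid uint8 overflow
        r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
        mask = (g - r > threshold) & (g - b > threshold)
        out = foreground.copy()
        out[mask] = background[mask]
        return out

    # Toy example: a 2x2 "portrait" whose top row is pure green screen.
    fg = np.array([[[0, 255, 0], [0, 255, 0]],
                   [[200, 180, 170], [190, 170, 160]]], dtype=np.uint8)
    bg = np.full((2, 2, 3), 30, dtype=np.uint8)      # a flat painted backdrop
    print(chroma_key(fg, bg))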

    What I’m trying to stress here is that there is nothing particularly “technological” about digitality. All that is required is a division from one to two – and by extension from two to three and beyond to the multiple. This is why I see layering as so important, for it spotlights an internal separation within the image. Apagya’s settings are digital, therefore, simply by virtue of the fact that he addresses our eye toward two incompatible aesthetic zones existing within the image. The artifice of a painted backdrop, and the pose of a person in a portrait.

    Certainly the digital computer is “digital” by virtue of being binary, which is to say by virtue of encoding and processing numbers at the lowest levels using base-two mathematics. But that is only the most prosaic and obvious exhibit of its digitality. For the computer is “digital” too in its atomization of the universe, into, for example, a million Facebook profiles, all equally separate and discrete. Or likewise “digital” too in the computer interface itself which splits things irretrievably into cursor and content, window and file, or even, as we see commonly in video games, into heads-up-display and playable world. The one divides into two.

    So when clusters of repetition appear across Ann Craven’s paintings, or the iterative layers of the “copy” of the “reconstruction” in the video here by Thomas Julier and Cédric Eisenring, or the accumulations of images that proliferate in Olia Lialina and Dragan Espenschied’s “Comparative History of Classic Animated GIFs and Glitter Graphics” [2007] (a small snapshot of what they have assembled in their spectacular book from 2009 titled Digital Folklore), or elsewhere in works like Oliver Laric’s clipart videos (“787 Cliparts” [2006] and “2000 Cliparts” [2010]), we should not simply recall the famous meditations on copies and repetitions, from Walter Benjamin in 1936 to Gilles Deleuze in 1968, but also a larger backdrop that evokes the very cleavages emanating from western metaphysics itself from Plato onward. For this same metaphysics of division is always already a digital metaphysics as it forever differentiates between subject and object, Being and being, essence and instance, or original and repetition. It shouldn’t come as a surprise that we see here such vivid aesthetic meditations on that same cleavage, whether or not a computer was involved.

    Another perspective on the same question would be to think about appropriation. There is a common way of talking about Internet art that goes roughly as follows: the beginning of net art in the middle to late 1990s was mostly “modernist” in that it tended to reflect back on the possibilities of the new medium, building an aesthetic from the material affordances of code, screen, browser, and jpeg, just as modernists in painting or literature built their own aesthetic style from a reflection on the specific affordances of line, color, tone, or timbre; whereas the second phase of net art, coinciding with “Web 2.0” technologies like blogging and video sharing sites, is altogether more “postmodern” in that it tends to co-opt existing material into recombinant appropriations and remixes. If something like the “WebStalker” web browser or the Jodi.org homepage is emblematic of the first period, then John Michael Boling’s “Guitar Solo Threeway,” Brody Condon’s “Without Sun,” or the Nasty Nets web surfing club, now sadly defunct, are emblematic of the second period.

    I’m not entirely unsatisfied by such a periodization, even if it tends to confuse as many things as it clarifies – not entirely unsatisfied because it indicates that appropriation too is a technique of digitality. As Martin Heidegger signals, by way of his notoriously enigmatic concept Ereignis, western thought and culture was always a process in which a proper relationship of belonging is established in a world, and so too appropriation establishes new relationships of belonging between objects and their contexts, between artists and materials, and between viewers and works of art. (Such is the definition of appropriation after all: to establish a belonging.) This is what I mean when I say that appropriation is a technique of digitality: it calls out a distinction in the object from “where it was prior” to “where it is now,” simply by removing that object from one context of belonging and separating it out into another. That these two contexts are merely different – that something has changed – is evidence enough of the digitality of appropriation. Even when the act of appropriation does not reduplicate the object or rely on multiple sources, as with the artistic ready-made, it still inaugurates a “twoness” in the appropriated object, an asterisk appended to the art work denoting that something is different.

    Takeshi Murata, “Cyborg” (2011)

    Perhaps this is why Takeshi Murata continues his exploration of the multiplicities at the core of digital aesthetics by returning to that age old format, the still life. Is not the still life itself a kind of appropriation, in that it brings together various objects into a relationship of belonging: fig and fowl in the Dutch masters, or here the various detritus of contemporary cyber culture, from cult films to iPhones?

    Because appropriation brings things together, it must grapple with a fundamental question: how do the things it assembles relate to one another? Whatever is brought together must form a relation. These various things must sit side-by-side with each other. Hence one might speak of any grouping of objects in terms of their "parallel" nature, that is to say, in terms of the way in which they maintain their multiple identities in parallel.

    But let us dwell for a moment longer on these agglomerations of things, and in particular their “parallel” composition. By parallel I mean the way in which digital media tend to segregate and divide art into multiple, separate channels. These parallel channels may be quite manifest, as in the separate video feeds that make up the aforementioned “Guitar Solo Threeway,” or they may issue from the lowest levels of the medium, as when video compression codecs divide the moving image into small blocks of pixels that move and morph semi-autonomously within the frame. In fact I have found it useful to speak of this in terms of the “parallel image” in order to differentiate today’s media making from that of a century ago, which Friedrich Kittler and others have chosen to label “serial” after the serial sequences of the film strip, or the rat-ta-tat-tat of a typewriter.

    Thus films like Tatjana Marusic's "The Memory of a Landscape" (2004) or Takeshi Murata's "Monster Movie" (2005) are genuinely digital films, for they show parallelity in inscription. Each individual block in the video compression scheme has its own autonomy and is able to write to the screen in parallel with all the other blocks. These are quite literally, then, "multichannel" videos – we might even take a cue from online gaming circles and label them "massively multichannel" videos. They are multichannel not because they require multiple monitors, but because each individual block or "channel" within the image acts as an individual micro video feed. Each color block is its own channel. Thus, the video compression scheme illustrates, through metonymy, how pixel images work in general, and, as I suggest, it also illustrates the larger currents of digitality, for it shows that these images, in order to create "an" image, must first proliferate the division of sub-images, which themselves ultimately coalesce into something resembling a whole. In other words, in order to create a "one" they must first bifurcate the single image source into two or more separate images.

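    To make the figure of the "parallel image" concrete, here is a minimal sketch in Python of the block logic described above: a frame divided into small blocks, each updated independently of the others, like a micro video feed of its own. It is an illustration only, not any codec's actual algorithm; the frame size, the 8x8 block size (a common choice in real codecs), and the update rule are arbitrary assumptions.

        import random

        WIDTH, HEIGHT = 64, 48   # hypothetical frame size in pixels
        BLOCK = 8                # hypothetical block size

        # represent the frame as a grid of blocks, each holding a single gray value
        blocks = {(bx, by): 128
                  for bx in range(WIDTH // BLOCK)
                  for by in range(HEIGHT // BLOCK)}

        def step(blocks):
            # advance every block "channel" independently by a small random drift
            return {pos: max(0, min(255, value + random.randint(-8, 8)))
                    for pos, value in blocks.items()}

        # each call to step() rewrites all blocks "in parallel": no block consults another
        for frame_number in range(3):
            blocks = step(blocks)
            print(frame_number, list(blocks.items())[:4])  # peek at a few block channels

    Nothing in the sketch lets one block consult another; the "one" image only ever appears as the coalescence of many independent channels.
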
    The digital image is thus a cellular and discrete image, consisting of separate channels multiplexed in tandem or triplicate or, greater, into nine, twelve, twenty-four, one hundred, or indeed into a massively parallel image of a virtually infinite visuality.

    For me this generates a more appealing explanation for why art and culture has, over the last several decades, developed a growing anxiety over copies, repetitions, simulations, appropriations, reenactments – you name it. It is common to attribute such anxiety to a generalized disenchantment permeating modern life: our culture has lost its aura and can no longer discern an original from a copy due to endless proliferations of simulation. Such an assessment is only partially correct. I say only partially because I am skeptical of the romantic nostalgia that often fuels such pronouncements. For who can demonstrate with certainty that the past carried with it a greater sense of aesthetic integrity, a greater unity in art? Yet the assessment begins to adopt a modicum of sense if we consider it from a different point of view, from the perspective of a generalized digitality. For if we define the digital as “the one dividing into two,” then it would be fitting to witness works of art that proliferate these same dualities and multiplicities. In other words, even if there was a “pure” aesthetic origin it was a digital origin to begin with. And thus one needn’t fret over it having infected our so-called contemporary sensibilities.

    Instead it is important not to be blinded by the technology, but rather to determine that, within a generalized digitality, there must be some kind of differential at play. There must be something different, and without such a differential it is impossible to say that something is something (rather than something else, or indeed rather than nothing). The one must divide into something else. Nothing less and nothing more is required, only a generic difference. And this is our first insight into the "something" of the digital.

    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2006); Gaming: Essays on Algorithmic Culture (University of Minnesota, 2006); The Interface Effect (Polity, 2012); and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here in 2014. Galloway has recently been writing brief notes on media and digital culture and theory at his blog, on which this post first appeared.

    Back to the essay

  • Cultivating Reform and Revolution

    Cultivating Reform and Revolution

    a review of William E. Connolly, The Fragility of Things: Self-Organizing Processes, Neoliberal Fantasies, and Democratic Activism (Duke University Press, 2013)
    by Zachary Loeb
    ~

    Mountains and rivers, skyscrapers and dams – the world is filled with objects and structures that appear sturdy. Glancing upwards at a skyscraper or a mountain, a person may know that these obelisks will not remain eternally unchanged, but in the moment of the glance one maintains a certain casual confidence that they are not about to crumble suddenly. Yet skyscrapers collapse, mountains erode, rivers run dry or change course, and dams crack under the pressure of the waters they hold. Even equipped with this knowledge it is still tempting to view such structures as enduringly solid. Perhaps the residents of Lisbon, in November of 1755, had a similar faith in the sturdiness of the city they had built, a faith that was shattered in an earthquake – and aftershocks – that demonstrated all too terribly the fragility at the core of all physical things.

    The Lisbon earthquake, along with its cultural reverberations, provides the point of entry for William E. Connolly's discussion of neoliberalism, ecology, activism, and the deceptive solidness of the world in his book The Fragility of Things. Beyond its relevance as an example of the natural tremors that can reduce the built world to rubble, the Lisbon earthquake provides Connolly (the Krieger-Eisenhower Professor of Political Science at the Johns Hopkins University) a vantage point from which to mark out and critique a Panglossian worldview he sees as prominent in contemporary society. No doubt, were Voltaire's Pangloss alive today, he could find ready employment as an apologist for neoliberalism (perhaps as one of Silicon Valley's evangelists). Like Panglossian philosophy, neoliberalism "acknowledges many evils and treats them as necessary effects" (6).

    Though the world has changed significantly since the mid-18th century during which Voltaire wrote, humanity remains assaulted by events that demonstrate the world's fragility. Connolly counsels against the withdrawal to which the protagonists of Candide finally consign themselves, even as he takes up the famous trope Voltaire develops for that withdrawal: today we "cultivate our gardens" in a world in which the future of all gardens is uncertain. Under the specter of climate catastrophe, "to cultivate our gardens today means to engage the multiform relations late capitalism bears to the entire planet" (6). Connolly argues for an "ethic of cultivation" that can show "both how fragile the ethical life is and how important it is to cultivate it" (17). "Cultivation," as developed in The Fragility of Things, stands in opposition to withdrawal. Instead it entails serious, ethically guided, activist engagement with the world – for us to recognize the fragility of natural, and human-made, systems (Connolly uses the term "force-fields") and to act to protect this "fragility" instead of celebrating neoliberal risks that render the already precarious all the more tenuous.

    Connolly argues that when natural disasters strike, and often in their wake set off rippling cascades of additional catastrophes, they exemplify the “spontaneous order” so beloved by neoliberal economics. Under neoliberalism, the market is treated as though it embodies a uniquely omniscient, self-organizing and self-guiding principle. Yet the economic system is not the only one that can be described this way: “open systems periodically interact in ways that support, amplify, or destabilize one another” (25). Even in the so-called Anthropocene era the ecosystem, much to humanity’s chagrin, can still demonstrate creative and unpredictable potentialities. Nevertheless, the ideological core of neoliberalism relies upon celebrating the market’s self-organizing capabilities whilst ignoring the similar capabilities of governments, the public sphere, or the natural world. The ascendancy of neoliberalism runs parallel with an increase in fragility as economic inequality widens and as neoliberalism treats the ecosystem as just another profit source. Fragility is everywhere today, and though the cracks are becoming increasingly visible, it is still given – in Connolly’s estimation – less attention than is its due, even in “radical theory.” On this issue Connolly wonders if perhaps “radical theorists,” and conceivably radical activists, “fear that coming to terms with fragility would undercut the political militancy needed to respond to it?” (32). Yet Connolly sees no choice but to “respond,” envisioning a revitalized Left that can take action with a mixture of advocacy for immediate reforms while simultaneously building towards systemic solutions.

    Critically engaging with the thought of core neoliberal thinker and "spontaneous order" advocate Friedrich Hayek, Connolly demonstrates the way in which neoliberal ideology has been inculcated throughout society, even and especially amongst those whose lives have been made more fragile by neoliberalism: "a neoliberal economy cannot sustain itself unless it is supported by a self-conscious ideology internalized by most participants that celebrates the virtues of market individualism, market autonomy and a minimal state" (58). An army of Panglossian commentators must be deployed to remind the wary watchers that everything is for the best. That a high level of state intervention may be required to bolster and disseminate this ideology, and prop up neoliberalism, is wholly justified in a system that recognizes only neoliberalism as a source for creative self-organizing processes; indeed, "sometimes you get the impression that 'entrepreneurs' are the sole paradigms of creativity in the Hayekian world" (66). Resisting neoliberalism, for Connolly, requires remembering the sources of creativity that occur outside of a market context and seeing how these other systems demonstrate self-organizing capacities.

    Within neoliberalism the market is treated as the ethical good, but Connolly works to counter this with “an ethic of cultivation” which works not only against neoliberalism but against certain elements of Kant’s philosophy. In Connolly’s estimation Kantian ethics provide some of the ideological shoring up for neoliberalism, as at times “Kant both prefigures some existential demands unconsciously folded into contemporary neoliberalism and reveals how precarious they in fact are. For he makes them postulates” (117). Connolly sees a certain similarity between the social conditioning that Kant saw as necessary for preparing the young to “obey moral law” and the ideological conditioning that trains people for life under neoliberalism – what is shared is a process by which a self-organizing system must counter people’s own self-organizing potential by organizing their reactions. Furthermore “the intensity of cultural desires to invest hopes in the images of self-regulating interest within markets and/or divine providence wards off acknowledgment of the fragility of things” (118). Connolly’s “ethic of cultivation” appears as a corrective to this ethic of inculcation – it features “an element of tragic possibility within it” (133) which is the essential confrontation with the “fragility” that may act as a catalyst for a new radical activism.

    In the face of impending doom, neoliberalism will once more have an opportunity to demonstrate its creativity even as this very creativity will have reverberations that will potentially unleash further disasters. Facing the possible catastrophe means that "we may need to recraft the long debate between secular, linear, and deterministic images of the world on the one hand and divinely touched, voluntarist, providential, and/or punitive images on the other" (149). Creativity, and the potential for creativity, is once more essential – as it is the creativity in multiple self-organizing systems that has created the world, for better or worse, around us today. Bringing his earlier discussions of Kant into conversation with the thought of Whitehead and Nietzsche, Connolly further considers the place of creative processes in shaping and reshaping the world. Nietzsche, in particular, provides Connolly with a way to emphasize the dispersion of creativity by removing the province of creativity from the control of God to treat it as something naturally recurring across various "force-fields." A different demand thus takes shape wherein "we need to slow down and divert human intrusions into various planetary force fields, even as we speed up efforts to reconstitute the identities, spiritualities, consumption practices, market faiths, and state policies entangled with them" (172), though neoliberalism knows but one speed: faster.

    An odd dissonance occurs at present wherein people are confronted with the seeming triumph of neoliberal capitalism (one can hear the echoes of "there is no alternative") and the warnings pointing to the fragility of things. In this context, for Connolly, withdrawal is irresponsible; it would be to "cultivate a garden" when what is needed is an "ethic of cultivation." Neoliberal capitalism has trained people to accept the strictures of its ideology, but now is a time when different roles are needed; it is a time to become "role experimentalists" (187). Such experiments may take a variety of forms that run the gamut from "reformist" to "revolutionary" and back again, but the process of such experimentation can break the training of neoliberalism and demonstrate other ways of living, interacting, being and having. Connolly does not put forth a simple solution for the challenges facing humanity; instead he emphasizes how recognizing the "fragility of things" allows people to come to terms with these challenges. After all, it may be that neoliberalism only appears so solid because we have forgotten that it is not actually a naturally occurring mountain but a human-built pyramid – and our backs are its foundation.

    * * *

    In the "First Interlude," on page 45, Connolly poses a question that haunts the remainder of The Fragility of Things; the question – asked in the midst of a brief discussion of the 2011 Lars von Trier film Melancholia – is, "How do you prepare for the end of the world?" It is the sort of disarming and discomforting question that in its cold honesty forces readers to face a conclusion they may not want to consider. It is a question that evokes the deceptively simple acronym FRED (Facing the Reality of Extinction and Doom). And yet there is something refreshing in the question – many have heard the recommendations about what must be done to halt climate catastrophe, but how many believe these steps will be taken? Indeed, even though Connolly claims "we need to slow down" there are also those who, to the contrary, insist that what is needed is even greater acceleration. Granted, Connolly does not pose this question on the first page of his book, and had he done so The Fragility of Things could have easily appeared as a dismissible dirge. Wisely, Connolly recognizes that "a therapist, a priest, or a philosopher might stutter over such questions. Even Pangloss might hesitate" (45); one of the core strengths of The Fragility of Things is that it does not "stutter over such questions" but realizes that such questions require an honest reckoning, which includes being willing to ask "How do you prepare for the end of the world?"

    William Connolly's The Fragility of Things is both ethically and intellectually rigorous, demanding readers perceive the "fragility" of the world around them even as it lays out the ways in which the world around them derives its stability from making that very fragility invisible. Though it may seem that there are relatively simple concerns at the core of The Fragility of Things, Connolly never succumbs to simplistic argumentation – preferring the fine-toothed complexity that allows moments of fragility to be fully understood. The tone and style of The Fragility of Things suggest a readership consisting primarily of academics, activists, and those who see themselves as both. It is a book that wastes no time trying to convince its reader that "climate change is real" or "neoliberalism is making things worse," and the book is more easily understood if a reader begins with at least a basic acquaintance with the thought of Hayek, Kant, Whitehead, and Nietzsche. Even if not every reader of The Fragility of Things has dwelled for hours upon the question of "How do you prepare for the end of the world?" the book seems to expect that this question lurks somewhere in the subconscious of the reader.

    Amidst Connolly's discussions of ethics, fragility and neoliberalism, he devotes much of the book to arguing for the need for a revitalized, active, and committed Left – one that would conceivably do more than hold large marches and then disappear. While Connolly cautions against "giving up" on electoral politics he does evince a distrust for US party politics; to the extent that Connolly appears to be a democrat, it is as a democrat with a lowercase d. Drawing inspiration from the wave of protests in and around 2011, Connolly expresses the need for a multi-issue, broadly supported, international (and internationalist) Left that can organize effectively to win small-scale local reforms while building the power to truly challenge the grip of neoliberalism. The goal, as Connolly envisions it, is to eventually "mobilize enough collective energy to launch a general strike simultaneously in several countries in the near future" even as Connolly remains cognizant of threats that "the emergence of a neofascist or mafia-type capitalism" can pose (39). Connolly's focus on the often slow, "traditional" activist strategies of organizing should not be overlooked, as his focus on mobilizing large numbers of people acts as a retort to a utopian belief that "technology will fix everything." The "general strike" as the democratic response once electoral democracy has gone awry is a theme with which Connolly concludes, as he calls for his readership to take part in helping to bring together "a set of interacting minorities in several countries for the time when we coalesce around a general strike launched in several states simultaneously" (195). Connolly emphasizes the types of localized activism and action that are also necessary, but "the general strike" is iconic as the way to challenge neoliberalism. In emphasizing "the general strike" Connolly stakes out a position in which people have an obligation to actively challenge existing neoliberalism; waiting for capitalism to collapse due to its own contradictions (or trying to accelerate those contradictions) does not appear to be a viable tactic.

    All of which raises something of a prickly question for The Fragility of Things: which element of the book strikes the reader as more outlandish, the question of how to prepare for the end of the world, or the prospect of a renewed Left launching "a general strike…in the near future"? This question is not asked idly or as provocation; the goal here is in no way to traffic in Leftist apocalyptic romanticism. Yet experience in current activism and organizing does not necessarily imbue one with great confidence in the prospect of a city-wide general strike (in the US), to say nothing of an international one. Activists may be acutely aware of the creative potentials and challenges faced by repressed communities, precarious labor, the ecosystem, and so forth – but these same activists are aware of the solidity of militarized police forces, a reactionary culture industry, and neoliberal dominance. Current, committed activists' awareness of the challenges they face makes it seem rather odd that Connolly suggests that radical theorists have ignored "fragility." Indeed many radical thinkers, or at least some (Grace Lee Boggs and Franco "Bifo" Berardi, to name just two), seem to have warned consistently of "fragility" – even if they do not always use that exact term. Nevertheless, here the challenge may not be the Sisyphean work of activism but the rather cynical answer many non-activists give to the question of "How does one prepare for the end of the world?" That answer? Download some new apps, binge watch a few shows, enjoy the sci-fi cool of the latest gadget, and otherwise eat, drink and be merry because we'll invent something to solve tomorrow's problems next week. Neoliberalism has trained people well.

    That answer, however, is the type that Connolly seems to find untenable, and his apparent hope in The Fragility of Things is that most readers will also find this answer unacceptable. Thus Connolly's "ethic of cultivation" returns and shows its value again. "Our lives are messages" (185), Connolly writes, and thus the actions that an individual takes to defend "fragility" and oppose neoliberalism act as a demonstration to others that different ways of being are possible.

    What The Fragility of Things makes clear is that an “ethic of cultivation” is not a one-off event but an ongoing process – cultivating a garden, after all, is something that takes time. Some gardens require years of cultivation before they start to bear fruit.

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck. He is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay

  • Trickster Makes This Web: The Ambiguous Politics of Anonymous

    Trickster Makes This Web: The Ambiguous Politics of Anonymous

    a review of Gabriella Coleman, Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous (Verso, 2014)
    by Gavin Mueller
    ~

    Gabriella Coleman's Hacker, Hoaxer, Whistleblower, Spy (HHWS) tackles a difficult and pressing subject: the amorphous hacker organization Anonymous. The book is not a strictly academic work. Instead, it unfolds as a lively history of a subculture of geeks, peppered with snippets of cultural theory and autobiographical portions. As someone interested in a more sustained theoretical exposition of Anonymous's organizing and politics, I was a bit disappointed, though Coleman has opted for a more readable style; in fact, this readability is the book's best asset. However, while containing a number of insights of interest to the general reader, the book ultimately falters as an assessment of Anonymous's political orientation, or the state of hacker politics in general.

    Coleman begins with a discussion of online trolling, a common antagonistic online cultural practice; many Anons cut their troll teeth at the notorious 4chan message board. Trolling aims to create "lulz," a kind of digital schadenfreude produced by pranks, insults and misrepresentations. According to Coleman, the lulz are "a form of cultural differentiation and a tool or weapon used to attack, humiliate, and defame" rooted in the use of "inside jokes" of those steeped in the codes of Internet culture (32). Coleman argues that the lulz have a deeper significance: they "puncture the consensus around our politics and ethics, our social lives and our aesthetic sensibilities." But trolling can be better understood through an offline frame of reference: hazing. Trolling is a means by which geeks have historically policed the boundaries of the subcultural corners of the Internet. If you can survive the epithets and obscene pictures, you might be able to hang. That trolling often takes the form of misogynist, racist and homophobic language is unsurprising: early Net culture was predominantly white and male, a demographic fact which overdetermines the shape of resentment towards "newbies" (or in 4chan's unapologetically offensive argot, "newfags"). The lulz is joy that builds community, but almost always at someone else's expense.

    Coleman, drawing upon her background as an anthropologist, conceptualizes the troll as an instantiation of the trickster archetype which recurs throughout mythology and folklore. Tricksters, she argues, like trolls and Anonymous, are liminal figures who defy norms and revel in causing chaos. This kind of application of theory is a common technique in cultural studies, where seemingly apolitical or even anti-social transgressions, like punk rock or skateboarding, can be politicized with a dash of Bakhtin or de Certeau. Here it creates difficulties. There is one major difference between the spider spirit Anansi and Coleman's main informant on trolling, the white supremacist hacker weev: Anansi is fictional, while weev is a real person who writes op-eds for neo-Nazi websites. The trickster archetype, a concept crafted for comparative structural analysis of mythology, does little to explain the actually existing social practice of trolling. Instead it renders that practice more complicated, ambiguous, and uncertain. These difficulties are compounded as the analysis moves to Anonymous. Anonymous doesn't merely enact a submerged politics via style or symbols. It engages in explicitly political projects, complete with manifestos, though Coleman continues to return to transgression as one of its salient features.

    The trolls of 4chan, from which Anonymous emerged, developed a culture of compulsory anonymity. In part, this was technological: unlike other message boards and social media, posting on 4chan requires no lasting profile, no consistent presence. But there was also a cultural element to this. Identifying oneself is strongly discouraged in the community. Fittingly, its major trolling weapon is doxing: revealing personal information to facilitate further harassment offline (prank calls, death threats, embarrassment in front of employers). As Whitney Phillips argues, online trolling often acts as a kind of media critique: by enforcing anonymity and rejecting fame or notoriety, Anons oppose the now-dominant dynamics of social media and personal branding which have colonized much of the web, and threaten their cherished subcultural practices, which are more adequately enshrined in formats such as image boards and IRC. In this way, Anonymous deploys technological means to thwart the dominant social practices of technology, a kind of wired Luddism. Such practices proliferate in the communities of the computer underground, which has been steeped in an omnipresent prelapsarian nostalgia since at least the "eternal September" of the early 1990s.

    HHWS's overarching narrative is the emergence of Anonymous out of the cesspits of 4chan and into political consciousness: trolling for justice instead of lulz. The compulsory anonymity of 4chan, in part, determined Anonymous's organizational form: Anonymous lacks formal membership, forming instead from entirely ad hoc affiliations. The brand itself can be selectively deployed or disavowed, leading to much argumentation and confusion. Coleman provides an insider perspective on how actions are launched: there is debate, occasionally a rough consensus, and then activity, though several times individuals opt to begin an action, dragging along a number of other participants with varying degrees of reluctance. Tactics are formalized in an experimental, impromptu way. In this, I recognized the way actions formed in the Occupy encampments. Anonymous, as Coleman shows, was an early Occupy Wall Street booster, and her analysis highlights the connection between the Occupy form and the networked forms of sociality exemplified by Anonymous. After reading Coleman's account, I am much more convinced of Anonymous's importance to the movement. Likewise, many criticisms of Occupy could also be levelled at Anonymous; Coleman cites Jo Freeman's "The Tyranny of Structurelessness" as one candidate.

    If Anonymous can be said to have a coherent political vision, it is one rooted in civil liberties, particularly freedom of speech and opposition to censorship efforts. Indeed, Coleman earns the trust of several hackers by her affiliation with the Electronic Frontier Foundation, nominally the digital equivalent to the ACLU (though some object to this parallel, due in part to EFF's strong ties to industry). Geek politics, from Anonymous to Wikileaks to the Pirate Bay, are a weaponized form of the mantra "information wants to be free." Anonymous's causes seem to fit these concerns perfectly: Scientology's litigious means of protecting its secrets provoked its wrath, as did the voluntary withdrawal of services to Wikileaks by PayPal and Mastercard, and the Bay Area Rapid Transit police's blacking out of cell phone signals to scuttle a protest.

    I’ve referred to Anonymous as geeks rather than hackers deliberately. Hackers — skilled individuals who can break into protected systems — participate in Anonymous, but many of the Anons pulled from 4chan are merely pranksters with above-average knowledge of the Internet and computing. This gets the organization in quite a bit of trouble when it engages in the political tactic of most interest to Coleman, the distributed denial of service (DDoS) attack. A DDoS floods a website with requests, overwhelming its servers. This technique has captured the imaginations of a number of scholars, including Coleman, with its resemblance to offline direct action like pickets and occupations. However, the AnonOps organizers falsely claimed that their DDoS app, the Low-Orbit Ion Cannon, ensured user anonymity, leading to a number of Anons facing serious criminal charges. Coleman curiously places the blame for this startling breach of operational security on journalists writing about AnonOps, rather on the organizers themselves. Furthermore, many DDoS attacks, including those launched by Anonymous, have relied on botnets, which draw power from hundreds of hijacked computers, bears little resemblance to any kind of democratic initiative. Of course, this isn’t to say that the harsh punishments meted out to Anons under the auspices of the Computer Fraud and Abuse Act are warranted, but that political tactics must be subjected to scrutiny.

    Coleman argues that Anonymous outgrew its narrow civil libertarian agenda with its involvement in the Arab Spring: “No longer was the group bound to Internet-y issues like censorship and file-sharing” (148). However, by her own account, it is opposition to censorship which truly animates the group. The #OpTunisia manifesto (Anonymous names its actions with the prefix “Op,” for operations, along with the ubiquitous Twitter-based hashtag) states plainly, “Any organization involved in censorship will be targeted” (ibid). Anons were especially animated by the complete shut-off of the Internet in Tunisia and Egypt, actions which shattered the notion of the Internet as a space controlled by geeks, not governments. Anonymous operations launched against corporations did not oppose capitalist exploitation but fought corporate restrictions on online conduct. These are laudable goals, but also limited ones, and are often compatible with Silicon Valley companies, as illustrated by the Google-friendly anti-SOPA/PIPA protests.

    Coleman is eager to distance Anonymous from the libertarian philosophies rife in geek and hacker circles, but its politics are rarely incompatible with such a perspective. The most recent Guy Fawkes Day protest I witnessed in Washington, D.C., full of mask-wearing Anons, displayed a number of slogans emerging from the Ron Paul camp, "End the Fed" prominent among them. There is no accounting for this in HHWS. It is clear that political differences among Anons exist, and that any analysis must be nuanced. But Coleman's description of this nuance ultimately doesn't delineate the political positions within the group and how they coalesce, opting to elide these differences in favor of a more protean focus on "transgression." In this way, she is able to provide a conceptual coherence for Anonymous, albeit at the expense of a detailed examination of the actual politics of its members. In the final analysis, "Anonymous became a generalized symbol for dissent, a medium to channel deep disenchantment… basically, with anything" (399).

    As political concerns overtake the lulz, Anonymous wavers as smaller militant hacker crews LulzSec and AntiSec take the fore, doxing white hat security executives, leaking documents, and defacing websites. This frustrates Coleman: "Anonymous had been exciting to me for a specific reason: it was the largest and most populist disruptive grassroots movement the Internet had, up to that time, fomented. But it felt, suddenly like AnonOps/Anonymous was slipping into a more familiar state of hacker-vanguardism" (302). Yet it is at this moment that Coleman offers a revealing account of hacker ideology: its alignment with the philosophy of Friedrich Nietzsche. From 4chan's trolls scoffing at morality and decency, to hackers disregarding technical and legal restraints on accessing information, to the collective's general rejection of any standard form of accountability, Anonymous truly seems to posit itself as beyond good and evil. Coleman herself confesses to being "overtly romantic" as she supplies alibis for the group's moral and strategic failures (it is, after all, incredibly difficult for an ethnographer to criticize her informants). But Nietzsche was a profoundly undemocratic thinker, whose avowed elitism should cast more of a disturbing shadow over the progressive potentials behind hacker groups than it does for Coleman, who embraces the ability of hackers to "cast off — at least momentarily — the shackles of normativity and attain greatness" (275). Coleman's previous work on free software programmers convincingly makes the case for a Nietzschean current running through hacker culture; I am considerably more skeptical than she is about the liberal democratic viewpoint this engenders.

    Ultimately, Coleman concludes that Anonymous cannot work as a substitute for existing organizations, but that its tactics should be taken up by other political formations: "The urgent question is how to promote cross-pollination" between Anonymous and more formalized structures (374). This may be warranted, but there needs to be a fuller accounting of the drawbacks to Anonymous. Because anyone can fly its flag, and because its actions are guided by talented and charismatic individuals working in secret, Anonymous is ripe for infiltration. Historically, hackers have proven to be easy for law enforcement and corporations to co-opt, not least because of the ferocious rivalries amongst hackers themselves. Tactics are also ambiguous. A DDoS can be used by anti-corporate activists, or by corporations against their rivals and enemies. Document dumps can ruin a diplomatic initiative, or a woman's social life. Public square occupations can be used to advocate for democracy, or as a platform for anti-democratic coups. Currently, a lot of the same geek energy behind Anonymous has been devoted to the misogynist vendetta GamerGate (in a Reddit AMA, Coleman adopted a diplomatic tone, referring to GamerGate as "a damn Gordian knot"). Without a steady sense of Anonymous's actual political commitments, outside of free speech, it is difficult to do much more than marvel at the novelty of their media presence (which wears thinner with each overwrought communique). With Hacker, Hoaxer, Whistleblower, Spy, Coleman has offered a readable account of recent hacker history, but I remain unconvinced of Anonymous's political potential.

    _____

    Gavin Mueller (@gavinsaywhat) is a PhD candidate in cultural studies at George Mason University, and an editor at Jacobin and Viewpoint Magazine.

    Back to the essay

  • Is the Network a Brain?

    Is the Network a Brain?

    a review of Andrew Pickering, The Cybernetic Brain: Sketches of Another Future (University of Chicago Press, 2011)
    by Jonathan Goodwin
    ~

    Evgeny Morozov’s recent New Yorker article about Project Cybersyn in Allende’s Chile caused some controversy when critics accused Morozov of not fully acknowledging his sources. One of those sources was sociologist of science Andrew Pickering’s The Cybernetic Brain. Morozov is quoted as finding Pickering’s book “awful.” It’s unlikely that Morozov meant “awful” in the sense of “awe-inspiring,” but that was closer to my reaction after reading Pickering’s 500+ pp. work on the British tradition in cybernetics. This tradition was less militarist and more artistic, among other qualities, in Pickering’s account, than is popularly understood. I found myself greatly intrigued—if not awed—by the alternate future that his subtitle and final chapter announces. Cybernetics is now a largely forgotten dead-end in science. And the British tradition that Pickering describes had relatively little influence within cybernetics itself. So what is important about it now, and what is the nature of this other future that Pickering sketches?

    The major figures of this book, which proceeds with overviews of their careers, views, and accomplishments, are Grey Walter, Ross Ashby, Gregory Bateson, R. D. Laing, Stafford Beer, and Gordon Pask. Stuart Kauffman's and Stephen Wolfram's work on complexity theory also makes an appearance.[1] Laing and Bateson's relevance may not be immediately clear. Pickering's interest in them derives from their extension of cybernetic ideas to the emerging technologies of the self in the 1960s. Both Bateson and Laing approached schizophrenia as an adaptation to the increasing "double-binds" of Western culture, and both looked to Eastern spiritual traditions and chemical methods of consciousness-alteration as potential treatments. The Bateson and Laing material makes the most direct reference to the connection between the cybernetic tradition and the "Californian Ideology" that animates much Silicon Valley thinking. Stewart Brand was influenced by Bateson's Steps to an Ecology of Mind (183), for example. Pickering identifies Northern California as the site where cybernetics migrated into the counterculture. It is arguable that this countercultural migration of a technology of control has become part of the ruling ideology of the present moment. Pickering recognizes this but seems to concede that the inherent topicality would detract from the focus of his work. It is a facet that would be of interest to the readers of this "Digital Studies" section of The b2 Review, however, and I will thus return to it at the end of this review.

    Pickering's path to Bateson and Laing originates with Grey Walter's and Ross Ashby's pursuit of cybernetic models of the brain. Computational models of the brain, though originally informed by cybernetic research, quickly replaced it in Pickering's account (62). He asks why computational models of the brain quickly gathered so much cultural interest. Rodney Brooks's robots, with their more embodied approach, Pickering argues, are in the tradition of Walter's tortoises and outside the symbolic tradition of artificial intelligence. I find it noteworthy that the neurological underpinnings of early cybernetics were so strongly influenced by behaviorism. Computationalist approaches, associated by Pickering with the establishment or "royal" science, were intellectually formed by an attack on behaviorism. Pickering even addresses this point obliquely, when he wonders why literary scholars had not noticed that the octopus in Gravity's Rainbow was apparently named "Grigori" in homage to Gregory Bateson (439n13).[2] I think one reason this hasn't been noticed is that it's much more likely that the name was random but for its Slavic form, which is clearly in the same pattern of references to Russian behaviorist psychology that informs Pynchon's novel. An offshoot of behaviorism inspiring a countercultural movement devoted to freedom and experimentation seems peculiar.

    One of Pickering’s key insights into this alternate tradition of cybernetics is that its science is performative. Rather than being as theory-laden as are the strictly computationalist approaches, cybernetic science often studied complex systems as assemblages whose interactions generated novel insights. Contrast this epistemology to what critics point to as the frequent invocation of the Duhem-Quine thesis by Noam Chomsky.[3] For Pickering, Ross Ashby’s version of cybernetics was a “supremely general and protean science” (147). As it developed, the brain lost its central place and cybernetics became a “freestanding general science” (147). As I mentioned, the chapter on Ashby closes with a consideration of the complexity science of Stuart Kauffman and Stephen Wolfram. That Kauffman and Wolfram largely have worked outside mainstream academic institutions is important for Pickering.[4] Christopher Alexander’s pattern language in architecture is a third example. Pickering mentions that Alexander’s concept was influential in some areas of computer science; the notion of “object-oriented programming” is sometimes considered to have been influenced by Alexander’s ideas.

    I mention this connection because many of the alternate traditions in cybernetics have become mainstream influences in contemporary digital culture. It is difficult to imagine Laing and Bateson’s alternative therapeutic ideas having any resonance in that culture, however. The doctrine that “selves are endlessly complex and endlessly explorable” (211) is sometimes proposed as something the internet facilitates, but the inevitable result of anonymity and pseudonymity in internet discourse is the enframing of hierarchical relations. I realize this point may sound controversial to those with a more benign or optimistic view of digital culture. That this countercultural strand of cybernetic practice has clear parallels with much digital libertarian rhetoric is hard to dispute. Again, Pickering is not concerned in the book with tracing these contemporary parallels. I mention them because of my own interest and this venue’s presumed interest in the subject.

    The progression that begins with some variety of conventional rationalism, extends through a career in cybernetics, and ends in some variety of mysticism is seen with almost all of the figures that Pickering profiles in The Cybernetic Brain. Perhaps the clearest example—and most fascinating in general—is that of Stafford Beer. Philip Mirowski's review of Pickering's book refers to Beer as "a slightly wackier Herbert Simon." Pickering enjoys recounting the adventures of the wizard of Prang, a work that Beer composed after he had moved to a remote Welsh village and renounced many of the world's pleasures. Beer's involvement in Project Cybersyn makes him perhaps the most well-known of the figures profiled in this book.[5] What perhaps fascinates Pickering more than anything else in Beer's work is the concept of viability. From early in his career, Beer advocated for upwardly viable management strategies. The firm would not need a brain, in his model: "it would react to changing circumstances; it would grow and evolve like an organism or species, all without any human intervention at all" (225). Mirowski's review compares Beer to Friedrich Hayek and accuses Pickering of refusing to engage with this seemingly obvious intellectual affinity.[6] Beer's intuitions in this area led him to experiment with biological and ecological computing; Pickering surmises that Douglas Adams's superintelligent mice derived from Beer's murine experiments in this area (241).

    In a review of a recent translation of Stanislaw Lem's Summa Technologiae, Pickering mentions that the idea that natural adaptive systems are like brains and can be utilized for intelligence amplification is the most "amazing idea in the history of cybernetics" (247).[7] Despite its association with the dreaded "synergy" (the original "syn" of Project Cybersyn), Beer's viable system model never became a management fad (256). Alexander Galloway has recently written here about the "reticular fallacy," the notion that de-centralized forms of organization are necessarily less repressive than are centralized or hierarchical forms. Beer's viable system model proposes an emergent and non-hierarchical management system that would increase "eudemony" (general well-being, another of Beer's not-quite-original neologisms [272]). Beer's turn towards Tantric mysticism seems somehow inevitable in Pickering's narrative of his career. The syntegric icosahedron, one of Beer's late baroque flourishes, reminded me quite a bit of a Paul Laffoley painting. Syntegration as a concept takes reticularity to a level of mysticism rarely achieved by digital utopians. Pickering concludes the chapter on Beer with a discussion of his influence on Brian Eno's ambient music.

    Paul Laffoley, “The Orgone Motor” (1981). Image source: paullaffoley.net.

    The discussion of Eno chides him for not reading Gordon Pask’s explicitly aesthetic cybernetics (308). Pask is the final cybernetician of Pickering’s study and perhaps the most eccentric. Pickering describes him as a model for Patrick Troughton’s Dr. Who (475n3), and his synaesthetic work in cybernetics with projects like the Musicolor are explicitly theatrical. A theatrical performance that directly incorporates audience feedback into the production, not just at the level of applause or hiss, but in audience interest in a particular character—a kind of choose-your-own adventure theater—was planned with Joan Littlewood (348-49). Pask’s work in interface design has been identified as an influence on hypertext (464n17). A great deal of the chapter on Pask involves his influence on British countercultural arts and architecture movements in the 1960s. Mirowski’s review shortly notes that even the anti-establishment Gordon Pask was funded by the Office of Naval Research for fifteen years (194). Mirowski also accuses Pickering of ignoring the computer as the emblematic cultural artifact of the cybernetic worldview (195). Pask is the strongest example offered of an alternate future of computation and social organization, but it is difficult to imagine his cybernetic present.

    The final chapter of Pickering’s book is entitled “Sketches of Another Future.” What is called “maker culture” combined with the “internet of things” might lead some prognosticators to imagine an increasingly cybernetic digital future. Cybernetic, that is, not in the sense of increasing what Mirowski refers to as the neoliberal “background noise of modern culture” but as a “challenge to the hegemony of modernity” (393). Before reading Pickering’s book, I would have regarded such a prediction with skepticism. I still do, but Pickering has argued that an alternate—and more optimistic—perspective is worth taking seriously.

    _____

    Jonathan Goodwin is Associate Professor of English at the University of Louisiana at Lafayette. He is working on a book about cultural representations of statistics and probability in the twentieth century.

    Back to the essay

    _____

    [1] Wolfram was born in England, though he has lived in the United States since the 1970s. Pickering taught at the University of Illinois while this book was being written, and he mentions having several interviews with Wolfram, whose company Wolfram Research is based in Champaign, Illinois (457n73). Pickering’s discussion of Wolfram’s A New Kind of Science is largely neutral; for a more skeptical view, see Cosma Shalizi’s review.

    [2] Bateson experimented with octopuses, as Pickering describes. Whether Pynchon knew about this, however, remains doubtful. Pickering’s note may also be somewhat facetious.

    [3] See the interview with George Lakoff in Ideology and Linguistic Theory: Noam Chomsky and the Deep Structure Debates, ed. Geoffrey J. Huck and John A. Goldsmith (New York: Routledge, 1995), p. 115. Lakoff's account of Chomsky's philosophical justification for his linguistic theories is tendentious; I mention it here because of the strong contrast, even in caricature, with the performative quality of the cybernetic research Pickering describes.

    [4] Though it is difficult to think of the Santa Fe Institute this way now.

    [5] For a detailed cultural history of Project Cybersyn, see Eden Medina, Cybernetic Revolutionaries: Technology and Politics in Allende’s Chile (MIT Press, 2011). Medina notes that Beer formed the word “algedonic” from two words meaning “pain” and “pleasure,” but the OED notes an example in the same sense from 1894. This citation does not rule out independent coinage, of course. Curiously enough, John Fowles uses the term in The Magus (1966), where it could have easily been derived from Beer.

    [6] Hayek’s name appears neither in the index nor the reference list. It does seem a curious omission in the broader intellectual context of cybernetics.

    [7] Though there is a reference to Lem’s fiction in an endnote (427n25), Summa Technologiae, a visionary exploration of cybernetic philosophy dating from the early 1960s, does not appear in Pickering’s work. A complete English translation only recently appeared, and I know of no evidence that Pickering’s principal figures were influenced by Lem at all. The book, as Pickering’s review acknowledges, is astonishingly prescient and highly recommended for anyone interested in the culture of cybernetics.

  • Network Pessimism

    Network Pessimism

    By Alexander R. Galloway
    ~

    I’ve been thinking a lot about pessimism recently. Eugene Thacker has been deep in this material for some time already. In fact he has a new, lengthy manuscript on pessimism called Infinite Resignation, which is a bit of departure from his other books in terms of tone and structure. I’ve read it and it’s excellent. Definitely “the worst” he’s ever written! Following the style of other treatises from the history of philosophical pessimism–Leopardi, Cioran, Schopenhauer, Kierkegaard, and others–the bulk of the book is written in short aphorisms. It’s very poetic language, and some sections are driven by his own memories and meditations, all in an attempt to plumb the deepest, darkest corners of the worst the universe has to offer.

    Meanwhile, the worst can't stay hidden. Pessimism has made it to prime time, to NPR, and even to right-wing media. Despite all this attention, Eugene seems to have little interest in showing his manuscript to publishers. A true pessimist! Not to worry, I'm sure the book will see the light of day eventually. Or should I say dead of night? When it does, the book is sure to sadden, discourage, and generally worsen the lives of Thacker fans everywhere.

    Interestingly pessimism also appears in a number of other authors and fields. I’m thinking, for instance, of critical race theory and the concept of Afro-pessimism. The work of Fred Moten and Frank B. Wilderson, III is particularly interesting in that regard. Likewise queer theory has often wrestled with pessimism, be it the “no future” debates around reproductive futurity, or what Anna Conlan has simply labeled “homo-pessimism,” that is, the way in which the “persistent association of homosexuality with death and oppression contributes to a negative stereotype of LGBTQ lives as unhappy and unhealthy.”[1]

    In his review of my new book, Andrew Culp made reference to how some of this material has influenced me. I’ll be posting more on Moten and these other themes in the future, but let me here describe, in very general terms, how the concept of pessimism might apply to contemporary digital media.

    *

    A previous post was devoted to the reticular fallacy, defined as the false assumption that the erosion of hierarchical organization leads to an erosion of organization as such. Here I’d like to address the related question of reticular pessimism or, more simply, network pessimism.

    Network pessimism relies on two basic assumptions: (1) “everything is a network”; (2) “the best response to networks is more networks.”

    Who says everything is a network? Everyone, it seems. In philosophy, Bruno Latour: ontology is a network. In literary studies, Franco Moretti: Hamlet is a network. In the military, Donald Rumsfeld: the battlefield is a network. (But so too our enemies are networks: the terror network.) Art, architecture, managerial literature, computer science, neuroscience, and many other fields–all have shifted prominently in recent years toward a network model. Most important, however, are the contemporary economy and the mode of production. Today's most advanced companies are essentially network companies. Google monetizes the shape of networks (in part via clustering algorithms). Facebook has rewritten subjectivity and social interaction along the lines of canalized and discretized network services. The list goes on and on. Thus I characterize the first assumption — "everything is a network" — as a kind of network fundamentalism. It claims that whatever exists in the world appears naturally in the form of a system, an ecology, an assemblage, in short, as a network.

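    Since clustering algorithms are invoked here as the mechanism by which the "shape" of a network is put to work, a minimal sketch in Python may help fix what that means: grouping the nodes of a toy graph purely by who links to whom (connected components over an adjacency list). It is an illustration of operating on graph shape alone, not Google's or anyone else's actual method; the graph and its labels are invented for the example.

        from collections import deque

        graph = {                      # hypothetical, made-up undirected graph
            "a": ["b"], "b": ["a", "c"], "c": ["b"],
            "x": ["y"], "y": ["x"],
            "lone": [],
        }

        def clusters(adjacency):
            # group nodes using nothing but the pattern of links between them
            seen, groups = set(), []
            for start in adjacency:
                if start in seen:
                    continue
                group, queue = set(), deque([start])
                while queue:
                    node = queue.popleft()
                    if node in seen:
                        continue
                    seen.add(node)
                    group.add(node)
                    queue.extend(adjacency[node])
                groups.append(group)
            return groups

        print(clusters(graph))   # e.g. [{'a', 'b', 'c'}, {'x', 'y'}, {'lone'}]

    The point of the sketch is how little it needs: no content, no history, no context, only connectivity.
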
    Ladies and gentlemen, behold the good news: postmodernism is definitively over! We have a new grand récit. As metanarrative, the network will guide us into a new Dark Age.

    If the first assumption expresses a positive dogma or creed, the second is more negative or nihilistic. The second assumption — that the best response to networks is more networks — is also evident in all manner of social and political life today. Eugene and I described this phenomenon at greater length in The Exploit, but consider a few different examples from contemporary debates… In military theory: network-centric warfare is the best response to terror networks. In Deleuzian philosophy: the rhizome is the best response to schizophrenic multiplicity. In autonomist Marxism: the multitude is the best response to empire. In the environmental movement: ecologies and systems are the best response to the systemic colonization of nature. In computer science: distributed architectures are the best response to bottlenecks in connectivity. In economics: heterogeneous "economies of scope" are the best response to the distributed nature of the "long tail."

    To be sure, there are many sites today where networks still confront power centers. The point is not to deny the continuing existence of massified, centralized sovereignty. But at the same time it’s important to contextualize such confrontations within a larger ideological structure, one that inoculates the network form and recasts it as the exclusive site of liberation, deviation, political maturation, complex thinking, and indeed the very living of life itself.

    Why label this a pessimism? For the same reasons that queer theory and critical race theory are grappling with pessimism: Is alterity a death sentence? Is this as good as it gets? Is this all there is? Can we imagine a parallel universe different from this one? (Although the pro-pessimism camp would likely state it in the reverse: We must destabilize and annihilate all normative descriptions of the “good.” This world isn’t good, and hooray for that!)

    So what’s the problem? Why should we be concerned about network pessimism? Let me state clearly, so there’s no misunderstanding: pessimism isn’t the problem here. Likewise, networks are not the problem. (Let no one label me “anti-network” or “anti-pessimism” — in fact I’m not even sure what either of those positions would mean.) The issue, as I see it, is that network pessimism deploys and sustains a specific dogma, confining both networks and pessimism to a single, narrow ideological position. It’s this narrow-mindedness that should be questioned.

    Specifically, I can see three basic problems with network pessimism: the problem of presentism, the problem of ideology, and the problem of the event.

    The problem of presentism refers to the way in which networks and network thinking are, by design, allergic to historicization. This exhibits itself in a number of different ways. Networks arrive on the scene at the proverbial “end of history” (and they do so precisely because they help end this history). Ecological and systems-oriented thinking, while admittedly always temporal by nature, gained popularity as a kind of solution to the problems of diachrony. Space and landscape take the place of time and history. As Fredric Jameson has noted, the “spatial turn” of postmodernity goes hand in hand with a denigration of the “temporal moment” of previous intellectual movements.

    Fritz Kahn, “Der Mensch als Industriepalast (Man as Industrial Palace)” (Stuttgart, 1926). Image source: NIH

    From Hegel’s history to Luhmann’s systems. From Einstein’s general relativity to Riemann’s complex surfaces. From phenomenology to assemblage theory. From the “time image” of cinema to the “database image” of the internet. From the old mantra always historicize to the new mantra always connect.

    During the age of clockwork, the universe was thought to be a huge mechanism, with the heavens rotating according to the music of the spheres. When the steam engine was the source of newfound power, the world suddenly became a dynamo of untold thermodynamic force. After full-fledged industrialization, the body became a factory. Technologies and infrastructures are seductive metaphors. So it’s no surprise (and no coincidence) that today, in the age of the network, a new template imprints itself on everything in sight. In other words, the assumption “everything is a network” gradually falls apart into a kind of tautology of presentism. “Everything right now is a network…because everything right now has been already defined as a network.”

    This leads to the problem of ideology. Again we’re faced with an existential challenge, because network technologies were largely invented as a non-ideological or extra-ideological structure. When writing Protocol, I interviewed some of the computer scientists responsible for the basic internet protocols, and most of them reported that they “have no ideology” when designing networks, that they are merely interested in “code that works” and “systems that are efficient and robust.” In sociology and philosophy of science, figures like Bruno Latour routinely describe their work as “post-critical,” merely focused on the direct mechanisms of network organization. Hence ideology as a problem to be forgotten or subsumed: networks are specifically conceived and designed as things that are both non-ideological in their conception (we just want to “get things done”) and post-ideological in their architecture (in that they acknowledge and co-opt the very terms of previous ideological debates: heterogeneity, difference, agency, and subject formation).

    The problem of the event indicates a crisis for the very concept of events themselves. Here the work of Alain Badiou is invaluable. Network architectures are the perfect instantiation of what Badiou derisively labels “democratic materialism,” that is, a world in which there are “only bodies and languages.” In Badiou’s terms, if networks are the natural state of the situation and there is no way to deviate from nature, then there is no event, and hence no possibility for truth. Networks appear, then, as the consummate “being without event.”

    What could be worse? If networks are designed to accommodate massive levels of contingency — as with the famous Robustness Principle — then they are also exceptionally adept at warding off “uncontrollable” change wherever it might arise. If everything is a network, then there’s no escape, there’s no possibility for the event.
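
    The Robustness Principle referenced above (often attributed to Jon Postel) is commonly glossed as “be conservative in what you send, be liberal in what you accept.” The sketch below is purely illustrative (the function names are my own, not drawn from any real protocol library), but it shows how an endpoint built on this principle silently absorbs malformed or unexpected input while emitting only canonical output:

    # A toy illustration of the Robustness Principle:
    # "be conservative in what you send, be liberal in what you accept."
    # The function names here are hypothetical, not part of any real library.

    def parse_header(raw: str) -> dict:
        """Liberal receiver: tolerate odd casing, stray whitespace, unknown fields."""
        fields = {}
        for line in raw.splitlines():
            if ":" not in line:
                continue  # skip malformed lines rather than raising an error
            key, _, value = line.partition(":")
            fields[key.strip().lower()] = value.strip()
        return fields

    def emit_header(fields: dict) -> str:
        """Conservative sender: canonical casing, ordering, and spacing."""
        return "\r\n".join(f"{k.title()}: {v}" for k, v in sorted(fields.items()))

    if __name__ == "__main__":
        messy = "  host :  example.org \nX-UNKNOWN:whatever\nconnection:close"
        print(emit_header(parse_header(messy)))  # deviation is normalized, not rejected

    In this sense the accommodation of contingency is quite literal: deviation is not registered as an exceptional event but quietly folded back into the expected form.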

    Jameson writes as much in The Seeds of Time when he says that it is easier to imagine the end of the earth and the end of nature than it is to imagine the ends of capitalism. Network pessimism, in other words, is really a kind of network defeatism in that it makes networks the alpha and omega of our world. It’s easier to imagine the end of that world than it is to discard the network metaphor and imagine a kind of non-world in which networks are no longer dominant.

    In sum, we shouldn’t give in to network pessimism. We shouldn’t subscribe to the strong claim that everything is a network. (Nor should we subscribe to the softer claim, that networks are merely the most common, popular, or natural architecture for today’s world.) Further, we shouldn’t think that networks are the best response to networks. Instead we must ask the hard questions. What is the political fate of networks? Did heterogeneity and systematicity survive the Twentieth Century? If so, at what cost? What would a non-net look like? And does thinking have a future without the network as guide?

    _____

    Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2006); Gaming: Essays in Algorithmic Culture (University of Minnesota, 2006); The Interface Effect (Polity, 2012); and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here in 2014. Galloway has recently been writing brief notes on media and digital culture and theory at his blog, on which this post first appeared.

    Back to the essay
    _____

    Notes

    [1] Anna Conlan, “Representing Possibility: Mourning, Memorial, and Queer Museology,” in Gender, Sexuality and Museums, ed. Amy K. Levin (London: Routledge, 2010), 253-263.

  • Flat Theory

    Flat Theory

    By David M. Berry
    ~

    The world is flat.[1] Or perhaps better, the world is increasingly “layers.” Certainly the augmediated imaginaries of the major technology companies are now structured around a post-retina vision of mediation made possible and informed by the digital transformations ushered in by mobile technologies – whether smartphones, wearables, beacons or nearables – an internet of places and things. These imaginaries provide a sense of place, as well as a sense of management, of the complex real-time streams of information and data broken into shards and fragments of narrative, visual culture, social media and messaging. Turned into software, they reorder and re-present information, decisions and judgment, amplifying the sense and senses of (neoliberal) individuality whilst reconfiguring what it means to be a node in the network of post-digital capitalism. These new imaginaries serve as abstractions of abstractions, ideologies of ideologies, a prosthesis to create a sense of coherence and intelligibility in highly particulate computational capitalism (Berry 2014). To explore the experimentation of the programming industries in relation to this, it is useful to examine the design thinking and material abstractions that are becoming hegemonic at the level of the interface.

    Two new competing computational interface paradigms are now deployed in the latest versions of Apple’s and Google’s operating systems, but more notably as regulatory structures to guide the design and strategy related to corporate policy. The first is “flat design,” which has been introduced by Apple through iOS 8 and OS X Yosemite as a refresh of the aging operating systems’ human-computer interface guidelines, essentially stripping the operating system of historical baggage related to techniques of design that disguised the limitations of a previous generation of technology, both in terms of screen and processor capacity. It is important to note, however, that Apple avoids talking about “flat design” as its design methodology, preferring to talk through its platforms’ specificity, that is, about iOS’s design or OS X’s design. The second is “material design,” which was introduced by Google into its Android L, now Lollipop, operating system and which also sought to bring some sense of coherence to a multiplicity of Android devices, interfaces, OEMs and design strategies. More generally, “flat design” is “the term given to the style of design in which elements lose any type of stylistic characters that make them appear as though they lift off the page” (Turner 2014). As Apple argues, one should “reconsider visual indicators of physicality and realism” and think of the user interface as “play[ing] a supporting role,” that is, techniques of mediation through the user interface should aim to provide a new kind of computational realism that presents “content” as ontologically prior to, or separate from, its container in the interface (Apple 2014). This is in contrast to “rich design,” which has been described as “adding design ornaments such as bevels, reflections, drop shadows, and gradients” (Turner 2014).

    I want to explore these two main paradigms – and to a lesser extent the flat-design methodology represented in Windows 7 and the since-renamed Metro interface – through the notion of a comprehensive attempt by both Apple and Google to produce a rich and diverse umwelt, or ecology, linked through what Apple calls “aesthetic integrity” (Apple 2014). This is both a response to their growing landscape of devices, platforms, systems, apps and policies, and an attempt to provide some sense of operational strategy in relation to computational imaginaries. Essentially, both approaches share an axiomatic approach to conceptualizing the building of a system of thought, in other words, a primitivist predisposition which draws from both a neo-Euclidean model of geons (for Apple) and a notion of intrinsic value or neo-materialist formulations of essential characteristics (for Google). That is, they encapsulate a version of what I am calling here flat theory. Both of these companies are trying to deal with the problematic of multiplicities in computation, and the requirement that multiple data streams, notifications and practices have to be combined and managed within the limited geography of the screen. In other words, both approaches attempt to create what we might call aggregate interfaces by combining techniques of layout, montage and collage onto computational surfaces (Berry 2014: 70).

    The “flat turn” has not happened in a vacuum, however, and is the result of a new generation of computational hardware, smart silicon design and retina screen technologies. This was driven in large part by the mobile device revolution, which has transformed not only the taken-for-granted assumptions of historical computer interface design paradigms (e.g. WIMP) but also the subject position of the user, particularly structured through the Xerox/Apple notion of single-click functional design of the interface. Indeed, one of the striking features of the new paradigm of flat design is that it is a design philosophy about multiplicity and multi-event. The flat turn is therefore about modulation, not about enclosure as such; indeed, it is a truly processual form that constantly shifts and changes, and in many ways acts as a signpost for the future interfaces of real-time algorithmic and adaptive surfaces and experiences. The structure of control for flat design interfaces follows that of the control society: it is “short-term and [with] rapid rates of turnover, but also continuous and without limit” (Deleuze 1992). To paraphrase Deleuze: Humans are no longer in enclosures, certainly, but everywhere humans are in layers.

    Apple uses a series of concepts to link its notion of flat design, including aesthetic integrity, consistency, direct manipulation, feedback, metaphors, and user control (Apple 2014). The haptic experience of this new flat user interface has been described as building on the experience of “touching glass” to develop the “first post-Retina (Display) UI (user interface)” (Cava 2013). This is the notion of layered transparency, or better, layers of glass upon which the interface elements are painted through a logical internal structure of Z-axis layers. This laminate structure enables meaning to be conveyed through the organization of the Z-axis, both in terms of content, but also to place it within a process or the user interface system itself.
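
    As a purely schematic illustration (this is not Apple’s rendering model or API; the layer names and values below are invented for the example), the laminate logic can be pictured as back-to-front compositing of translucent sheets ordered along a Z-axis:

    # A hypothetical sketch of "layers of glass": translucent sheets ordered
    # along a Z-axis, composited back-to-front over an opaque backdrop.
    # Names and numbers are illustrative only.

    from dataclasses import dataclass

    @dataclass
    class Layer:
        name: str
        z: int            # position on the Z-axis (higher = nearer the viewer)
        color: tuple      # (r, g, b) components in the range 0..1
        alpha: float      # translucency of the sheet

    def composite(layers):
        """Back-to-front 'over' compositing, starting from a black backdrop."""
        r = g = b = 0.0
        for layer in sorted(layers, key=lambda l: l.z):  # farthest sheet first
            lr, lg, lb = layer.color
            a = layer.alpha
            r, g, b = (lr * a + r * (1 - a),
                       lg * a + g * (1 - a),
                       lb * a + b * (1 - a))
        return (round(r, 3), round(g, 3), round(b, 3))

    if __name__ == "__main__":
        stack = [
            Layer("wallpaper", z=0, color=(0.1, 0.1, 0.4), alpha=1.0),
            Layer("content sheet", z=1, color=(1.0, 1.0, 1.0), alpha=0.6),
            Layer("control panel", z=2, color=(0.9, 0.9, 0.9), alpha=0.3),
        ]
        print(composite(stack))

    The point of the sketch is simply that meaning is carried by position on the Z-axis: the sheet nearer the viewer tints, rather than replaces, what lies beneath it.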

    Google, similarly, has reorganized its computational imaginary around a flattened, layered paradigm of representation through the notion of material design. Matias Duarte, Google’s Vice President of Design and a Chilean computer interface designer, declared that this approach uses the notion that it “is a sufficiently advanced form of paper as to be indistinguishable from magic” (Bohn 2014). But this is magic which has constraints and affordances built into it: “if there were no constraints, it’s not design — it’s art,” Google claims (see Interactive Material Design) (Bohn 2014). Indeed, Google argues that the “material metaphor is the unifying theory of a rationalized space and a system of motion”, further arguing:

    The fundamentals of light, surface, and movement are key to conveying how objects move, interact, and exist in space and in relation to each other. Realistic lighting shows seams, divides space, and indicates moving parts… Motion respects and reinforces the user as the prime mover… [and together] They create hierarchy, meaning, and focus (Google 2014).

    This notion of materiality is a weird materiality in as much as Google “steadfastly refuse to name the new fictional material, a decision that simultaneously gives them more flexibility and adds a level of metaphysical mysticism to the substance. That’s also important because while this material follows some physical rules, it doesn’t create the “trap” of skeuomorphism. The material isn’t a one-to-one imitation of physical paper, but instead it’s ‘magical’” (Bohn 2014). Google emphasises this connection, arguing that “in material design, every pixel drawn by an application resides on a sheet of paper. Paper has a flat background color and can be sized to serve a variety of purposes. A typical layout is composed of multiple sheets of paper” (Google Layout, 2014). The stress on material affordances, paper for Google and glass for Apple are crucial to understanding their respective stances in relation to flat design philosophy.[2]

    • Glass (Apple): Translucency, transparency, opaqueness, limpidity and pellucidity.
    • Paper (Google): Opaque, cards, slides, surfaces, tangibility, texture, lighted, casting shadows.
    Paradigmatic Substances for Materiality

    In contrast to the layers of glass that inform the logics of transparency, opaqueness and translucency of Apple’s flat design, Google uses the notion of remediated “paper” as a digital material, that is, this “material environment is a 3D space, which means all objects have x, y, and z dimensions. The z-axis is perpendicularly aligned to the plane of the display, with the positive z-axis extending towards the viewer. Every sheet of material occupies a single position along the z-axis and has a standard 1dp thickness” (Google 2014). One might think, then, of Apple as painting on layers of glass, and of Google as placing thin paper objects (material) upon background paper. However, a key difference lies in the use of light and shadow in Google’s notion, which enables the light source, located in a similar position to the user of the interface, to cast shadows of the material objects onto the objects and sheets of paper that lie beneath them (see Jitkoff 2014). Nonetheless, a laminate structure is key to the representational grammar that constitutes both of these platforms.
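
    Again as a schematic sketch rather than a description of Google’s actual framework (the sheet names, constants and toy shadow model below are invented), the material grammar can be read as sheets holding single positions on the z-axis, with a notional key light casting shadows whose size and softness grow with relative elevation:

    # A hypothetical sketch of "material" sheets: each occupies one z position,
    # and a notional light source near the viewer casts a shadow that scales
    # with the sheet's elevation above whatever lies beneath it.
    # The constants are illustrative only.

    from dataclasses import dataclass

    @dataclass
    class Sheet:
        name: str
        elevation_dp: float  # height above the background plane, in dp

    def shadow_for(sheet: Sheet, below_dp: float = 0.0) -> dict:
        """Toy model: shadow offset, blur and opacity scale with relative elevation."""
        rise = max(sheet.elevation_dp - below_dp, 0.0)
        return {
            "offset_y_dp": round(0.5 * rise, 2),   # key light sits slightly above the viewer
            "blur_dp": round(1.5 * rise, 2),       # higher sheets cast softer shadows
            "opacity": round(min(0.3 + 0.02 * rise, 0.5), 2),
        }

    if __name__ == "__main__":
        card = Sheet("card", elevation_dp=2)
        dialog = Sheet("dialog", elevation_dp=24)
        print("card:", shadow_for(card))
        print("dialog:", shadow_for(dialog))  # a raised dialog casts a larger, softer shadow

    The shadow, in other words, is what makes the laminate legible: relative position on the z-axis is communicated to the user as depth cast in light.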

    Armin Hofmann, head of the graphic design department at the Schule für Gestaltung Basel (Basel School of Design), was instrumental in developing the graphic design style known as the Swiss Style. Designs from 1958 and 1959.

    Interestingly, both design strategies emerge from an engagement with and reconfiguration of the principles of design that draw from the Swiss style (sometimes called the International Typographic Style) (Ashghar 2014, Turner 2014).[3] This approach emerged in the 1940s, and

    mainly focused on the use of grids, sans-serif typography, and clean hierarchy of content and layout. During the 40’s and 50’s, Swiss design often included a combination of a very large photograph with simple and minimal typography (Turner 2014).

    The design grammar of the Swiss style has been combined with minimalism and the principle of “responsive design”, that is, that the materiality and specificity of the device should be responsive to the interface and context being displayed. Minimalism is a “term used in the 20th century, in particular from the 1960s, to describe a style characterized by an impersonal austerity, plain geometric configurations and industrially processed materials” (MoMA 2014).

    Robert Morris: Untitled (Scatter Piece), 1968-69, felt, steel, lead, zinc, copper, aluminum, brass, dimensions variable; at Leo Castelli Gallery, New York. Photo Genevieve Hanson. All works © 2010 Robert Morris/Artists Rights Society (ARS), New York.

    Robert Morris, one of the principal artists of Minimalism and author of the influential Notes on Sculpture, used “simple, regular and irregular polyhedrons. Influenced by theories in psychology and phenomenology”, which he argued “established in the mind of the beholder ‘strong gestalt sensation’, whereby form and shape could be grasped intuitively” (MoMA 2014).[4]

    The implications of these two competing world-views are far-reaching, in that much of the world’s initial contact, or touch points, for data services, real-time streams and computational power is increasingly through the platforms controlled by these two companies. However, they are also deeply influential across the programming industries, and we see alternatives and multiple reconfigurations in relation to the challenge raised by the “flattened” design paradigms. That is, they both represent, if only in potentia, a situation of a power relation and through this an ideological veneer on computation more generally. Further, with the proliferation of computational devices – and the screenic imaginary associated with them in the contemporary computational condition – there appears a new logic which lies behind, justifies and legitimates these design methodologies.

    It seems to me that these new flat design philosophies, in the broad sense, produce an order in precepts and concepts that gives meaning and purpose not only to interactions with computational platforms, but also more widely to everyday life. Flat design and material design are competing philosophies that offer alternative patterns of both creation and interpretation, which are meant to have implications not only for interface design, but more broadly for the ordering of concepts and ideas, and for the practices and experience of computational technologies broadly conceived. Another way to put this could be to think about these moves as a computational founding, the generation of, or argument for, an axial framework for building, reconfiguration and preservation.

    Indeed, flat design provides, and more importantly serves as, a translational or metaphorical heuristic for re-presenting the computational, but it also teaches consumers and users how to use and manipulate new complex computational systems and stacks. In other words, in a striking visual technique flat design communicates the vertical structure of the computational stack, on which the Stack corporations are themselves constituted. It also begins to move beyond the specificity of the device as the privileged site of a computational interface interaction from beginning to end. Interface techniques are abstracted away from the specificity of the device, for example through Apple’s “handoff” continuity framework, which also potentially changes reading and writing practices in interesting ways and opens new use-cases for wearables and nearables.

    These new interface paradigms, introduced by the flat turn, have very interesting possibilities for the application of interface criticism, through unpacking and exploring the major trends and practices of the Stacks, that is, the major technology companies. Further than this, I think the notion of layers is instrumental in mediating the experience of an increasingly algorithmic society (e.g. think dashboards, personal information systems, quantified self, etc.), and as such provides an interpretative frame for a world of computational patterns but also a constituting grammar for building these systems in the first place. The notion of the postdigital may also be a useful way into thinking about the question of the link between art, computation and design given here (see Berry and Dieter, forthcoming), as well as the importance of notions of materiality for the conceptualization deployed by designers working within both the flat design and material design paradigms – whether of paper, glass, or some other “material” substance.[5]
    _____

    David M. Berry is Reader in the School of Media, Film and Music at the University of Sussex. He writes widely on computation and the digital and blogs at Stunlaw. He is the author of Critical Theory and the Digital, The Philosophy of Software: Code and Mediation in the Digital Age, Copy, Rip, Burn: The Politics of Copyleft and Open Source, editor of Understanding Digital Humanities and co-editor of Postdigital Aesthetics: Art, Computation And Design. He is also a Director of the Sussex Humanities Lab.

    Back to the essay
    _____

    Notes

    [1] Many thanks to Michael Dieter and Søren Pold for the discussion which inspired this post.

    [2] The choice of paper and glass as the founding metaphors for the flat design philosophies of Google and Apple raises interesting questions for the way in which these companies articulate the remediation of other media forms, such as books, magazines, newspapers, music, television and film, etc. Indeed, the very idea of “publication” and the material carrier for the notion of publication is informed by this materiality, even if only as a notional affordance given by this conceptualization. It would be interesting to see how the book is remediated through each of the design philosophies that inform both companies, for example.

    [3] One is struck by the posters produced in the Swiss style which date to the 1950s and 60s but which today remind one of the mobile device screens of the 21st Century.

    [4] There are also some interesting links to be explored between the Superflat style and postmodern art movement, founded by the artist Takashi Murakami, which is influenced by manga and anime, both in terms of the aesthetic but also in relation to the cultural moment in which “flatness” is linked to “shallow emptiness.”

    [5] There is some interesting work to be done in thinking about the non-visual aspects of flat theory, such as the increasing use of APIs (for example, RESTful APIs), but also sound interfaces that use “flat” sound to indicate spatiality in terms of interface or interaction design. There are also interesting implications for the design thinking implicit in the Apple Watch, and the Virtual Reality and Augmented Reality platforms of Oculus Rift, Microsoft HoloLens, Meta and Magic Leap.

    Bibliography