
  • Audrey Watters – The Best Way to Predict the Future is to Issue a Press Release

    By Audrey Watters

    ~

    This talk was delivered at Virginia Commonwealth University today as part of a seminar co-sponsored by the Departments of English and Sociology and the Media, Art, and Text PhD Program. The slides are also available here.

    Thank you very much for inviting me here to speak today. I’m particularly pleased to be speaking to those from Sociology, those from English, and those from Media, Art, and Text, and I hope my talk can walk the line between and among disciplines and methods – or piss everyone off in equal measure. Either way.

    This is the last public talk I’ll deliver in 2016, and I confess I am relieved (I am exhausted!) as well as honored to be here. But when I finish this talk, my work for the year isn’t done. No rest for the wicked – ever, but particularly in the freelance economy.

    As I have done for the past six years, I will spend the rest of November and December publishing my review of what I deem the “Top Ed-Tech Trends” of the year. It’s an intense research project that usually tops out at about 75,000 words, written over the course of four to six weeks. I pick ten trends and themes in order to look closely at the recent past, the near-term history of education technology. Because of the amount of information that is published about ed-tech – the amount of information, its irrelevance, its incoherence, its lack of context – it can be quite challenging to keep up with what is really happening in ed-tech. And just as importantly, what is not happening.

    So that’s what I try to do. And I’ll boast right here – no shame in that – no one else does as in-depth or thorough a job as I do, certainly no one who is entirely independent from venture capital, corporate or institutional backing, or philanthropic funding. (Of course, if you look for those education technology writers who are independent from venture capital, corporate or institutional backing, or philanthropic funding, there is pretty much only me.)

    The stories that I write about the “Top Ed-Tech Trends” are the antithesis of most articles you’ll see about education technology that invoke “top” and “trends.” For me, still framing my work that way – “top trends” – is a purposeful rhetorical move to shed light, to subvert, to offer a sly commentary of sorts on the shallowness of what passes as journalism, criticism, analysis. I’m not interested in making quickly thrown-together lists and bullet points. I’m not interested in publishing clickbait. I am interested nevertheless in the stories – shallow or sweeping – that we tell and spread about technology and education technology, about the future of education technology, about our technological future.

    Let me be clear, I am not a futurist – even though I’m often described as “ed-tech’s Cassandra.” The tagline of my website is “the history of the future of education,” and I’m much more interested in chronicling the predictions that others make, have made, about the future of education than I am in writing predictions of my own.

    One of my favorites: “Books will soon be obsolete in schools,” Thomas Edison said in 1913. Any day now. Any day now.

    Here are a couple of more recent predictions:

    “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.” – that’s Sebastian Thrun, best known perhaps for his work at Google on the self-driving car and as a co-founder of the MOOC (massive open online course) startup Udacity. The quotation is from 2012.

    And from 2013, by Harvard Business School professor, author of the book The Innovator’s Dilemma, and popularizer of the phrase “disruptive innovation,” Clayton Christensen: “In fifteen years from now, half of US universities may be in bankruptcy. In the end I’m excited to see that happen. So pray for Harvard Business School if you wouldn’t mind.”

    Pray for Harvard Business School. No. I don’t think so.

    Both of these predictions are fantasy. Nightmarish, yes. But fantasy. Fantasy about a future of education. It’s a powerful story, but not a prediction made based on data or modeling or quantitative research into the growing (or shrinking) higher education sector. Indeed, according to the latest statistics from the Department of Education – now granted, this is from the 2012–2013 academic year – there are 4726 degree-granting postsecondary institutions in the United States. A 46% increase since 1980. There are, according to another source (non-governmental and less reliable, I think), over 25,000 universities in the world. This number is increasing year-over-year as well. So to predict that the vast vast majority of these schools (save Harvard, of course) will go away in the next decade or so or that they’ll be bankrupt or replaced by Silicon Valley’s version of online training is simply wishful thinking – dangerous, wishful thinking from two prominent figures who will benefit greatly if this particular fantasy comes true (and not just because they’ll get to claim that they predicted this future).

    Here’s my “take home” point: if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

    Fantasy. Fortune-telling. Or as capitalism prefers to call it “market research.”

    “Market research” involves fantastic stories of future markets. These predictions are often accompanied with a press release touting the size that this or that market will soon grow to – how many billions of dollars schools will spend on computers by 2020, how many billions of dollars of virtual reality gear schools will buy by 2025, how many billions of dollars schools will spend on robot tutors by 2030, how many billions of dollars companies will spend on online training by 2035, how big the coding bootcamp market will be by 2040, and so on. The markets, according to the press releases, are always growing. Fantasy.

    In 2011, the analyst firm Gartner predicted that annual tablet shipments would exceed 300 million units by 2015. Half of those, the firm said, would be iPads. IDC estimates that the total number of shipments in 2015 was actually around 207 million units. Apple sold just 50 million iPads. That’s not even the best worst Gartner prediction. In October of 2006, Gartner said that Apple’s “best bet for long-term success is to quit the hardware business and license the Mac to Dell.” Less than three months later, Apple introduced the iPhone. The very next day, Apple shares hit $97.80, an all-time high for the company. By 2012 – yes, thanks to its hardware business – Apple’s stock had risen to the point that the company was worth a record-breaking $624 billion.

    But somehow, folks – including many, many in education and education technology – still pay attention to Gartner. They still pay Gartner a lot of money for consulting and forecasting services.

    People find comfort in these predictions, in these fantasies. Why?

    Gartner is perhaps best known for its “Hype Cycle,” a proprietary graphic presentation that claims to show how emerging technologies will be adopted.

    According to Gartner, technologies go through five stages: first, there is a “technology trigger.” As the new technology emerges, a lot of attention is paid to it in the press. Eventually it reaches the second stage: the “peak of inflated expectations.” So many promises have been made about this technological breakthrough. Then, the third stage: the “trough of disillusionment.” Interest wanes. Experiments fail. Promises are broken. As the technology matures, the hype picks up again, more slowly – this is the “slope of enlightenment.” Eventually the new technology becomes mainstream – the “plateau of productivity.”

    It’s not that hard to identify significant problems with the Hype Cycle, not least of which is that it’s not a cycle. It’s a curve. It’s not a particularly scientific model. It demands that technologies always move forward along it.

    Gartner says its methodology is proprietary – which is code for “hidden from scrutiny.” Gartner says, rather vaguely, that it relies on scenarios and surveys and pattern recognition to place technologies on the line. But most of the time when Gartner uses the word “methodology,” it is trying to signify “science,” and what it really means is “expensive reports you should buy to help you make better business decisions.”

    Can it really help you make better business decisions? It’s just a curve with some technologies plotted along it. The Hype Cycle doesn’t help explain why technologies move from one stage to another. It doesn’t account for technological precursors – new technologies rarely appear out of nowhere – or political or social changes that might prompt or preclude adoption. And in the end it is simply too optimistic, unreasonably so, I’d argue. No matter how dumb or useless a new technology is, according to the Hype Cycle at least, it will eventually become widely adopted. Where would you plot the Segway, for example? (In 2008, ever hopeful, Gartner insisted that “This thing certainly isn’t dead and maybe it will yet blossom.” Maybe it will, Gartner. Maybe it will.)

    And maybe this gets to the heart as to why I’m not a futurist. I don’t share this belief in an increasingly technological future; I don’t believe that more technology means the world gets “more better.” I don’t believe that more technology means that education gets “more better.”

    Every year since 2004, the New Media Consortium, a non-profit organization that advocates for new media and new technologies in education, has issued its own forecasting report, the Horizon Report, naming a handful of technologies that, as the name suggests, it contends are “on the horizon.”

    Unlike Gartner, the New Media Consortium is fairly transparent about how this process works. The organization invites various “experts” to participate in the advisory board that, throughout the course of each year, works on assembling its list of emerging technologies. The process relies on the Delphi method, whittling down a long list of trends and technologies by a process of ranking and voting until six key trends, six emerging technologies remain.

    Disclosure/disclaimer: I am a folklorist by training. The last time I took a class on “methods” was, like, 1998. And admittedly I never learned about the Delphi method – what the New Media Consortium uses for this research project – until I became a scholar of education technology looking into the Horizon Report. As a folklorist, of course, I did catch the reference to the Oracle of Delphi.

    Like so much of computer technology, the roots of the Delphi method are in the military, developed during the Cold War to forecast technological developments that the military might use and that the military might have to respond to. The military wanted better predictive capabilities. But – and here’s the catch – it wanted to identify technology trends without being caught up in theory. It wanted to identify technology trends without developing models. How do you do that? You gather experts. You get those experts to consensus.

    So here is the consensus from the past twelve years of the Horizon Report for higher education. These are the technologies it has identified that are between one and five years from mainstream adoption:

    It’s pretty easy, as with the Gartner Hype Cycle, to look at these predictions and note that they are almost all wrong in some way or another.

    Some are wrong because, say, the timeline is a bit off. The Horizon Report said in 2010 that “open content” was less than a year away from widespread adoption. I think we’re still inching towards that goal – admittedly “open textbooks” have seen a big push at the federal and at some state levels in the last year or so.

    Some of these predictions are just plain wrong. Virtual worlds in 2007, for example.

    And some are wrong because, to borrow a phrase from the theoretical physicist Wolfgang Pauli, they’re “not even wrong.” Take “collaborative learning,” for example, which this year’s K–12 report posits as a mid-term trend. Like, how would you argue against “collaborative learning” as occurring – now or some day – in classrooms? As a prediction about the future, it is not even wrong.

    But wrong or right – that’s not really the problem. Or rather, it’s not the only problem even if it is the easiest critique to make. I’m not terribly concerned about the accuracy of the predictions about the future of education technology that the Horizon Report has made over the last decade. But I do wonder how these stories influence decision-making across campuses.

    What might these predictions – this history of the future – tell us about the wishful thinking surrounding education technology and about the direction that the people the New Media Consortium views as “experts” want the future to take? What can we learn about the future by looking at the history of our imagining about education’s future? What role does powerful ed-tech storytelling (also known as marketing) play in shaping that future? Because remember: to predict the future is to control it – to attempt to control the story, to attempt to control what comes to pass.

    It’s both convenient and troubling, then, that these forward-looking reports act as though they have no history of their own; they purposefully minimize or erase their own past. Each year – and I think this is what irks me most – the NMC fails to look back at what it had predicted just the year before. It never revisits older predictions. It never mentions that they even exist. Gartner too removes technologies from the Hype Cycle each year with no explanation for what happened, no explanation as to why trends suddenly appear and disappear and reappear. These reports only look forward, with no history to ground their direction in.

    I understand why these sorts of reports exist, I do. I recognize that they are rhetorically useful to certain people in certain positions making certain claims about “what to do” in the future. You can write in a proposal that, “According to Gartner… blah blah blah.” Or “The Horizon Report indicates that this is one of the most important trends in coming years, and that is why we need to commit significant resources – money and staff – to this initiative.” But then, let’s be honest, these reports aren’t about forecasting a future. They’re about justifying expenditures.

    “The best way to predict the future is to invent it,” computer scientist Alan Kay once famously said. I’d wager that the easiest way is just to make stuff up and issue a press release. I mean, really. You don’t even need the pretense of a methodology. Nobody is going to remember what you predicted. Nobody is going to remember if your prediction was right or wrong. Nobody – certainly not the technology press, which is often painfully unaware of any history, near-term or long ago – is going to call you to task. This is particularly true if you make your prediction vague – like “within our lifetime” – or set your target date just far enough in the future – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Let’s consider: is there something about the field of computer science in particular – and its ideological underpinnings – that makes it more prone to encourage, embrace, espouse these sorts of predictions? Is there something about Americans’ faith in science and technology, about our belief in technological progress as a signal of socio-economic or political progress, that makes us more susceptible to take these predictions at face value? Is there something about our fears and uncertainties – and not just now, days before this Presidential Election where we are obsessed with polls, refreshing Nate Silver’s website obsessively – that makes us prone to seek comfort, reassurance, certainty from those who can claim that they know what the future will hold?

    “Software is eating the world,” investor Marc Andreessen pronounced in a Wall Street Journal op-ed in 2011. “Over the next 10 years,” he wrote, “I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.” Buy stock in technology companies was really the underlying message of Andreessen’s op-ed; this isn’t another tech bubble, he wanted to reassure investors. But many in Silicon Valley have interpreted this pronouncement – “software is eating the world” – as an affirmation and an inevitability. I hear it repeated all the time – “software is eating the world” – as though, once again, repeating things makes them true or makes them profound.

    If we believe that, indeed, “software is eating the world,” that we are living in a moment of extraordinary technological change, that we must – according to Gartner or the Horizon Report – be ever-vigilant about emerging technologies, that these technologies are contributing to uncertainty, to disruption, then it seems likely that we will demand a change in turn to our educational institutions (to lots of institutions, but let’s just focus on education). This is why this sort of forecasting is so important for us to scrutinize – to do so quantitatively and qualitatively, to look at methods and at theory, to ask who’s telling the story and who’s spreading the story, to listen for counter-narratives.

    This technological change, according to some of the most popular stories, is happening faster than ever before. It is creating an unprecedented explosion in the production of information. New information technologies, so we’re told, must therefore change how we learn – change what we need to know, how we know, how we create and share knowledge. Because of the pace of change and the scale of change and the locus of change (that is, “Silicon Valley” not “The Ivory Tower”) – again, so we’re told – our institutions, our public institutions can no longer keep up. These institutions will soon be outmoded, irrelevant. Again – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    These forecasting reports, these predictions about the future make themselves necessary through this powerful refrain, insisting that technological change is creating so much uncertainty that decision-makers need to be ever vigilant, ever attentive to new products.

    As Neil Postman and others have cautioned us, technologies tend to become mythic – unassailable, God-given, natural, irrefutable, absolute. So it is predicted. So it is written. Techno-scripture, to which we hand over a certain level of control – to the technologies themselves, sure, but just as importantly to the industries and the ideologies behind them. Take, for example, the founding editor of the technology trade magazine Wired, Kevin Kelly. His 2010 book was called What Technology Wants, as though technology is a living being with desires and drives; the title of his 2016 book, The Inevitable. We humans, in this framework, have no choice. The future – a certain flavor of technological future – is pre-ordained. Inevitable.

    I’ll repeat: I am not a futurist. I don’t make predictions. But I can look at the past and at the present in order to dissect stories about the future.

    So is the pace of technological change accelerating? Is society adopting technologies faster than it’s ever done before? Perhaps it feels like it. It certainly makes for a good headline, a good stump speech, a good keynote, a good marketing claim, a good myth. But the claim starts to fall apart under scrutiny.

    This graph comes from an article in the online publication Vox that includes a couple of those darling made-to-go-viral videos of young children using “old” technologies like rotary phones and portable cassette players – highly clickable, highly sharable stuff. The visual argument in the graph: the number of years it takes for one quarter of the US population to adopt a new technology has been shrinking with each new innovation.

    But the data is flawed. Some of the dates given for these inventions are questionable at best, if not outright inaccurate. If nothing else, it’s not so easy to pinpoint the exact moment, the exact year when a new technology came into being. There often are competing claims as to who invented a technology and when, for example, and there are early prototypes that may or may not “count.” James Clerk Maxwell did publish A Treatise on Electricity and Magnetism in 1873. Alexander Graham Bell made his famous telephone call to his assistant in 1876. Guglielmo Marconi did file his patent for radio in 1897. John Logie Baird demonstrated a working television system in 1926. The MITS Altair 8800, an early personal computer that came as a kit you had to assemble, was released in 1975. But Martin Cooper, a Motorola exec, made the first mobile telephone call in 1973, not 1983. And the Internet? The first ARPANET link was established between UCLA and the Stanford Research Institute in 1969. The Internet was not invented in 1991.

    So we can reorganize the bar graph. But it’s still got problems.

    The Internet did become more privatized, more commercialized around that date – 1991 – and thanks to companies like AOL, a version of it became more accessible to more people. But if you’re looking at when technologies became accessible to people, you can’t use 1873 as your date for electricity, you can’t use 1876 as your year for the telephone, and you can’t use 1926 as your year for the television. It took years for the infrastructure of electricity and telephony to be built, for access to become widespread; and subsequent technologies, let’s remember, have simply piggy-backed on these existing networks. Our Internet service providers today are likely telephone and TV companies; our houses are already wired for new WiFi-enabled products and predictions.

    Economic historians who are interested in these sorts of comparisons of technologies and their effects typically set the threshold at 50% – that is, how long does it take after a technology is commercialized (not simply “invented”) for half the population to adopt it. This way, you’re not only looking at the economic behaviors of the wealthy, the early-adopters, the city-dwellers, and so on (but, to be clear, you are still looking at a particular demographic – the privileged half).

    And that changes the graph again:

    How many years do you think it’ll be before half of US households have a smart watch? A drone? A 3D printer? Virtual reality goggles? A self-driving car? Will they? Will it be fewer years than 9? I mean, it would have to be if, indeed, “technology” is speeding up and we are adopting new technologies faster than ever before.

    Some of us might adopt technology products quickly, to be sure. Some of us might eagerly buy every new Apple gadget that’s released. But we can’t claim that the pace of technological change is speeding up just because we personally go out and buy a new iPhone every time Apple tells us the old model is obsolete. Removing the headphone jack from the latest iPhone does not mean “technology changing faster than ever,” nor does showing how headphones have changed since the 1970s. None of this is really a reflection of the pace of change; it’s a reflection of our disposable income and an ideology of obsolescence.

    Some economic historians like Robert J. Gordon actually contend that we’re not in a period of great technological innovation at all; instead, we find ourselves in a period of technological stagnation. The changes brought about by the development of information technologies in the last 40 years or so pale in comparison, Gordon argues (and this is from his recent book The Rise and Fall of American Growth: The US Standard of Living Since the Civil War), to those “great inventions” that powered massive economic growth and tremendous social change in the period from 1870 to 1970 – namely electricity, sanitation, chemicals and pharmaceuticals, the internal combustion engine, and mass communication. But that doesn’t jibe with “software is eating the world,” does it?

    Let’s return briefly to those Horizon Report predictions again. They certainly reflect this belief that technology must be speeding up. Every year, there’s something new. There has to be. That’s the purpose of the report. The horizon is always “out there,” off in the distance.

    But if you squint, you can see each year’s report also reflects a decided lack of technological change. Every year, something is repeated – perhaps rephrased. And look at the predictions about mobile computing:

    • 2006 – the phones in their pockets
    • 2007 – the phones in their pockets
    • 2008 – oh crap, we don’t have enough bandwidth for the phones in their pockets
    • 2009 – the phones in their pockets
    • 2010 – the phones in their pockets
    • 2011 – the phones in their pockets
    • 2012 – the phones too big for their pockets
    • 2013 – the apps on the phones too big for their pockets
    • 2015 – the phones in their pockets
    • 2016 – the phones in their pockets

    This hardly makes the case for technological speeding up, for technology changing faster than it’s ever changed before. But that’s the story that people tell nevertheless. Why?

    I pay attention to this story, as someone who studies education and education technology, because I think these sorts of predictions, these assessments about the present and the future, frequently serve to define, disrupt, destabilize our institutions. This is particularly pertinent to our schools which are already caught between a boundedness to the past – replicating scholarship, cultural capital, for example – and the demands they bend to the future – preparing students for civic, economic, social relations yet to be determined.

    But I also pay attention to these sorts of stories because there’s that part of me that is horrified at the stuff – predictions – that people pass off as true or as inevitable.

    “65% of today’s students will be employed in jobs that don’t exist yet.” I hear this statistic cited all the time. And it’s important, rhetorically, that it’s a statistic – that gives the appearance of being scientific. Why 65%? Why not 72% or 53%? How could we even know such a thing? Some people cite this as a figure from the Department of Labor. It is not. I can’t find its origin – but it must be true: a futurist said it in a keynote, and the video was posted to the Internet.

    The statistic is particularly amusing when quoted alongside one of the many predictions we’ve been inundated with lately about the coming automation of work. In 2014, The Economist asserted that “nearly half of American jobs could be automated in a decade or two.” “Before the end of this century,” Wired Magazine’s Kevin Kelly announced earlier this year, “70 percent of today’s occupations will be replaced by automation.”

    Therefore the task for schools – and I hope you can start to see where these different predictions start to converge – is to prepare students for a highly technological future, a future that has been almost entirely severed from the systems and processes and practices and institutions of the past. And if schools cannot conform to this particular future, then “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Now, I don’t believe that there’s anything inevitable about the future. I don’t believe that Moore’s Law – that the number of transistors on an integrated circuit doubles every two years and therefore computers are always exponentially smaller and faster – is actually a law. I don’t believe that robots will take, let alone need take, all our jobs. I don’t believe that YouTube has rendered school irrevocably out-of-date. I don’t believe that technologies are changing so quickly that we should hand over our institutions to entrepreneurs, privatize our public sphere for techno-plutocrats.

    I don’t believe that we should cheer Elon Musk’s plans to abandon this planet and colonize Mars – he’s predicted he’ll do so by 2026. I believe we stay and we fight. I believe we need to recognize this as an ego-driven escapist evangelism.

    I believe we need to recognize that predicting the future is a form of evangelism as well. Sure, it gets couched in the terms of science, but it is underwritten by global capitalism. It’s a story – a story that then takes on these mythic proportions, insisting that it is unassailable, unverifiable, but true.

    The best way to invent the future is to issue a press release. The best way to resist this future is to recognize that, once you poke at the methodology and the ideology that underpins it, a press release is all that it is.

    A special thanks to Tressie McMillan Cottom and David Golumbia for organizing this talk. And to Mike Caulfield for always helping me hash out these ideas.
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.


  • Charles Bernstein–“Pitch of Poetry!”

    I am happy to announce the publication of Pitch of Poetry, my new collection of essays from the University of Chicago Press. There will be launches for the book in Washington, DC (Bridge Street Books) on March 30, at Penn (Kelly Writers House) on April 12, and in New York (the Poetry Project) on April 20 (see below for details).

    Pitch of Poetry makes the case for echopoetics: a poetry of call and response, reason and imagination, disfiguration and refiguration.

    Publishers Weekly
    “Often elliptical, argumentative, and personal, this is a radical work about the nature of poetry and of language itself.”

    Library Journal
    “A strangely compelling amalgam of postulations, propositions, interviews, and opinions, this collection from Bernstein is as much a work of art as a work of criticism.”

    Craig Dworkin
    “The traits and energies that made Bernstein, the foremost poet-critic of our time, a leading figure of the 1980s-era avant-garde have continued unabated.”

    Pierre Joris
    “Pitch of Poetry is wide-ranging, protean, exhilarating.”

    Subjects range across the figurative nature of abstract art, Occupy Wall Street, and Shoah representation. Detailed overviews of formally inventive work include essays on—or “pitches” for—a set of key poets, from Gertrude Stein and Robert Creeley to John Ashbery, Barbara Guest, Larry Eigner, Leslie Scalapino, Maggie O’Sullivan, and Johanna Drucker. Bernstein also reveals the formative ideas behind L=A=N=G=U=A=G=E. The final section, published here for the first time, is a sweeping work on the poetics of stigma, perversity, and disability that is rooted in the thinking of Edgar Allan Poe, Emily Dickinson, Ralph Waldo Emerson, and William Blake.

    BOOK LAUNCHES (readings and signings):
    Bridge Street Books, Weds., March 30, 7:30pm (2814 Pennsylvania Ave NW, Washington, DC 20007)
    Kelly Writers House, University of Pennsylvania, Tues., April 12, 6pm
    Poetry Project, St. Mark’s Church, New York: Weds., April 20, 8pm

    Information on ordering the book: University of Chicago Press page. The publication date is the first day of Spring, but the book is just now available. Book cover image: © Lawrence Schwartzwald

  • Announcing Our Winter Issue: Econophonia: Music, Value, and Forms of Life

    Announcing Our Winter Issue: Econophonia: Music, Value, and Forms of Life

    This issue theorizes what questions of value might contribute to our understanding of sound and music. Divesting sound and music from notions of intrinsic value, the contributors follow various avenues through which sound and music produce value in and as history, politics, ethics, epistemology, and ontology. As a result, the very question of what sound and music are—what constitutes them, as well as what they constitute—is at stake. Contributors examine the politics of music and crowds, the metaphysics of sensation, the ecological turn in music studies, and the political resistance inherent to sound; connect Karl Marx to black music and slave labor; look at Marx, the Marx Brothers, and fetishism; and explore the tension between the voice of the Worker who confronts Capital head-on and the voices of actual workers.

    Contributors include Amy Cimini, Bill Dietz, Jairo Moreno, Rosalind Morris, Ana María Ochoa Gautier, Ronald Radano, Gavin Steingo, Peter Szendy, Gary Tomlinson, and Naomi Waltham-Smith.

    See the introduction by Gavin Steingo and Jairo Moreno.

  • Dissecting the “Internet Freedom” Agenda

    Dissecting the “Internet Freedom” Agenda

    a review of Shawn M. Powers and Michael Jablonski, The Real Cyber War: The Political Economy of Internet Freedom (University of Illinois Press, 2015)
    by Richard Hill
    ~
    Disclosure: the author of this review is thanked in the Preface of the book under review.

    Both radical civil society organizations and mainstream defenders of the status quo agree that the free and open Internet is threatened: see for example the Delhi Declaration, Bob Hinden’s 2014 Year End Thoughts, and Kathy Brown’s March 2015 statement at a UNESCO conference. The threats include government censorship and mass surveillance, but also the failure of governments to control rampant industry concentration and commercial exploitation of personal data, which increasingly takes the form of providing “free” services in exchange for personal information that is resold at a profit, or used to provide targeted advertising, also at a profit.

    In Digital Disconnect, Robert McChesney has explained how the Internet, which was supposed to be a force for the improvement of human rights and living conditions, has been used to erode privacy and to increase the concentration of economic power, to the point where it is becoming a threat to democracy. In Digital Depression, Dan Schiller has documented how US policies regarding the Internet have favored its geo-economic and geo-political goals, in particular the interests of its large private companies that dominate the information and communications technology (ICT) sector worldwide.

    Shawn M. Powers and Michael Jablonski’s seminal new book The Real Cyber War takes us further down the road of understanding what went wrong, and what might be done to correct the situation. Powers, an assistant professor at Georgia State University, specializes in international political communication, with particular attention to the geopolitics of information and information technologies. Jablonski is an attorney and presidential fellow, also at Georgia State.

    There is a vast literature on internet governance (see for example the bibliography in Radu, Chenou, and Weber, eds., The Evolution of Global Internet Governance), but much of it is ideological and normative: the author espouses a certain point of view, explains why that point of view is good, and proposes actions that would lead to the author’s desired outcome (a good example is Milton Mueller’s well-researched but utopian Networks and States). There is nothing wrong with that approach: on the contrary, such advocacy is necessary and welcome.

    But a more detached analytical approach is also needed, and Powers and Jablonski provide exactly that. Their objective is to help us understand (citing from p. 19 of the paperback edition) “why states pursue the policies they do”. The book “focuses centrally on understanding the numerous ways in which power and control are exerted in cyberspace” (p. 19).

    Starting from the rather obvious premise that states compete to shape international policies that favor their interests, and using the framework of political economy, the authors outline the geopolitical stakes and show how questions of power, and not human rights, are the real drivers of much of the debate about Internet governance. They show how the United States has deliberately used a human rights discourse to promote policies that further its geo-economic and geo-political interests. And how it has used subsidies and government contracts to help its private companies to acquire or maintain dominant positions in much of the ICT sector.

    Jacob Silverman has decried “the misguided belief that once power is arrogated away from doddering governmental institutions, it will somehow find itself in the hands of ordinary people”. Powers and Jablonski dissect the mechanisms by which vibrant government institutions deliberately transferred power to US corporations in order to further US geo-economic and geo-political goals.

    In particular, they show how a “freedom to connect” narrative is used by the USA to attempt to transform information and personal data into commercial commodities that should be subject to free trade. Yet all states (including the US) regulate, at least to some extent, the flow of information within and across their borders. If information is the “new oil” of our times, then it is not surprising that states wish to shape the production and flow of information in ways that favor their interests. Thus it is not surprising that states such as China, India, and Russia have started to assert sovereign rights to control some aspect of the production and flow of information within their borders, and that European Union courts have made decisions on the basis of European law that affect global information flows and access.

    As the authors put the matter (p. 6): “the [US] doctrine of internet freedom … is the realization of a broader [US] strategy promoting a particular conception of networked communication that depends on American companies …, supports Western norms …, and promotes Western products.” (I would personally say that it actually supports US norms and US products and services.) As the authors point out, one can ask (p. 11): “If states have a right to control the types of people allowed into their territory (immigration), and how its money is exchanged with foreign banks, then why don’t they have a right to control information flows from foreign actors?”

    To be sure, any such controls would have to comply with international human rights law. But the current US policies go much further, implying that those human rights laws must be implemented in accordance with the US interpretation, meaning few restrictions on freedom of speech, weak protection of privacy, and ever stricter protection for intellectual property. As Powers and Jablonski point out (p. 31), the US does not hesitate to promote restrictions on information flows when that promotes its goals.

    Again, the authors do not make value judgments: they explain in Chapter 1 how the US deliberately attempts to shape (to a large extent successfully) international policies, so that both actions and inactions serve its interests and those of the large corporations that increasingly influence US policies.

    The authors then explain how the US military-industrial complex has morphed into an information-industrial complex, with deleterious consequences for both industry and government, consequences such as “weakened oversight, accountability, and industry vitality and competitiveness” (p. 23) that create risks for society and democracy. As the authors say, the shift “from adversarial to cooperative and laissez-faire rule making is a keystone moment in the rise of the information-industrial complex” (p. 61).

    As a specific example, they focus on Google, showing how it (largely successfully) aims to control and dominate all aspects of the data market, from production, through extraction, refinement, infrastructure and demand. A chapter is devoted to the economics of internet connectivity, showing how US internet policy is basically about getting the largest number of people online, so that US companies can extract ever greater profits from the resulting data flows. They show how the network effects, economies of scale, and externalities that are fundamental features of the internet favor first-movers, which are mostly US companies.

    The remedy to such situations is well known: government intervention, widely accepted regarding air transport, road transport, pharmaceuticals, etc., and yet unthinkable for many regarding the internet. But why? As the authors put the matter (p. 24): “While heavy-handed government controls over the internet should be resisted, so should a system whereby internet connectivity requires the systematic transfer of wealth from the developing world to the developed.” But freedom of information is put forward to justify specific economic practices which would not be easy to justify otherwise, for example “no government taxes companies for data extraction or for data imports/exports, both of which are heavily regulated aspects of markets exchanging other valuable commodities” (p. 97).

    The authors show in detail how the so-called internet multi-stakeholder model of governance is dominated by insiders and used “under the veil of consensus” (p. 136) to further US policies and corporations. A chapter is devoted to explaining how all states control, at least to some extent, information flows within their territories, and presents detailed studies of how four states (China, Egypt, Iran and the USA) have addressed the challenges of maintaining political control while respecting (or not) freedom of speech. The authors then turn to the very current topic of mass surveillance, and its relation to anonymity, showing how, when the US presents the internet and “freedom to connect” as analogous to public speech and town halls, it is deliberately arguing against anonymity and against privacy – and this of course in order to avoid restrictions on its mass surveillance activities.

    Thus the authors posit that there are tensions between the US call for “internet freedom” and other states’ calls for “information sovereignty”, and analyze the 2012 World Conference on International Telecommunications from that point of view.

    Not surprisingly, the authors conclude that international cooperation, recognizing the legitimate aspirations of all the world’s peoples, is the only proper way forward. As the authors put the matter (p. 206): “Activists and defenders of the original vision of the Web as a ‘fair and humane’ cyber-civilization need to avoid lofty ‘internet freedom’ declarations and instead champion specific reforms required to protect the values and practices they hold dear.” And it is with that in mind, as a counterweight to US and US-based corporate power, that a group of civil society organizations have launched the Internet Social Forum.

    Anybody who is seriously interested in the evolution of internet governance and its impact on society and democracy will enjoy reading this well-researched book and its clear exposition of key facts. One can only hope that the Council of Europe will heed Powers and Jablonski’s advice and avoid adopting more resolutions such as the recent recommendation to member states by its Committee of Ministers, which merely panders to the US discourse and US power that Powers and Jablonski describe so aptly. And one can fondly hope that this book will help to inspire a change in course that will restore the internet to what it might become (and what many thought it was supposed to be): an engine for democracy and social and economic progress, justice, and equity.
    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2 Review Digital Studies magazine.

    Back to the essay

  • A Dark, Warped Reflection

    A Dark, Warped Reflection

    a review of Charlie Brooker, writer & producer, Black Mirror (BBC/Zeppotron, 2011- )
    by Zachary Loeb
    ~

    Depending upon which sections of the newspaper one reads, it is very easy to come away with two rather conflicting views of the future. If one begins the day by reading the headlines in the “International News” or “Environment” sections, it is easy to feel overwhelmed by a sense of anxiety and impending doom; however, if one instead reads the sections devoted to “Business” or “Technology,” it is easy to feel confident that there are brighter days ahead. We are promised that soon we shall live in wondrous “Smart” homes where all of our devices work together tirelessly to ensure our every need is met, while drones deliver our every desire and we enjoy ever more immersive entertainment experiences, all of this providing plenty of wondrous investment opportunities…unless, of course, another economic collapse or climate change should spoil these fantasies. Though the juxtaposition between newspaper sections can be jarring, an element of anxiety can generally be detected from one section to the next – even within the “Technology” pages. After all, our devices may have filled our hours with apps and social networking sites, but this does not necessarily mean that they have left us more fulfilled. We have been supplied with all manner of answers, but this does not necessarily mean we had first asked any questions.

    [youtube https://www.youtube.com/watch?v=pimqGkBT6Ek&w=560&h=315]

    If you could remember everything, would you want to? If a cartoon bear lampooned the pointlessness of elections, would you vote for the bear? Would you participate in psychological torture, if the person being tortured was a criminal? What lengths would you go to if you could not move on from a loved one’s death? These are the types of questions posed by the British television program Black Mirror, wherein anxiety about the technologically riddled future, be it the far future or next week, is the core concern. The paranoid pessimism of this science-fiction anthology program is not a result of a fear of the other or of panic at the prospect of nuclear annihilation – but is instead shaped by nervousness at the way we have become strangers to ourselves. There are no alien invaders, no occult phenomena, nor is there a suit-wearing narrator who makes sure that the viewers understand the moral of each story. Instead what Black Mirror presents is dread – it holds up a “black mirror” (think of any electronic device when the power on the screen is off) to society and refuses to flinch at the reflection.

    Granted, this does not mean that those viewing the program will not flinch.

    [And Now A Brief Digression]

    Before this analysis goes any further, it seems worthwhile to pause and make a few things clear. Firstly, and perhaps most importantly, the intention here is not to pass a definitive judgment on the quality of Black Mirror. While there are certainly arguments that can be made regarding how “this episode was better than that one” – this is not the concern here. Nor for that matter is the goal to scoff derisively at Black Mirror and simply dismiss it – the episodes are well written, interestingly directed, and strongly acted. Indeed, that the program can lead to discussion and introspection is perhaps the highest praise that one can bestow upon a piece of widely disseminated popular culture. Secondly, and perhaps even more importantly (depending on your opinion), some of the episodes of Black Mirror rely upon twists and surprises in order to have their full impact upon the viewer. Oftentimes people find it highly frustrating to have these moments revealed to them ahead of time, and thus – in the name of fairness – let this serve as an official “spoiler warning.” The plots of each episode will not be discussed in minute detail in what follows – as the intent here is to consider broader themes and problems – but if you hate “spoilers” you should consider yourself warned.

    [Digression Ends]

    The problem posed by Black Mirror is that in building nervous narratives about the technological tomorrow the program winds up replicating many of the shortcomings of contemporary discussions around technology – shortcomings that make such an unpleasant future seem all the more plausible. While Black Mirror may resist the obvious morality plays of a show like The Twilight Zone, the morals of the episodes may be far less oppositional than they at first seem. The program draws much of its emotional heft by narrowly focusing its stories upon specific individuals, but in so doing the show may function as a sort of precognitive “usage manual,” one that advises “if a day should arrive when you can technologically remember everything…don’t be like the guy in this episode.” The episodes of Black Mirror may call upon viewers to look askance at the futures they portray, but they also encourage the sort of droll, inured acceptance that is characteristic of the people in each episode of the program. Black Mirror is a sleek, hip piece of entertainment, another installment in the contemporary “golden age of television,” wherein it risks becoming just another program that can be streamed onto any of a person’s black-mirror-like screens. The program is itself very much a part of the same culture industry of the YouTube and Twitter era that the show seems to vilify – it is ready-made for “binge watching.” The program may be disturbing, but its indictments are soft – allowing viewers a distance that permits them to say aloud “I would never do that” even as they are subconsciously unsure.

    Thus, Black Mirror appears as a sort of tragic confirmation of the continuing validity of Jacques Ellul’s comment:

    “One cannot but marvel at an organization which provides the antidote as it distills the poison.” (Ellul, 378)

    For the tales that are spun out in horrifying (or at least discomforting) detail on Black Mirror may appear to be a salve for contemporary society’s technological trajectory – but the show is also a ready-made product for the very age that it is critiquing. A salve that does not solve anything, a cultural shock absorber that allows viewers to endure the next wave of shocks. It is a program that demands viewers break away from their attachment to their black mirrors even as it encourages them to watch another episode of Black Mirror. This is not to claim that the show lacks value as a critique; however, the show is less a radical indictment than some may be tempted to give it credit for being. The discomfort people experience while watching the show easily becomes a masochistic penance that allows people to continue walking down the path to the futures outlined in the show. Black Mirror provides the antidote, but it also distills the poison.

    That, however, may be the point.

    [Interrogation 1: Who Bears Responsibility?]

    Technology is, of course, everywhere in Black Mirror – in many episodes it is as much a character as the humans who are trying to come to terms with what the particular device means. In some episodes (“The National Anthem” or “The Waldo Moment”) the technologies that feature prominently are those that would be quite familiar to contemporary viewers: social media platforms like YouTube, Twitter, Facebook and the like. Whilst in other episodes (“The Entire History of You,” “White Bear” and “Be Right Back”) the technologies on display are new and different: an implantable device that records (and can play back) all of one’s memories, something that can induce temporary amnesia, a company that has developed a being that is an impressive mix of robotics and cloning. The stories that are told in Black Mirror, as was mentioned earlier, focus largely on the tales of individuals – “Be Right Back” is primarily about one person’s grief – and though this is a powerful story-telling device (and lest there be any confusion – many of these are very powerfully told stories) one of the questions that lingers unanswered in the background of many of these episodes is: who is behind these technologies?

    In fairness, Black Mirror would likely lose some of its effectiveness in terms of impact if it were to delve deeply into this question. If “The Entire History of You” provided a sci-fi faux-documentary foray into the company that had produced the memory-recording “grains,” it would probably not have felt as disturbing as the tale of abuse, sex, violence and obsession that the episode actually presents. Similarly, the piece of science-fiction grade technology upon which “White Bear” relies functions well in the episode precisely because the key device makes only a rather brief appearance. And yet here an interesting contrast emerges between the episodes set in, or closely around, the present and those that are set further down the timeline – for in the episodes that rely on platforms like YouTube, the viewer technically knows who the interests are behind the various platforms. The episode “The Entire History of You” may be intensely disturbing, but what company was it that developed and brought the “grains” to market? What biotechnology firm supplies the grieving spouse in “Be Right Back” with the robotic/clone of her deceased husband? Who gathers the information from these devices? Where does that information live? Who is profiting? These are important questions that go unanswered, largely because they go unasked.

    Of course, it can be simple to disregard these questions. Dwelling upon them certainly does take something away from the individual episodes, and such focus diminishes the entertainment quality of Black Mirror. This is fundamentally why it is so essential to insist that these critical questions be asked. The worlds depicted in episodes of Black Mirror did not “just happen” but are instead the result of layers upon layers of decisions and choices that have wound up shaping these characters’ lives – and it is questionable how much say any of these characters had in these decisions. This is shown in stark relief in “The National Anthem,” in which a befuddled prime minister cannot come to grips with the way that a threat uploaded to YouTube, along with shifts in public opinion as reflected on Twitter, has come to require him to commit a grotesque act; his despair at what he is being compelled to do is a reflection of the new world of politics created by social media. In some ways it is tempting to treat episodes like “The Entire History of You” and “Be Right Back” as retorts to an unflagging adoration for “innovation,” “disruption,” and “permissionless innovation” – for the episodes can be read as a warning that just because we can record and remember everything does not necessarily mean that we should. And yet the presence of such a cultural warning does not mean that such devices will not eventually be brought to market. The denizens of the worlds of Black Mirror are depicted as being at the mercy of the technological current.

    Thus, and here is where the problem truly emerges, the episodes can be treated as simple warnings that state “well, don’t be like this person.” After all, the world of “The Entire History of You” seems to be filled with people who – unlike the obsessive main character – can use the “grain” productively; on a similar note it can be easy to imagine many people pointing to “Be Right Back” and saying that the idea of a robotic/clone could be wonderful – just don’t use it to replicate the recently dead; and of course any criticism of social media in “The Waldo Moment” or “The National Anthem” can be met with a retort regarding a blossoming of free expression and the ways in which such platforms can help bolster new protest movements. And yet, similar to the sad protagonist in the film Her, the characters in the storylines of Black Mirror rarely appear as active agents in relation to technology even when they are depicted as truly “choosing” a given device. Rather they have simply been reduced to consumers – whether they are consumers of social media, political campaigns, or an amusement park where the “show” is a person being psychologically tortured day after day.

    This is not to claim that there should be an Apple or Google logo prominently displayed on the “grain” or on the side of the stationary bikes in “Fifteen Million Merits,” nor is it to argue that the people behind these devices should be depicted as cackling corporate monsters – but it would be helpful to have at least some image of the people behind these devices. After all, there are people behind these devices. What were they thinking? Were they not aware of these potential risks? Did they not care? Who bears responsibility? In focusing on the small-scale human stories Black Mirror ignores the fact that there is another all too human story behind all of these technologies. Thus what the program risks replicating is a sort of technological determinism that seems to have nestled itself into the way that people talk about technology these days – a sentiment in which people have no choice but to accept (and buy) what technology firms are selling them. It is not so much, to borrow a line from Star Trek, that “resistance is futile” as that nobody seems to have even considered resistance to be an option in the first place. Granted, we have seen in the not too distant past that such a sentiment is simply not true – Google Glass was once presented as inevitable, but public push-back helped lead to Google (at least temporarily) shelving the device. Alas, one of the most effective ways of convincing people that they are powerless to resist is by bludgeoning them with cultural products that tell them they are powerless to resist. Or better yet, convincing them that they will actually like being “assimilated.”

    Therefore, the key thing to mull over after watching an episode of Black Mirror is not what is presented in the episode but what has been left out. Viewers need to ask the questions the show does not present: who is behind these technologies? What decisions have led to the societal acceptance of these technologies? Did anybody offer resistance to these new technologies? The “6 Questions to Ask of New Technology” posed by media theorist Neil Postman may be of use for these purposes, as might some of the questions posed in Riddled With Questions. The emphasis here is to point out that a danger of Black Mirror is that the viewer winds up being just like one of the characters: a person who simply accepts the technologically wrought world in which they are living without questioning those responsible and without thinking that opposition is possible.

    [Interrogation 2: Utopia Unhinged is not a Dystopia]

    “Dystopia” is a term that has become a fairly prominent feature in popular entertainment today. Bookshelves are filled with tales of doomed futures, and many of these titles (particularly those aimed at the “young adult” audience) have a tendency to eventually reach the screens of the cinema. Of course, apocalyptic visions of the future are not limited to the big screen – as numerous television programs attest. For many, it is tempting to use terms such as “dystopia” when discussing the futures portrayed in Black Mirror, and yet the usage of such a term seems rather misleading. True, at least one episode (“Fifteen Million Merits”) is clearly meant to evoke a dystopian far future, but to use that term in relation to many of the other installments seems a bit hyperbolic. After all, “The Waldo Moment” could be set tomorrow and frankly “The National Anthem” could have been set yesterday. To say that Black Mirror is a dystopian show risks taking an overly simplistic stance towards technology in the present as well as towards technology in the future – if the claim is that the show is thoroughly dystopian, then how does one account for the episodes that may as well be set in the present? One can argue that the state of the present world is far less than ideal, one can cast a withering gaze in the direction of social media, one can truly believe that the current trajectory (if not altered) will lead in a negative direction…and yet one can believe all of these things and still resist the urge to label contemporary society a dystopia. Doomsaying can be an enjoyably nihilistic way to pass an afternoon, but it makes for a rather poor critique.

    It may be that what Black Mirror shows is how a dystopia can actually be a private hell instead of a societal one (which would certainly seem true of “White Bear” or “The Entire History of You”), or perhaps what Black Mirror indicates is that a derailed utopia is not automatically a dystopia. Granted, a major criticism of Black Mirror could emphasize that the show has a decidedly “industrialized world/Western world” focus – we do not see the factories where “grains” are manufactured, and the varieties of new smart phones seen in the program suggest that the e-waste must be piling up somewhere. In other words – the derailed utopia of some could still be an outright dystopia for countless others. That the characters in Black Mirror do not seem particularly concerned with who assembled their devices is, alas, a feature all too characteristic of technology users today. Nevertheless, to restate the problem, the issue is not so much the threat of dystopia as it is the continued failure of humanity to use its impressive technological ingenuity to bring about a utopia (or even something “better” than the present). In some ways this provides an echo of Lewis Mumford’s comment, in The Story of Utopias, that:

    “it would be so easy, this business of making over the world if it were only a matter of creating machinery.” (Mumford, 175)

    True, the worlds of Black Mirror, including the ones depicting the world of today, show that “creating machinery” actually is an easy way “of making over the world” – however, this does not automatically push things in the utopian direction for which Mumford was pining. Instead what is on display is another installment of the deferred potential of technology.

    The term “another” is not used incidentally here; it is specifically meant to point to the fact that it is nothing new for people to see technology as a source of hope…and then to woefully recognize the way in which such hopes have been dashed time and again. Such a sentiment is visible in much of Walter Benjamin’s writing about technology – writing, as he was, after the mechanized destruction of WWI and on the eve of the technologically enhanced barbarity of WWII. In Benjamin’s essay “Eduard Fuchs, Collector and Historian” he criticizes a strain in positivist/social democratic thinking that had emphasized that technological developments would automatically usher in a more just world, when in fact such attitudes woefully failed to appreciate the scale of the dangers. This leads Benjamin to note:

    “A prognosis was due, but failed to materialize. That failure sealed a process characteristic of the past century: the bungled reception of technology. The process has consisted of a series of energetic, constantly renewed efforts, all attempting to overcome the fact that technology serves this society only by producing commodities.” (Benjamin, 266)

    The century about which Benjamin was writing was not the twenty-first century, and yet these comments about “the bungled reception of technology” and technology which “serves this society only by producing commodities” seem a rather accurate description of the worlds depicted by Black Mirror. And yes, that certainly includes the episodes that are closer to our own day. The point of pulling out this tension, however, is to emphasize not the dystopian element of Black Mirror but to point to the “bungled reception” that is so clearly on display in the program – and by extension in the present day.

    What Black Mirror shows in episode after episode (even in the clearly dystopian one) is the gloomy juxtaposition between what humanity can possibly achieve and what it actually achieves. The tools that could widen democratic participation can be used to allow a cartoon bear to run as a stunt candidate, the devices that allow us to remember the past can ruin the present by keeping us constantly replaying the memories of yesterday, the things that allow us to connect can make it so that we are unable to ever let go – “energetic, constantly renewed efforts” that all wind up simply “producing commodities.” Indeed, in a tragicomic turn, Black Mirror demonstrates that amongst the commodities we continue to produce are those that elevate the “bungled reception of technology” to the level of a widely watched and critically lauded television serial.

    The future depicted by Black Mirror may be startling, disheartening and quite depressing, but (except in the cases where the content is explicitly dystopian) it is worth bearing in mind that there is an important difference between dystopia and a world of people living amidst the continued “bungled reception of technology.” Are the people in “The National Anthem” paving the way for “White Bear” and in turn setting the stage for “Fifteen Million Merits”? It is quite possible. But this does not mean that the “reception of technology” must always be “bungled” – though changing our reception of it may require altering our attitude towards it. Here Black Mirror repeats its problematic thrust, for it does not highlight resistance but emphasizes the very attitudes that have “bungled” the reception and which continue to bungle it. Though “Fifteen Million Merits” does feature a character engaging in a brave act of rebellion, this act is immediately used to strengthen the very forces against which the character is rebelling – and thus the episode repeats the refrain “don’t bother resisting, it’s too late anyway.” This is not to suggest that one should focus all one’s hopes upon a far-fetched utopian notion, or put faith in a sense of “hope” that is not linked to reality, nor does it mean that one should don sackcloth and begin mourning. Dystopias are cheap these days, but so are the fake utopian dreams that promise a world in which somehow technology will solve all of our problems. And yet, it is worth bearing in mind another comment from Mumford regarding the possibility of utopia:

    “we cannot ignore our utopias. They exist in the same way that north and south exist; if we are not familiar with their classical statements we at least know them as they spring to life each day in our minds. We can never reach the points of the compass; and so no doubt we shall never live in utopia; but without the magnetic needle we should not be able to travel intelligently at all.” (Mumford, 28/29)

    Black Mirror provides a stark portrait of the fake utopian lure that can lead us to a world to which we do not want to go – a world in which the “bungled reception of technology” continues to rule – but in staring, horror-struck, at where we do not want to go we should not forget to ask where it is that we do want to go. The worlds of Black Mirror are steps in the wrong direction – so ask yourself: what would the steps in the right direction look like?

    [Final Interrogation – Permission to Panic]

    During “The Entire History of You” several characters enjoy a dinner party at which the topic of discussion eventually turns to the benefits and drawbacks of the memory-recording “grains.” Many attitudes towards the “grains” are voiced, ranging from individuals who cannot imagine doing without the “grain” to a woman who has had hers violently removed and who has managed to adjust. While “The Entire History of You” focuses on an obsessed individual who cannot cope with a world in which everything can be remembered, what the dinner party demonstrates is that the same world contains many people who can handle the “grains” just fine. The failed comedian who voices the cartoon bear in “The Waldo Moment” cannot understand why people are drawn to vote for the character he voices – but this does not stop many people from voting for the animated animal. Perhaps most disturbingly, the woman at the center of “White Bear” cannot understand why she is followed by crowds filming her on their smartphones while she is hunted by masked assailants – but this does not stop those filming her from playing an active role in her torture. And so on…and so on…Black Mirror shows that in these horrific worlds, there are many people who are quite content with the new status quo. But that not everybody is despairing simply attests to Theodor Adorno and Max Horkheimer’s observation that:

    “A happy life in a world of horror is ignominiously refuted by the mere existence of that world. The latter therefore becomes the essence, the former negligible.” (Adorno and Horkheimer, 93)

    Black Mirror is a complex program, made all the more difficult to consider because the anthology format of the show makes each episode quite different in terms of the issues it dwells upon. The attitudes towards technology and society subtly suggested in the various episodes are in line with the despairing aura that surrounds the various protagonists and antagonists. Yet, insofar as Black Mirror advances an ethos, it is one of inured acceptance – it is a satire that is both tragedy and comedy. The first episode of the program, “The National Anthem,” is an indictment of a society that cannot tear itself away from the horrors being depicted on screens, delivered in a television show that owes its success to keeping people transfixed by the horrors being depicted on their screens. The show holds up a “black mirror” to society, but what it shows is a world in which the tables are rigged and the audience has already lost – it is a magnificently troubling cultural product that attests to the way the culture industry can (to return to Ellul) provide the antidote even as it distills the poison. Or, to quote Adorno and Horkheimer again (substituting “TV viewers” for “filmgoers”):

    “The permanently hopeless situations which grind down filmgoers in daily life are transformed by their reproduction, in some unknown way, into a promise that they may continue to exist. The one needs only to become aware of one’s nullity, to subscribe to one’s own defeat, and one is already a party to it. Society is made up of the desperate and thus falls prey to rackets.” (Adorno and Horkheimer, 123)

    This is the danger of Black Mirror: that it may accustom and inure its viewers to the ugly present it displays while preparing them to fall prey to the “bungled reception” of tomorrow – it inculcates the ethos of “one’s own defeat.” By showing worlds in which people are helpless to do much of anything to challenge the technological society in which they have become cogs, Black Mirror risks perpetuating the sense that the viewers are themselves cogs, that the viewers are themselves helpless. There is an uncomfortable kinship between the TV-viewing characters of “The National Anthem” and the real-world viewer of the episode “The National Anthem” – neither party can look away. Or, to put it more starkly: if you are unable to alter the future, why not simply prepare yourself for it by watching more episodes of Black Mirror? At least that way you will know which characters not to imitate.

    And yet, despite these critiques, it would be unwise to fully disregard the program. It is easy to pull out comments from the likes of Ellul, Adorno, Horkheimer and Mumford that eviscerate a program such as Black Mirror, but it may be more important to ask: given Black Mirror’s shortcomings, what value can the show still have? Here it is useful to recall a comment from Günther Anders (whose pessimism was on par with, or exceeded, that of any of the aforementioned thinkers) – he was referring to the works of Kafka, but the comment is still useful:

    “from great warnings we should be able to learn, and they should help us to teach others.” (Anders, 98)

    This is where Black Mirror can be useful, not as a series that people sit and watch, but as a piece of culture that leads people to put forth the questions that the show jumps over. At its best what Black Mirror provides is a space in which people can discuss their fears and anxieties about technology without worrying that somebody will, farcically, call them a “Luddite” for daring to have such concerns – and for this reason alone the show may be worthwhile. By highlighting the questions that go unanswered in Black Mirror we may be able to put forth the very queries that are rarely made about technology today. It is true that the reflections seen by staring into Black Mirror are dark, warped and unappealing – but such reflections are only worth something if they compel audiences to rethink their relationships to the black mirrored surfaces in their lives today and which may be in their lives tomorrow. After all, one can look into the mirror in order to see the dirt on one’s face or one can look in the mirror because of a narcissistic urge. The program certainly has the potential to provide a useful reflection, but as with the technology depicted in the show, it is all too easy for such a potential reception to be “bungled.”

    If we are spending too much time gazing at black mirrors, is the solution really to stare at Black Mirror?

    The show may be a satire, but if all people do is watch, then the joke is on the audience.

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck. He is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay
    _____

    Works Cited

    • Adorno, Theodor and Horkheimer, Max. Dialectic of Enlightenment: Philosophical Fragments. Stanford: Stanford University Press, 2002.
    • Anders, Günther. Franz Kafka. New York: Hilary House Publishers LTD, 1960.
    • Benjamin, Walter. Walter Benjamin: Selected Writings. Volume 3, 1935-1938. Cambridge: The Belknap Press, 2002.
    • Ellul, Jacques. The Technological Society. New York: Vintage Books, 1964.
    • Mumford, Lewis. The Story of Utopias. BiblioBazaar, 2008.
  • "Moving Captive Bodies: Unknown Women in the New Europe" by Anita Starosta

    boundary 2 presented a talk “Moving Captive Bodies: Unknown Women in the New Europe” by editor and contributor Anita Starosta at the University of Pittsburgh on April 9, 2015. Listen below:

    Captive Bodies

  • The Internet vs. Democracy

    a review of Robert W. McChesney, Digital Disconnect: How Capitalism Is Turning the Internet Against Democracy (The New Press, 2014)
    by Richard Hill
    ~
    Many of us have noticed that much of the news we read is the same, no matter which newspaper or web site we consult: they all seem to be recycling the same agency feeds. To understand why this is happening, there are few better analyses than the one developed by media scholar Robert McChesney in his most recent book, Digital Disconnect. McChesney is a Professor in the Department of Communication at the University of Illinois at Urbana-Champaign, specializing in the history and political economy of communications. He is the author or co-author of more than 20 books, among the best-known of which are The Endless Crisis: How Monopoly-Finance Capital Produces Stagnation and Upheaval from the USA to China (with John Bellamy Foster, 2012), The Political Economy of Media: Enduring Issues, Emerging Dilemmas (2008), Communication Revolution: Critical Junctures and the Future of Media (2007), and Rich Media, Poor Democracy: Communication Politics in Dubious Times (1999), and is co-founder of Free Press.

    Many see the internet as a powerful force for improvement of human rights, living conditions, the economy, rights of minorities, etc. And indeed, like many communications technologies, the internet has the potential to facilitate social improvements. But in reality the internet has recently been used to erode privacy and to increase the concentration of economic power, leading to increasing income inequalities.

    One might have expected that democracies would have harnessed the internet to serve the interests of their citizens, as they largely did with other technologies such as roads, telegraphy, telephony, air transport, pharmaceuticals (even if they used these to serve only the interests of their own citizens and not the general interests of mankind).

    But this does not appear to be the case with respect to the internet: it is used largely to serve the interests of a few very wealthy individuals, or certain geo-economic and geo-political interests. As McChesney puts the matter: “It is supremely ironic that the internet, the much-ballyhooed champion of increased consumer power and cutthroat competition, has become one of the greatest generators of monopoly in economic history” (131 in the print edition). This trend to use technology to favor special interests, not the general interest, is not unique to the internet. As Josep Ramoneda puts the matter: “We expected that governments would submit markets to democracy and it turns out that what they do is adapt democracy to markets, that is, empty it little by little.”

    McChesney’s book explains why this is the case: despite its great promise and potential to increase democracy, various factors have turned the internet into a force that is actually destructive to democracy, and that favors special interests.

    McChesney reminds us what democracy is, citing Aristotle (53): “Democracy [is] when the indigent, and not the men of property are the rulers. If liberty and equality … are chiefly to be found in democracy, they will be best attained when all persons alike share in the government to the utmost.”

    He also cites US President Lincoln’s 1861 warning against despotism (55): “the effort to place capital on an equal footing with, if not above, labor in the structure of government.” According to McChesney, it was imperative for Lincoln that the wealthy not be permitted to have undue influence over the government.

    Yet what we see today in the internet is concentrated wealth in the form of large private companies that exert increasing influence over public policy matters, going so far as to call openly for governance systems in which they have equal decision-making rights with the elected representatives of the people. Current internet governance mechanisms are celebrated as paragons of success, whereas in fact they have not been successful in achieving the social promise of the internet. And it has even been said that such systems need not be democratic.

    What sense does it make for the technology that was supposed to facilitate democracy to be governed in ways that are not democratic? It makes business sense, of course, in the sense of maximizing profits for shareholders.

    McChesney explains how profit-maximization in the excessively laissez-faire regime that is commonly called neoliberalism has resulted in increasing concentration of power and wealth, social inequality and, worse, erosion of the press, leading to erosion of democracy. Nowhere is this more clearly seen than in the US, which is the focus of McChesney’s book. Not only has the internet eroded democracy in the US, it is used by the US to further its geo-political goals; and, adding insult to injury, it is promoted as a means of furthering democracy. Of course it could and should do so, but unfortunately it does not, as McChesney explains.

    The book starts by noting the importance of the digital revolution and by summarizing the views of those who see it as an engine of good (the celebrants) versus those who point out its limitations and some of its negative effects (the skeptics). McChesney correctly notes that a proper analysis of the digital revolution must be grounded in political economy. Since the digital revolution is occurring in a capitalist system, it is necessarily conditioned by that system, and it necessarily influences that system.

    A chapter is devoted to explaining how and why capitalism does not equal democracy: on the contrary, capitalism can well erode democracy, the contemporary United States being a good example. To dig deeper into the issues, McChesney approaches the internet from the perspective of the political economy of communication. He shows how the internet has profoundly disrupted traditional media, and how, contrary to the rhetoric, it has reduced competition and choice – because the economies of scale and network effects of the new technologies inevitably favor concentration, to the point of creating natural monopolies (who is number two after Facebook? Or Twitter?).

    The book then documents how the initially non-commercial, publicly-subsidized internet was transformed into an eminently commercial, privately-owned capitalist institution, in the worst sense of “capitalist”: domination by large corporations, monopolistic markets, endless advertising, intense lobbying, and cronyism bordering on corruption.

    Having explained what happened in general, McChesney focuses on what happened to journalism and the media in particular. As we all know, it has been a disaster: nobody has yet found a viable business model for respectable online journalism. As McChesney correctly notes, vibrant journalism is a precondition for democracy: how can people make informed choices if they do not have access to valid information? The internet was supposed to broaden our sources of information. Sadly, it has not, for the reasons explained in detail in the book. Yet there is hope: McChesney provides concrete suggestions for how to deal with the issue, drawing on actual experiences in well-functioning democracies in Europe.

    The book goes on to call for specific actions that would create a revolution in the digital revolution, bringing it back to its origins: by the people, for the people. McChesney’s proposed actions are consistent with those of certain civil society organizations, and will no doubt be taken up in the forthcoming Internet Social Forum, an initiative whose intent is precisely to revolutionize the digital revolution along the lines outlined by McChesney.

    Anybody who is aware of the many issues threatening the free and open internet, and democracy itself, will find much to reflect upon in Digital Disconnect, not just because of its well-researched and incisive analysis, but also because it provides concrete suggestions for how to address the issues.

    _____

    Richard Hill, an independent consultant based in Geneva, Switzerland, was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He frequently writes about internet governance issues for The b2 Review Digital Studies magazine.

    Back to the essay

  • "The Absence of Imagination" by Bruce Robbins

    boundary 2 presented a talk “The Absence of Imagination” by editor and contributor Bruce Robbins at the University of Pittsburgh on March 30, 2015.

  • The Automatic Teacher

    By Audrey Watters
    ~

    “For a number of years the writer has had it in mind that a simple machine for automatic testing of intelligence or information was entirely within the realm of possibility. The modern objective test, with its definite systemization of procedure and objectivity of scoring, naturally suggests such a development. Further, even with the modern objective test the burden of scoring (with the present very extensive use of such tests) is nevertheless great enough to make insistent the need for labor-saving devices in such work” – Sidney Pressey, “A Simple Apparatus Which Gives Tests and Scores – And Teaches,” School and Society, 1926

    Ohio State University professor Sidney Pressey first displayed the prototype of his “automatic intelligence testing machine” at the 1924 American Psychological Association meeting. Two years later, he filed a patent application for the device and spent the next decade or so trying to market it (to manufacturers and investors, as well as to schools).

    It wasn’t Pressey’s first commercial move. In 1922 he and his wife Luella Cole published Introduction to the Use of Standard Tests, a “practical” and “non-technical” guide meant “as an introductory handbook in the use of tests” aimed to meet the needs of “the busy teacher, principal or superintendent.” By the mid–1920s, the two had over a dozen different proprietary standardized tests on the market, selling a couple of hundred thousand copies a year, along with some two million test blanks.

    Although standardized tests had become commonplace in the classroom by the 1920s, they were already placing a significant burden upon those teachers and clerks tasked with scoring them. Hoping to capitalize yet again on the test-taking industry, Pressey argued that automation could “free the teacher from much of the present-day drudgery of paper-grading drill, and information-fixing – should free her for real teaching of the inspirational.”

    The Automatic Teacher

    Here’s how Pressey described the machine, which he branded as the Automatic Teacher in his 1926 School and Society article:

    The apparatus is about the size of an ordinary portable typewriter – though much simpler. …The person who is using the machine finds presented to him in a little window a typewritten or mimeographed question of the ordinary selective-answer type – for instance:

    To help the poor debtors of England, James Oglethorpe founded the colony of (1) Connecticut, (2) Delaware, (3) Maryland, (4) Georgia.

    To one side of the apparatus are four keys. Suppose now that the person taking the test considers Answer 4 to be the correct answer. He then presses Key 4 and so indicates his reply to the question. The pressing of the key operates to turn up a new question, to which the subject responds in the same fashion. The apparatus counts the number of his correct responses on a little counter to the back of the machine…. All the person taking the test has to do, then, is to read each question as it appears and press a key to indicate his answer. And the labor of the person giving and scoring the test is confined simply to slipping the test sheet into the device at the beginning (this is done exactly as one slips a sheet of paper into a typewriter), and noting on the counter the total score, after the subject has finished.

    The above paragraph describes the operation of the apparatus if it is being used simply to test. If it is to be used also to teach then a little lever to the back is raised. This automatically shifts the mechanism so that a new question is not rolled up until the correct answer to the question to which the subject is responding is found. However, the counter counts all tries.

    It should be emphasized that, for most purposes, this second set is by all odds the most valuable and interesting. With this second set the device is exceptionally valuable for testing, since it is possible for the subject to make more than one mistake on a question – a feature which is, so far as the writer knows, entirely unique and which appears decidedly to increase the significance of the score. However, in the way in which it functions at the same time as an ‘automatic teacher’ the device is still more unusual. It tells the subject at once when he makes a mistake (there is no waiting several days, until a corrected paper is returned, before he knows where he is right and where wrong). It keeps each question on which he makes an error before him until he finds the right answer; he must get the correct answer to each question before he can go on to the next. When he does give the right answer, the apparatus informs him immediately to that effect. If he runs the material through the little machine again, it measures for him his progress in mastery of the topics dealt with. In short the apparatus provides in very interesting ways for efficient learning.
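
    The two modes Pressey describes amount to a simple state machine: in test mode every key press advances to the next question and the counter records correct responses, while in teach mode the machine stays on a question until the correct key is pressed and the counter records total tries. As a purely illustrative sketch (nothing here comes from Pressey or from this essay; the function and quiz data are invented for the example), the logic might be modeled like this:

```python
# Illustrative model of the two modes of Pressey's Automatic Teacher.
# A question is a (prompt, correct_key) pair; "presses" is the sequence
# of keys (1-4) the test-taker presses, in order.

def run_machine(questions, presses, teach_mode=False):
    """Return the counter reading at the end of the session.

    Test mode: every press advances the question roll; the counter
    records the number of correct responses.
    Teach mode: the question stays in the window until the correct key
    is pressed; the counter records the total number of tries.
    """
    presses = iter(presses)
    counter = 0
    for _prompt, correct_key in questions:
        while True:
            key = next(presses)
            if teach_mode:
                counter += 1              # teach mode counts every try
                if key == correct_key:
                    break                 # advance only on the right answer
            else:
                if key == correct_key:
                    counter += 1          # test mode counts correct answers
                break                     # ...but always advances

    return counter

quiz = [("James Oglethorpe founded the colony of ...", 4)]
print(run_machine(quiz, [4]))                         # test mode: 1 correct
print(run_machine(quiz, [2, 3, 4], teach_mode=True))  # teach mode: 3 tries
```

    Note that in teach mode the counter can exceed the number of questions whenever mistakes are made – the very feature Pressey considered “entirely unique.”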

    A video from 1964 shows Pressey demonstrating his “teaching machine,” including the “reward dial” feature that could be set to dispense a candy once a certain number of correct answers were given:

    https://www.youtube.com/watch?v=n7OfEXWuulg

    Market Failure

    UBC’s Stephen Petrina documents the commercial failure of the Automatic Teacher in his 2004 article “Sidney Pressey and the Automation of Education, 1924–1934.” According to Petrina, Pressey started looking for investors for his machine in December 1925, “first among publishers and manufacturers of typewriters, adding machines, and mimeograph machines, and later, in the spring of 1926, extending his search to scientific instrument makers.” He approached at least six Midwestern manufacturers in 1926, but no one was interested.

    In 1929, Pressey finally signed a contract with the W. M. Welch Manufacturing Company, a Chicago-based company that produced scientific instruments.

    Petrina writes that,

    After so many disappointments, Pressey was impatient: he offered to forgo royalties on two hundred machines if Welch could keep the price per copy at five dollars, and he himself submitted an order for thirty machines to be used in a summer course he taught school administrators. A few months later he offered to put up twelve hundred dollars to cover tooling costs. Medard W. Welch, sales manager of Welch Manufacturing, however, advised a “slower, more conservative approach.” Fifteen dollars per machine was a more realistic price, he thought, and he offered to refund Pressey fifteen dollars per machine sold until Pressey recouped his twelve-hundred-dollar investment. Drawing on nearly fifty years’ experience selling to schools, Welch was reluctant to rush into any project that depended on classroom reforms. He preferred to send out circulars advertising the Automatic Teacher, solicit orders, and then proceed with production if a demand materialized.

    The demand never really materialized, and even if it had, the manufacturing process – getting the device to market – was plagued with problems, caused in part by Pressey’s constant demands to redefine and retool the machines.

    The stress from the development of the Automatic Teacher took an enormous toll on Pressey’s health, and he had a breakdown in late 1929. (He was still teaching, supervising courses, and advising graduate students at Ohio State University.)

    The devices did finally ship in April 1930. But the sales price was cost-prohibitive: $15 was, as Petrina notes, “more than half the annual cost ($29.27) of educating a student in the United States in 1930.” Welch could not sell the machines and ceased production with 69 of the original run of 250 devices still in stock.

    Pressey admitted defeat. In a 1932 School and Society article, he wrote “The writer is regretfully dropping further work on these problems. But he hopes that enough has been done to stimulate other workers.”

    But Pressey didn’t really abandon the teaching machine. He continued to present his research at APA meetings, and in a 1964 article, “Teaching Machines (And Learning Theory) Crisis,” he conceded that “Much seems very wrong about current attempts at auto-instruction.”

    Indeed.

    Automation and Individualization

    In his article “Toward the Coming ‘Industrial Revolution’ in Education” (1932), Pressey wrote that

    “Education is the one major activity in this country which is still in a crude handicraft stage. But the economic depression may here work beneficially, in that it may force the consideration of efficiency and the need for laborsaving devices in education. Education is a large-scale industry; it should use quantity production methods. This does not mean, in any unfortunate sense, the mechanization of education. It does mean freeing the teacher from the drudgeries of her work so that she may do more real teaching, giving the pupil more adequate guidance in his learning. There may well be an ‘industrial revolution’ in education. The ultimate results should be highly beneficial. Perhaps only by such means can universal education be made effective.”

    Pressey intended for his automated teaching and testing machines to individualize education. It’s an argument that’s made about teaching machines today too. These devices will allow students to move at their own pace through the curriculum. They will free up teachers’ time to work more closely with individual students.

    But as Petrina argues, “the effect of automation was control and standardization.”

The Automatic Teacher was a technology of normalization, but it was at the same time a product of liberality. The Automatic Teacher provided for self-instruction and self-regulated, therapeutic treatment. It was designed to provide the right kind and amount of treatment for individual, scholastic deficiencies; thus, it was individualizing. Pressey articulated this liberal rationale during the 1920s and 1930s, and again in the 1950s and 1960s. Although intended as an act of freedom, the self-instruction provided by an Automatic Teacher also habituated learners to the authoritative norms underwriting self-regulation and self-governance. They not only learned to think in and about school subjects (arithmetic, geography, history), but also how to discipline themselves within this imposed structure. They were regulated not only through the knowledge and power embedded in the school subjects but also through the self-governance of their moral conduct. Both knowledge and personality were normalized in the minutiae of individualization and in the machinations of mass education. Freedom from the confines of mass education proved to be a contradictory project and, if Pressey’s case is representative, one more easily automated than commercialized.

    The massive influx of venture capital into today’s teaching machines, of course, would like to see otherwise…
    _____

Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this review first appeared.

    Back to the essay

  • Something About the Digital

    Something About the Digital

    By Alexander R. Galloway
    ~

    (This catalog essay was written in 2011 for the exhibition “Chaos as Usual,” curated by Hanne Mugaas at the Bergen Kunsthall in Norway. Artists in the exhibition included Philip Kwame Apagya, Ann Craven, Liz Deschenes, Thomas Julier [in collaboration with Cédric Eisenring and Kaspar Mueller], Olia Lialina and Dragan Espenschied, Takeshi Murata, Seth Price, and Antek Walczak.)

    There is something about the digital. Most people aren’t quite sure what it is. Or what they feel about it. But something.

    In 2001 Lev Manovich said it was a language. For Steven Shaviro, the issue is being connected. Others talk about “cyber” this and “cyber” that. Is the Internet about the search (John Battelle)? Or is it rather, even more primordially, about the information (James Gleick)? Whatever it is, something is afoot.

    What is this something? Given the times in which we live, it is ironic that this term is so rarely defined and even more rarely defined correctly. But the definition is simple: the digital means the one divides into two.

    Digital doesn’t mean machine. It doesn’t mean virtual reality. It doesn’t even mean the computer – there are analog computers after all, like grandfather clocks or slide rules. Digital means the digits: the fingers and toes. And since most of us have a discrete number of fingers and toes, the digital has come to mean, by extension, any mode of representation rooted in individually separate and distinct units. So the natural numbers (1, 2, 3, …) are aptly labeled “digital” because they are separate and distinct, but the arc of a bird in flight is not because it is smooth and continuous. A reel of celluloid film is correctly called “digital” because it contains distinct breaks between each frame, but the photographic frames themselves are not because they record continuously variable chromatic intensities.

    We must stop believing the myth, then, about the digital future versus the analog past. For the digital died its first death in the continuous calculus of Newton and Leibniz, and the curvilinear revolution of the Baroque that came with it. And the digital has suffered a thousand blows since, from the swirling vortexes of nineteenth-century thermodynamics, to the chaos theory of recent decades. The switch from analog computing to digital computing in the middle twentieth century is but a single battle in the multi-millennial skirmish within western culture between the unary and the binary, proportion and distinction, curves and jumps, integration and division – in short, over when and how the one divides into two.

    What would it mean to say that a work of art divides into two? Or to put it another way, what would art look like if it began to meditate on the one dividing into two? I think this is the only way we can truly begin to think about “digital art.” And because of this we shall leave Photoshop, and iMovie, and the Internet and all the digital tools behind us, because interrogating them will not nearly begin to address these questions. Instead look to Ann Craven’s paintings. Or look to the delightful conversation sparked here between Philip Kwame Apagya and Liz Deschenes. Or look to the work of Thomas Julier, even to a piece of his not included in the show, “Architecture Reflecting in Architecture” (2010, made with Cedric Eisenring), which depicts a rectilinear cityscape reflected inside the mirror skins of skyscrapers, just like Saul Bass’s famous title sequence in North By Northwest (1959).

Liz Deschenes, “Green Screen #4” (2001)

    All of these works deal with the question of twoness. But it is twoness only in a very particular sense. This is not the twoness of the doppelganger of the romantic period, or the twoness of the “split mind” of the schizophrenic, and neither is it the twoness of the self/other distinction that so forcefully animated culture and philosophy during the twentieth century, particularly in cultural anthropology and then later in poststructuralism. Rather we see here a twoness of the material, a digitization at the level of the aesthetic regime itself.

    Consider the call and response heard across the works featured here by Apagya and Deschenes. At the most superficial level, one might observe that these are works about superimposition, about compositing. Apagya’s photographs exploit one of the oldest and most useful tricks of picture making: superimpose one layer on top of another layer in order to produce a picture. Painters do this all the time of course, and very early on it became a mainstay of photographic technique (even if it often remained relegated to mere “trick” photography), evident in photomontage, spirit photography, and even the side-by-side compositing techniques of the carte de visite popularized by André-Adolphe-Eugène Disdéri in the 1850s. Recall too that the cinema has made productive use of superimposition, adopting the technique with great facility from the theater and its painted scrims and moving backdrops. (Perhaps the best illustration of this comes at the end of A Night at the Opera [1935], when Harpo Marx goes on a lunatic rampage through the flyloft during the opera’s performance, raising and lowering painted backdrops to great comic effect.) So the more “modern” cinematic techniques of, first, rear screen projection, and then later chromakey (known commonly as the “green screen” or “blue screen” effect), are but a reiteration of the much longer legacy of compositing in image making.

    Deschenes’ “Green Screen #4” points to this broad aesthetic history, as it empties out the content of the image, forcing us to acknowledge the suppressed color itself – in this case green, but any color will work. Hence Deschenes gives us nothing but a pure background, a pure something.

    Allowed to curve gracefully off the wall onto the floor, the green color field resembles the “sweep wall” used commonly in portraiture or fashion photography whenever an artist wishes to erase the lines and shadows of the studio environment. “Green Screen #4” is thus the antithesis of what has remained for many years the signal art work about video chromakey, Peter Campus’ “Three Transitions” (1973). Whereas Campus attempted to draw attention to the visual and spatial paradoxes made possible by chromakey, and even in so doing was forced to hide the effect inside the jittery gaps between images, Deschenes by contrast feels no such anxiety, presenting us with the medium itself, minus any “content” necessary to fuel it, minus the powerful mise en abyme of the Campus video, and so too minus Campus’ mirthless autobiographical staging. If Campus ultimately resolves the relationship between images through a version of montage, Deschenes offers something more like a “divorced digitality” in which no two images are brought into relation at all, only the minimal substrate remains, without input or output.

    The sweep wall is evident too in Apagya’s images, only of a different sort, as the artifice of the various backgrounds – in a nod not so much to fantasy as to kitsch – both fuses with and separates from the foreground subject. Yet what might ultimately unite the works by Apagya and Deschenes is not so much the compositing technique, but a more general reference, albeit oblique but nevertheless crucial, to the fact that such techniques are today entirely quotidian, entirely usual. These are everyday folk techniques through and through. One needs only a web cam and simple software to perform chromakey compositing on a computer, just as one might go to the county fair and have one’s portrait superimposed on the body of a cartoon character.
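(An aside for readers curious about the mechanics: the chromakey compositing described above really is an everyday folk technique, simple enough to sketch in a few lines of code. The Python below is purely illustrative – the function names, the key colour, and the tolerance value are assumptions for the sketch, not part of any work in the exhibition.)

```python
# Toy chromakey ("green screen") compositor: wherever a foreground pixel
# is close enough to the key colour, the background shows through.
# Pixels are (R, G, B) tuples; images are row-major lists of rows.

def colour_distance(c1, c2):
    """Euclidean distance between two RGB colours."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def chromakey(foreground, background, key=(0, 255, 0), tolerance=100):
    """Composite foreground over background, keying out key-coloured pixels."""
    return [
        [bg_px if colour_distance(fg_px, key) < tolerance else fg_px
         for fg_px, bg_px in zip(fg_row, bg_row)]
        for fg_row, bg_row in zip(foreground, background)
    ]

# A 1x3 example: green, red, green against a solid blue backdrop.
GREEN, RED, BLUE = (0, 255, 0), (255, 0, 0), (0, 0, 255)
result = chromakey([[GREEN, RED, GREEN]], [[BLUE, BLUE, BLUE]])
print(result)  # → [[(0, 0, 255), (255, 0, 0), (0, 0, 255)]]
```

The green pixels are replaced by the backdrop while the red pixel survives – the same layering logic, whether done with painted scrims, rear projection, or a webcam.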

What I’m trying to stress here is that there is nothing particularly “technological” about digitality. All that is required is a division from one to two – and by extension from two to three and beyond to the multiple. This is why I see layering as so important, for it spotlights an internal separation within the image. Apagya’s settings are digital, therefore, simply by virtue of the fact that he addresses our eye toward two incompatible aesthetic zones existing within the image: the artifice of a painted backdrop, and the pose of a person in a portrait.

    Certainly the digital computer is “digital” by virtue of being binary, which is to say by virtue of encoding and processing numbers at the lowest levels using base-two mathematics. But that is only the most prosaic and obvious exhibit of its digitality. For the computer is “digital” too in its atomization of the universe, into, for example, a million Facebook profiles, all equally separate and discrete. Or likewise “digital” too in the computer interface itself which splits things irretrievably into cursor and content, window and file, or even, as we see commonly in video games, into heads-up-display and playable world. The one divides into two.

So when clusters of repetition appear across Ann Craven’s paintings, or the iterative layers of the “copy” of the “reconstruction” in the video here by Thomas Julier and Cédric Eisenring, or the accumulations of images that proliferate in Olia Lialina and Dragan Espenschied’s “Comparative History of Classic Animated GIFs and Glitter Graphics” [2007] (a small snapshot of what they have assembled in their spectacular book from 2009 titled Digital Folklore), or elsewhere in works like Oliver Laric’s clipart videos (“787 Cliparts” [2006] and “2000 Cliparts” [2010]), we should not simply recall the famous meditations on copies and repetitions, from Walter Benjamin in 1936 to Gilles Deleuze in 1968, but also a larger backdrop that evokes the very cleavages emanating from western metaphysics itself from Plato onward. For this same metaphysics of division is always already a digital metaphysics as it forever differentiates between subject and object, Being and being, essence and instance, or original and repetition. It shouldn’t come as a surprise that we see here such vivid aesthetic meditations on that same cleavage, whether or not a computer was involved.

    Another perspective on the same question would be to think about appropriation. There is a common way of talking about Internet art that goes roughly as follows: the beginning of net art in the middle to late 1990s was mostly “modernist” in that it tended to reflect back on the possibilities of the new medium, building an aesthetic from the material affordances of code, screen, browser, and jpeg, just as modernists in painting or literature built their own aesthetic style from a reflection on the specific affordances of line, color, tone, or timbre; whereas the second phase of net art, coinciding with “Web 2.0” technologies like blogging and video sharing sites, is altogether more “postmodern” in that it tends to co-opt existing material into recombinant appropriations and remixes. If something like the “WebStalker” web browser or the Jodi.org homepage are emblematic of the first period, then John Michael Boling’s “Guitar Solo Threeway,” Brody Condon’s “Without Sun,” or the Nasty Nets web surfing club, now sadly defunct, are emblematic of the second period.

    I’m not entirely unsatisfied by such a periodization, even if it tends to confuse as many things as it clarifies – not entirely unsatisfied because it indicates that appropriation too is a technique of digitality. As Martin Heidegger signals, by way of his notoriously enigmatic concept Ereignis, western thought and culture was always a process in which a proper relationship of belonging is established in a world, and so too appropriation establishes new relationships of belonging between objects and their contexts, between artists and materials, and between viewers and works of art. (Such is the definition of appropriation after all: to establish a belonging.) This is what I mean when I say that appropriation is a technique of digitality: it calls out a distinction in the object from “where it was prior” to “where it is now,” simply by removing that object from one context of belonging and separating it out into another. That these two contexts are merely different – that something has changed – is evidence enough of the digitality of appropriation. Even when the act of appropriation does not reduplicate the object or rely on multiple sources, as with the artistic ready-made, it still inaugurates a “twoness” in the appropriated object, an asterisk appended to the art work denoting that something is different.

    Takeshi Murata, “Cyborg” (2011)

    Perhaps this is why Takeshi Murata continues his exploration of the multiplicities at the core of digital aesthetics by returning to that age old format, the still life. Is not the still life itself a kind of appropriation, in that it brings together various objects into a relationship of belonging: fig and fowl in the Dutch masters, or here the various detritus of contemporary cyber culture, from cult films to iPhones?

    Because appropriation brings things together it must grapple with a fundamental question. Whatever is brought together must form a relation. These various things must sit side-by-side with each other. Hence one might speak of any grouping of objects in terms of their “parallel” nature, that is to say, in terms of the way in which they maintain their multiple identities in parallel.

    But let us dwell for a moment longer on these agglomerations of things, and in particular their “parallel” composition. By parallel I mean the way in which digital media tend to segregate and divide art into multiple, separate channels. These parallel channels may be quite manifest, as in the separate video feeds that make up the aforementioned “Guitar Solo Threeway,” or they may issue from the lowest levels of the medium, as when video compression codecs divide the moving image into small blocks of pixels that move and morph semi-autonomously within the frame. In fact I have found it useful to speak of this in terms of the “parallel image” in order to differentiate today’s media making from that of a century ago, which Friedrich Kittler and others have chosen to label “serial” after the serial sequences of the film strip, or the rat-ta-tat-tat of a typewriter.

    Thus films like Tatjana Marusic’s “The Memory of a Landscape” (2004) or Takeshi Murata’s “Monster Movie” (2005) are genuinely digital films, for they show parallelity in inscription. Each individual block in the video compression scheme has its own autonomy and is able to write to the screen in parallel with all the other blocks. These are quite literally, then, “multichannel” videos – we might even take a cue from online gaming circles and label them “massively multichannel” videos. They are multichannel not because they require multiple monitors, but because each individual block or “channel” within the image acts as an individual micro video feed. Each color block is its own channel. Thus, the video compression scheme illustrates, through metonymy, how pixel images work in general, and, as I suggest, it also illustrates the larger currents of digitality, for it shows that these images, in order to create “an” image must first proliferate the division of sub-images, which themselves ultimately coalesce into something resembling a whole. In other words, in order to create a “one” they must first bifurcate the single image source into two or more separate images.

    The digital image is thus a cellular and discrete image, consisting of separate channels multiplexed in tandem or triplicate or, greater, into nine, twelve, twenty-four, one hundred, or indeed into a massively parallel image of a virtually infinite visuality.

For me this generates a more appealing explanation for why art and culture have, over the last several decades, developed a growing anxiety over copies, repetitions, simulations, appropriations, reenactments – you name it. It is common to attribute such anxiety to a generalized disenchantment permeating modern life: our culture has lost its aura and can no longer discern an original from a copy due to endless proliferations of simulation. Such an assessment is only partially correct. I say only partially because I am skeptical of the romantic nostalgia that often fuels such pronouncements. For who can demonstrate with certainty that the past carried with it a greater sense of aesthetic integrity, a greater unity in art? Yet the assessment begins to adopt a modicum of sense if we consider it from a different point of view, from the perspective of a generalized digitality. For if we define the digital as “the one dividing into two,” then it would be fitting to witness works of art that proliferate these same dualities and multiplicities. In other words, even if there was a “pure” aesthetic origin it was a digital origin to begin with. And thus one needn’t fret over it having infected our so-called contemporary sensibilities.

    Instead it is important not to be blinded by the technology. But rather to determine that, within a generalized digitality, there must be some kind of differential at play. There must be something different, and without such a differential it is impossible to say that something is something (rather than something else, or indeed rather than nothing). The one must divide into something else. Nothing less and nothing more is required, only a generic difference. And this is our first insight into the “something” of the digital.

    _____

Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2004), Gaming: Essays on Algorithmic Culture (University of Minnesota, 2006), The Interface Effect (Polity, 2012), and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here in 2014. Galloway has recently been writing brief notes on media and digital culture and theory at his blog, on which this post first appeared.

    Back to the essay

  • Chinese Privilege, Gender and Intersectionality in Singapore: A Conversation between Adeline Koh and Sangeetha Thanapal

    Chinese Privilege, Gender and Intersectionality in Singapore: A Conversation between Adeline Koh and Sangeetha Thanapal

     

    Edited by Petra Dierkes-Thrun
    Introduction (by Adeline Koh)

    ~

Singapore, a tiny Southeast Asian nation-state, is well known for its impressive economic growth since its independence in 1965. With its towering skyscrapers, its impressive, well-maintained public transport system, and an unemployment rate that is the envy of most industrialized nations, the small country is often referenced as a model postcolonial state.

Despite these impressive economic strides, many of the racial tensions that have their roots in Singapore’s colonial history continue to manifest today, especially in relation to gender. Formerly a British colony, Singapore boasts a multi-racial, multi-ethnic population, most of which is classified into four major groups by the state: Chinese, Malay, Indian and ‘Other’. Unlike in Singapore’s neighboring countries Malaysia and Indonesia, the ethnic Chinese population in Singapore is the majority ethnic group. These four categories are also constantly being challenged and nuanced by the large number of foreigners who work and study in Singapore. Constructions of ethnicities are highly inflected by gender roles in the four major ethnic groups and nuanced by the constant influx of migrants in the country, which includes mainland Chinese ‘study mamas’ (mothers accompanying their young children to study in Singapore), female domestic workers from the Philippines and Indonesia, and male construction workers from China, India and Bangladesh.

Singapore’s ethnic Chinese population enjoys the most economic wealth and social status in this small country, which manifests itself in political and material privilege. Despite the fact that there are four officially sanctioned state languages (English, Mandarin Chinese, Malay and Tamil), television screens on public transport often broadcast shows only in English or Mandarin; increasingly, customer service representatives will be fluent in Mandarin but not the other two official languages; and there are multiple reports of taxi drivers refusing to answer calls in areas with larger minority populations. National beauty pageants also tend to celebrate a Chinese ideal of feminine beauty, as opposed to other ethnicities, so that it has become exceedingly rare for a minority contestant to win these competitions.

Scholarly work on race and ethnicity in Singapore seldom discusses this inflection of racial privilege with gender, an extremely important intersection that nuances the structure of minority identity in the country. In this interview, I speak with Sangeetha Thanapal, an Indian Singaporean woman who first introduced the controversial concept of ‘Chinese privilege’ in Singapore. Thanapal holds that the structural racial privilege of ethnic Chinese Singaporeans is in some ways analogous to White privilege in Europe, the United States, Australia and New Zealand, despite the important differences in the historical, social, political, and geographic circumstances and developments of these two privileges. Thanapal’s provocative work and the virulent responses it engendered (mainly from Singaporean Chinese) inspired me to write a Medium essay titled ‘To My Dear Fellow Singapore Chinese: Shut Up When A Minority Is Talking About Race’ (which has since garnered over 105,000 page views and 56 recommends). We are now collaborating on a Medium Collection on Chinese Privilege, which seeks to bring to light the stories of minority voices in Singapore.

Chinese privilege in Singapore is unique because it occurs outside of mainland China and the territories it has historically controlled. In this manner, our interview is intended as the beginning of an examination of a larger Chinese privilege, with its own histories of colonialism and migratory communities. We note that in order to zero in on the current racial and political structures in Singapore, as well as specifically on the complex role of gender, our interview does not focus on the historical development of this privilege per se, or on the obviously important, historically motivated distinctions between different groups of Chinese in Singapore. In the nineteenth century, under British colonialism, southern Chinese immigrated from China to Singapore and Malaysia to escape famine and the effects of the Opium Wars back home, and arrived in a colony in which they were brutally subjugated: the majority of male Chinese immigrants experienced great abuse under a system of indentured labor (the “coolie” system), and many of the (comparatively few) female immigrants were forced into prostitution. While this interview is intended to open up a conversation about monolithic Singaporean Chinese privilege today, we plan a more comprehensive critical historical genealogy of comparative Chinese privilege in our future work in order to elaborate upon these distinctions and developments.

Furthermore, future work should pursue two additional important lines of inquiry: first, a clear conceptual delineation between Chinese-speaking and English-speaking Singaporeans and the different sorts of privileges which they encounter; and second, a comparison between the historical forces driving the subalternity of the indigenous Malays and those driving the subalternity of the diasporic Indian population. Like the Chinese, many contemporary Indian Singaporeans arrived in the colony as indentured laborers, as well as convicts, traders, and sepoys under the British military. Which historical and material conditions allowed the Chinese to appropriate the forms of privilege they enjoy in Singapore today, while Indians could not join or rival them in this privilege in their own Singaporean experience? Further, we want to investigate the sorts of cultural imaginaries that are used in the creation of Singaporean Chinese privilege and its connection with reinventions of mainland Chinese chauvinism (such as in the Chinese term for China, Zhong Guo, meaning Middle Kingdom, center of the world between heaven and hell). We also want to continue building on this concept of Chinese privilege through a simultaneous examination of the Tamil-Hindu internal prejudices of the Indian community in Singapore, as well as its relationship with the Malay community.

    In many respects, then, this interview is simply a first step towards a larger, sorely needed conversation about race, gender, and privilege in Singapore. We hope it will inspire others to build on our suggested research trajectories and also develop new ones of their own.

    *******

    Adeline Koh: Sangeetha, thanks so much for speaking with me today. To begin with, could you tell me about your experience being an Indian Singaporean?

Sangeetha Thanapal: To be Indian Singaporean is to carry a number of identities, not all of which work in concert with each other. We are expected to keep in touch with our root culture, language and traditions, but never to engage in any kind of ethnic chauvinism. We are expected to be bilingual cosmopolitan citizens of the world, while constantly being grounded in Indian culture. Those who manage to do this effectively are invariably performing a form of code switching between their traditional Indian language-speaking identities and their English-speaking, modern ones. We are told we have to be firmly established in our cultures, but people who follow this advice are seen as provincial. To speak your mother tongue well is to invite questions about how long ago you immigrated from India. It is this tension that we have to constantly negotiate, and many of us cannot or refuse to do so. To be Indian is to have my ethnicity matter in all things, but to be Singaporean is not to have it matter at all, supposedly. It is ironic and–given the inability of the state to adequately marry these two binaries–unsurprising that race and ethnicity are difficult concepts to examine and contend with in Singapore.

    AK: Could you elaborate on this?

ST: The racialism paradox in Singapore makes race front and center of your identity, while at the same time denying that race has anything to do with the obvious differences in people’s treatment. One example is the Singaporean Identity Card, which states your ethnicity.1 This identity card is akin to a Social Security number in the United States (used to apply for housing, bank loans, even something as simple as a phone number), and hence including this information makes someone’s racial identity a dominant factor. It is not hard to imagine the many ways in which this can disadvantage minorities. Even job applications ask for your ethnicity, a practice that is illegal in many countries. Educational achievements are viewed through the lens of race, not gender or class.2 Why does the state constantly racialize us and pit us against one another? This also obfuscates the intertwinement of race and class. For instance, the state says that Malays are underperforming3 in academics, leaving out the constant marginalization that produces such class disparities. The Singaporean pledge literally says, ‘regardless of race, language or religion,’ implying that meritocracy trumps race in this alleged land of opportunity. Supposedly, hard work comes with the same opportunities for all. The government has a governance principle: ‘Work for reward, Reward for work.’4 Meritocracy is a neoliberal lie that tends to ignore the systemic inequalities that have strong material effects on people’s ability to live and work in Singapore. It places the blame for failure on those who did not work hard enough or take full advantage of the choices they had, conveniently forgetting that some people did not have a diverse range of choices to begin with.

AK: It almost seems as though minority Singaporeans have to adopt what W.E.B. Du Bois called a ‘double consciousness’–always having to think in terms of the language and social norms of the dominant group while maintaining their own cultural space. What do you think?

ST: When Du Bois speaks of double consciousness, he is referring to people of colour’s, specifically Black people’s, constant negotiation of conflicting racial identities, often a result of racial oppression. In The Souls of Black Folk, he writes that Black people feel ‘twoness . . . two souls, two thoughts, two unreconciled strivings, two warring ideals in one dark body…’5 It is the struggle between our view of ourselves, versus the dominant racial narrative. Du Bois was speaking to people sharing an African history and heritage, of course, and in that context, he also addressed White supremacy and its implication in such double consciousness. In Singapore, Chinese supremacy and institutionalized racism against minorities have resulted in a similar double consciousness. We constantly think about and cater to Chinese people, as they have institutionalized power. In Singapore, the government regularly emphasizes the need for the different ethnic groups to stay in touch with their cultures and traditions, so it is not just Chinese supremacy itself that’s responsible. Personally, I think they don’t actually object to Indians and Malays giving up their cultures; on the contrary: they would probably love it if many of us gave up our cultures to assimilate through marriage or learning Mandarin, for example. The government finds Malay culture a hindrance to its economic growth and would like to spread more ‘Chinese’ attitudes of hard work and personal drive. I think the government also wants Chinese people to be steeped in their traditions and is afraid of encroaching westernization. It only cares about keeping minorities’ traditions as long as they are a marketable tourism commodity, not because they are valuable on their own. The government needs to keep up its multi-racial facade for tourists, who feel like coming to Singapore means that they can access authentic Chinese, Malay and Indian culture, all in the same place.

    AK: Interesting. Dubois talks about ‘twoness’ in relation to race. How would this be further refined in relation to gender? Can you describe the difference between being an Indian Singaporean man and a woman?

    ST: To be an Indian Singaporean woman is to be at the very bottom of the totem pole. Patriarchy and the ensuing male privilege mean that while Indian men are discriminated against for being Indian, they are also treated better than Indian women, both by the majority Chinese community and within the Singaporean Indian community. Indian women are still fairly restricted in their movements and their lives, expected to be both the modern worker and the traditional housewife. Indian men retain their patriarchal freedom. In Singapore, the hierarchy of race puts the Chinese at the top, Malays in the middle, and Indians at the bottom. Some have argued that Indians have it better than Malays in Singapore, which I think is a valid argument, depending on context. Indians are generally better off than the Malay community in terms of education and economic status, and one might even say that their minority class privilege intersects with the majority Chinese’s.6 In 2010, the average household income for Indians was almost twice as high as in Malay households.7 There is, however, a lot more research regarding Malay marginalization.8 Because of the diasporic Tamil Hindu immigrants’ relatively high socio-economic standing, many people do not think there is discrimination against our ethnic group. What is important is that instead of seeking to compete for attention for our oppression, we study the Chinese-dominated state’s specific ways of enacting it against both communities, and validate differing experiences while encouraging a new solidarity.

    As mentioned above, women of different races are treated differently. This kind of colourism and inter-POC (people of colour) policing of skin colour is not new or unique to Singapore, of course. A lot of it is internalized White supremacy: the lighter you are, the higher on the hierarchy you stand. Colourism is a serious problem within the Indian community itself, and, to a lesser extent, within the Malay community as well. White supremacy and Chinese supremacy function in combination here. Darker-skinned Indian and Malay women are constantly bombarded with messages that their skin colour makes them unattractive. Our body shapes, which are naturally curvier, are compared to skinnier, fairer Chinese women’s, and found inadequate. In such body policing, race and gender again intersect and amplify each other. The communities themselves are responsible here, but so are the state and the media. In the 2013 Miss Singapore Universe pageant, there were no Indian or Malay women in the top twenty. Since 1966, which is when Singapore started being represented at the Miss Universe pageant, Malay or Indian women have won the title at home a grand total of four times.9 In 2014, for only the second time ever, an Indian woman won Miss Singapore Universe. She was inundated with disparaging online comments about her face and skin colour.10

    Discrimination against Indian men is mitigated by their gender. Not so for women: whatever racial discrimination they undergo is made worse still by their being female. William Keng Mun Lee of Lingnan University argues that in Singapore, women in general are in lower-paying jobs across both core and periphery. Given the small differences in the educational standards of males and females in Singapore, this observation leads him to theorize that structural factors such as sexism and discrimination are at work. Interestingly, however, he says that it is also due to ‘Chinese male workers’ success in protecting their economic success by excluding females from high-paying jobs.’11 Chinese males, not Singaporean men in general, hold wealth and power in the core industries in Singapore. So if Chinese females are being excluded for being women, how much worse is the situation for Indian and Malay women?

    AK: Let’s talk a little bit about the concept you’ve developed, ‘Chinese privilege.’ It’s a terrific concept that can be easily used to explain social inequity in Singapore. How did you come up with the concept of Chinese privilege?

    ST: I remember the exact moment. I was reading bell hooks’ ‘Beloved Community: A World Without Racism.’ I deeply sympathized with what she was saying, even though she was speaking about a different context. I performed a simple experiment. I took a paragraph I particularly loved and substituted the word ‘Chinese’ for ‘White.’ I read it back to myself, and the moment I realized that that paragraph could have been written about Singapore, not the U.S., was when I understood that racial privilege is not simply a White phenomenon. I don’t mean that I had never realized it before, only that I had lacked the language to express it in a way that wholly encompassed the experience not as singular, but as universal to minorities here. In Killing Rage: Ending Racism, hooks speaks of ‘supremacist attitudes that permeate every aspect of […] culture’ while ‘most white folks unconsciously absorbing the ideology of white supremacy […] do not realize this socialisation is taking place [… and] feel they are not racist.’12

    Now, I am not going to make the claim that hooks’s ideas are wholly transferable to the Singaporean context. That would be an undue appropriation of the African American experience and erase the specificity of their oppression. But there are enough similarities for me to associate the two phenomena in my mind: the daily microaggressions that minorities experience, employment discrimination, the paradoxical, simultaneous derision and appropriation of their culture.

    While I realize that the concept of White privilege has its own context and history, it really helped me understand the situation in Singapore by analogy. Chinese Supremacist attitudes permeate our society. The PAP believes in keeping the Chinese and their Confucian ethic at the helm, supposedly for our economic growth and success. So-called Special Assistance Plan schools use all taxpayers’ money to pay for Chinese students’ opportunities only, with the argument that this practice enables better trade with China in the future.13 The media constantly laud China as the world’s next superpower, even though economists predict its one-child policy will cause it to fall behind an ever-burgeoning Indian state. And the state continues to make racist comments such as the following: ‘We could not have held the society together if we had not made adjustments to the system that gives the Malays, although they are not as hardworking and capable as the other races, a fair share of the cake.’14 Religion, specifically Islam, is not spared from racist attacks: ‘In those days, you didn’t have a school tuckshop, so you bought two cents of nasi lemak and you ate it. And there was a kway teow man and so on. But now, you go to schools with Malay and Chinese, there’s a halal and non-halal segment and so too, the universities. And they tend to sit separately, not to be contaminated. All that becomes a social divide. Now I’m not saying right or wrong, I’m saying that’s the demands of the religion but the consequences are a veil across and I think it was designed to be so. Islam is exclusive.’15

    Chinese people do not see such comments as racist. Most people see them as normal–common wisdom. If minorities ever raise their voices, they are told to shut up and sit down.

    I started doing a similar analogy exercise with other texts after my experience with bell hooks. In Privilege, Power and Difference, Allan G. Johnson says:

    Being able to command the attention of lower-status individuals without having to give it in return is a key aspect of privilege. African Americans for example, have to pay close attention to whites and white culture and get to know them well enough to avoid displeasing them, since whites control jobs, schools, government, the police, and most other resources and sources of power. White privilege gives little reason to pay attention to African Americans or how White privilege affects them.16

    If you pay attention to minorities in Singapore, the analogy rings so true. We know about Chinese culture, some of us learn Mandarin to make ourselves more employable, we try to understand how the Chinese work, we give in to them when they speak Mandarin around us, never asking them to be sensitive towards us. We know that knowledge of them will help us; they, on the other hand, know very little about our cultures, religions or languages. They do not have to: not knowing it does not affect them in a material way. Reading about the African American experience triggered these important insights about our own situation for me.

    AK: Could you say a little more about how you define Chinese privilege? Does Chinese privilege take place around the world, or is it specific to Singapore?

    ST: I define Chinese privilege similarly to White privilege, again by analogy rather than wholesale transference of one distinct historical context to another. White privilege is invisible and normal to those who have it, which makes it hard to discuss because people rarely see how they are being privileged. It goes beyond advantages people enjoy because of their race. It is also the unearned power the system confers by virtue of your race alone. It is a set of institutional benefits, with greater access to power and resources and opportunity, that are closed off to those who do not have it. In the same vein, these advantages are bestowed upon Chinese Singaporeans, regardless of any other intersectional identity they carry. By virtue of being Chinese in Singapore, they start life higher on the scale than minorities do. They are the beneficiaries of a system of racial superiority, which is why when I talk about the country I call it a Chinese Supremacist state.

    Many see Chinese privilege in Singapore as the root cause of Singapore’s economic strength. Lee Kuan Yew is the only man to have ever held three political titles in the government. That alone should signal his significance. He was Singapore’s first Prime Minister, and as such, the chief architect behind modern Singapore. He later became Senior Minister, a title he held until his successor Goh Chok Tong assumed the position. To keep him in power, he was then given the title of Minister Mentor in 2004. He had been in power since 1959, and only stepped down as Prime Minister in 1990, making him the world’s second-longest-serving head of government, after Fidel Castro. He is the man who has most impacted Singapore with his policies, and his words continue to hold enormous power and clout. In 1989, he commented that Chinese immigration from Hong Kong to Singapore was necessary, given the low birth rates amongst Chinese Singaporeans: without the Chinese ‘there will be a shift in the economy, both the economic performance and the political backdrop which makes that economic performance possible.’17 Chinese privilege means that problems within the Chinese community are framed as national crises, while problems within minority communities are blamed on culture or genetics, and left to the communities themselves to handle.

    Chinese privilege in Singapore falls into a unique category with Taiwan (and China, of course). Chinese privilege cannot exist in the U.S. or in Europe because the Chinese lack institutionalized economic, social and political power in those places. In Singapore, Chinese Singaporeans have power in every facet of life; it is systemic and systematic.

    AK: For me as a Chinese Singaporean, your analysis makes a lot of sense. How does this racial concept of privilege intertwine with other intersectional oppressions, such as gender?

    ST: In 2012, a survey found that women hold just 6.9 per cent of directorships. Moreover, the joint study with advocacy group BoardAgender found that 61.3 per cent of the more than 730 companies listed on the Singapore exchange do not have a single female member on their boards.18 The survey does not break it down further by race, making the assumption that all women in Singapore are discriminated against only on the basis of their gender, not their race. Singapore shows the same pattern of gender representation as other places that tend to erase race in favor of gender. In the West, White women often stand in for ‘all women,’ even though they actually earn more than Black and Latino men in the US,19 just as Chinese women are seen as representatives of all women in Singapore, including minorities.

    Recently, an article cited a survey of Singaporean women’s under-representation on company boards. ‘Companies with more diversity in boardrooms are more profitable, but Singapore doesn’t fare so well – 56.5% of the companies surveyed had all-male board members.’20 It was a matter of much discussion. The article itself concluded that ‘we recommend empowering board nominating committees to cast their net wider and pro-actively look for women candidates.’ However, the article also mentions that ‘59% of the boards were of single ethnicity.’ No discussions, no conversations online or in the mainstream media ensued about this, and the article does not even seem to pick up on the potential impact for minorities, let alone minority women. If women’s rights groups such as AWARE (Association of Women for Action and Research) are solely focussed on gender representation, not gender in conjunction with racial imbalance, do we need to wonder why minorities on company boards are so few in number? Who are the women actually being represented here? Clearly, Chinese women are the default here. Given the intersection of gender and race, Indian and Malay women are at a double disadvantage. But that conversation does not happen.

    Feminism in Singapore is about making Chinese women equal to Chinese men, not about equality for all women. Dismantling the Chinese patriarchal structure itself would mean that Chinese women would have to give up their racial power and privilege, too, and they do not want to do that. Chinese women need to realise that they actually have better opportunities than many minority men here. As minority women, we are far more attuned to racism and sexism than Chinese women are, because we fight both those intersections every day, and we see how we are treated not as women, but specifically as Malay or Indian women.

    In the recent Singapore Literature Prize awards, all the winners were male, and a furor about women’s exclusion from these prestigious awards broke out.21 Again, however, there is no furor over the fact that no minority person has ever won the English prize for fiction. The closest calls were the playwright Haresh Sharma in 1993 and the poet Alfian Sa’at, who was awarded a commendation prize in 1998 (both before the prize was categorized into languages). This year, there was not even one minority on the English short list, either for fiction or non-fiction. Chinese writers are fully represented in both the Chinese and English categories. This is what Chinese privilege looks like in everyday life. There was only one, just one, minority woman in the entire English shortlist across all three categories, and she was in the poetry category.22 Of course, she did not win.

    Chinese women clearly realize the gender disparity in Singapore. But since they see themselves as the only women worth talking about in Singapore, they do not focus on the effects of racial discrimination against other women in Singapore.

    AK: I have seen exactly what you mean–Chinese feminists who remain silent when their minority sisters and brothers are being discriminated against. It makes me so mad. For those who are new to the concept, can you please elaborate a little bit more on the effects of Chinese privilege, and give some concrete examples about how Chinese privilege affects minorities in Singapore?

    ST: Privilege and oppression are two sides of the same coin. If one exists, the other does, too. Chinese privilege means that Singaporean minorities are oppressed. Within minority groups themselves, there are subtle differences. Light-skinned North Indians are treated marginally better than darker-skinned South Indians. ‘Shit-skin’ is a slur the Chinese often use to describe us. This further intersects with class, as class privilege often mediates racial oppression. Higher-class Indians are treated better, and are often co-opted into Chinese supremacy, or they assimilate themselves by choice, by marrying Chinese partners, for example. Those the government co-opts become exemplary tokens of our so-called multi-culturalism — but they might as well be Chinese. S Dhanabalan, once almost tapped to be the next Prime Minister of Singapore, is of Indian Tamil descent and was a prominent minority in the government.23 He was supposed to represent the Tamil-Indian population in Singapore. Yet he has a Chinese wife and is Christian, while most Indians in Singapore are Hindus. There is such a lack of proper representation of minorities in Parliament. K Shanmugam, another Minister, was instrumental in the state policing of religious Hindu expressions, such as Thaipusam,24 where he spoke on behalf of the government, all the while claiming to represent Indian Singaporeans. Elite Indians buy into the state rhetoric and enforce it against their own people. The complicity of those who could have done something for the community with the majority Chinese interest ensures only farcical representation, designed to allow us only a voice compatible with the government line.

    AK: The issue of interracial marriage is an interesting one. How do you understand Chinese privilege in relation to marriage and relationships?

    ST: In recent years, the number of interracial marriages in Singapore has risen. This is to be expected–after all, we are a multiracial country with a multitude of races and cultures. In 2012, one in five marriages was interethnic.25 Singapore prides itself on being a postracial society, and within the Indian community, there has indeed been a strong increase of Indian men dating and marrying Chinese women. And yet, the reverse is rarely true–Chinese men do not usually date or marry Indian women. It is also important to realize that the Indian men who marry Chinese women are by and large extremely well-educated members of the higher Indian-Singaporean socioeconomic classes. Chinese women are not marrying blue-collar Indian men, but rather those considered most eligible. Again, race and class issues are intertwined here. Fanon perhaps explains this phenomenon best in Black Skin, White Masks: for Black men, relationships with white women are often about the need for recognition and, indirectly, the desire for assimilation.26 I believe this is true in the Singapore context. Indian men who date Chinese women are desperate to assimilate. They instinctively realize the privilege of being Chinese, and unable to access it any other way, aspire to marry a Chinese woman. They do not have to experience racism as much when their wives’ Chinese privilege protects them, and it gives them access to opportunities that are usually reserved for Chinese people. They are effectively deracializing themselves.

    Heterosexual patriarchy is also at work here. Women are expected to marry up wherever possible. Indian women occupy the lowest rung of the Singaporean race hierarchy, and Chinese men occupy the highest. For a Chinese man to date and marry an Indian woman means to marry far beneath his status. Chinese women of a middling socio-economic class can move up a class by marrying the wealthiest Indian men in the country. These Indian men, lacking racial privilege, which is itself a ‘property right’,27 can in turn move up the racial hierarchy by gaining access to their wives’ racial privilege. Chinese men gain nothing and lose everything by marrying an Indian woman, while Indian men gain access to racial privilege, and Chinese women to class privilege, by marrying rich Indian men. But what about Indian women? Singapore does not break down interracial marriages by gender, which obfuscates this racist situation, but the number of people needing to marry into Chineseness shows how powerless the minority communities really are. Indian women like me do not usually have access to the same opportunities Indian men have. Again, we observe the complex intertwinement of sexual, class, and race discrimination here, and the internal paradoxes and contradictions of official postracial, egalitarian Singaporean rhetoric are obvious.

    AK: One interesting theme repeated here is that representation is either always Chinese or White. What do you think is the relationship of Chineseness to Whiteness in Singapore?

    ST: Generally speaking, I think that Chinese Singaporeans do not seem to struggle with reconciling Whiteness and Chineseness. I believe this is the case because Chineseness is seen as equal, and in certain aspects even superior, to Whiteness. Whiteness is liked, welcomed, and used as a stamp of approval, but the liberal political ‘Western values’ frequently clash with our ‘Asian values.’ Chinese people tend to see themselves as victims of White racism (while at the same time refusing to recognize their own racism regarding other minorities in Singapore, as I outlined above). White expatriates hold well-paying jobs and live in the most expensive apartments in Singapore. They are treated very well everywhere they go in Singapore, because the ‘White is better’ mindset still exists here. Chineseness functions the same way in Singapore as Whiteness, sometimes even more so, since the Chinese are the true owners of power here while White people are long-time beneficiaries of that power.

    As a person of colour living in a supposedly decolonized Singapore, I would say that what makes our struggles markedly different from minorities’ in the West is that we have to deal with Whiteness on top of Chinese supremacy. So we experience a double racial oppression. I often say minorities here have been colonised twice, once by the British, and once again by the Chinese. What other decolonised state has a completely alien population controlling political and economic power, while the formerly colonised indigenous people remain continuously marginalized? The language of Critical Race Theory can only take us so far in Singapore. We need to start coining our own terminology and framework for talking about racism in Singapore. This conversation has only just begun.

    AK: When you complicate this issue of privilege by bringing gender into the picture, how do things shift for women, regarding White privilege and Chinese privilege?

    ST: Intersections always make things complicated, especially for people who carry multiple oppressed identities, and so these shifts are difficult to quantify. White women have more privilege than Chinese women. Chinese women have more privilege than Indian and Malay women. Even among Indian and Malay women, the comparative amount of privilege is hard to pin down. Indians in Singapore are by and large Tamil, the darkest Indians from the subcontinent. Malay women are generally fairer, a light brown compared to the dark brown of most Indian women here. Due to colourism, Malay women might thus have a tad more privilege. But at the same time, this can be negated by something as simple as wearing the hijab. Singapore is suspicious of Malay Muslims, and Malay women who wear the hijab are seen as conservative and oppressed. Indian women, however, are not seen as religiously fanatical, even if they are in ethnic attire, as Hinduism is not seen as the same kind of threat as Islam.

    AK: Can you talk about people who inhabit in-between racial spaces, for example people who might be of one ethnicity but can pass for another? How does racism affect them in Singapore?

    ST: Passing is a mixed bag, and it is present across all intersectionally oppressed identities. To put it simply, passing is the ability to ‘pass’ as your oppressors, even though you carry an identity and occupy a space as the ‘other.’ There are many people of mixed race in Singapore, especially the group called ‘Chindians’, a term for people who are both Indian and Chinese, some of whom can pass for Chinese and thus have access to Chinese privilege. People like ‘Chindians’ can effectively move between the worlds of oppressors and oppressed. It is really difficult for people who pass, because they are always fighting to have their entire identities validated.

    AK: We are nearing the end of our conversation. What messages would you like to give to young minorities in Singapore?

    ST: Audre Lorde said that our silence will not protect us. This is true no matter who we are. When you are silent, you are complicit. Inaction against oppression is collusion with oppression.

    To young minorities in Singapore, I would say: you can start small. Call out Chinese people when they behave micro-aggressively. Call out our own people when we show stereotypical prejudices towards Malays, Indians and other minorities. Many Indians believe the Malays are better positioned because of their supposedly free education, even though that policy actually ended a long time ago. Malays believe the Indians are the preferred minority, because there are more high-profile and prominent Indians, and because the state compares Indians favorably to Malays in order to blame Malays for their alleged lack of progress. Indians are merely the token minority, there only because the state needs to have some public minorities to salvage its international reputation. Indians see Malays as having some sort of special advantage because the state protects their religion, and because they are indigenous to this part of the world. The Chinese supremacist state uses such highly problematic comparisons for its own ends. It wants to keep us from finding solidarity with each other. It wants us to be suspicious of each other. But divide-and-rule tactics only work when we buy the Chinese supremacist state’s lines of thinking and argument.

    Zora Neale Hurston said that when you are silent, they will kill you and say you enjoyed it. Every time you remain silent, they believe they have the right to treat you this way, and worse, that you want to be treated this way. Again, to the Singaporean youth I would say, do not be afraid, and do not be silent. This country has gone through four generations since independence, and with each, it has become less willing to talk about its serious race problems. That needs to change. The conversation needs to happen. You cannot sit back and let a few of us take all the hits. Hit us long and hard enough, and without support from our own communities, we will inevitably cower, too. It is unconscionable for you to let others fight your oppression while you wait to reap the rewards of what may come. Realise that we can only do this together, or we cannot do it at all.
    _____

    Notes:

    1. “National Registration Identity Card.” Wikipedia. Accessed Jan. 15, 2015. Back to the essay

    2. “Ministry of Education, Singapore: Press Releases – Performance by Ethnic Group in National Examinations 2002-2011.” Oct. 29, 2012. Accessed Feb. 22, 2015. http://www.moe.gov.sg/media/press/2012/10/performance-by-ethnic-group-in.php. Back to the essay

    3. Zakir Hussain, “No Short Cut to Raising Malays’ Maths Grades,” in The Straits Times, Dec. 18, 2009. Accessed Feb. 22, 2015. http://news.asiaone.com/News/Education/Story/A1Story20091214-185790.html Back to the essay

    4. Hsien Loong Lee, “Singapore’s Four Principles Of Governance.” Civil Service College, Nov. 1, 2004. Accessed Feb. 22, 2015. https://www.cscollege.gov.sg/knowledge/ethos/ethos november 2004/pages/singapore four principles of governance.aspx Back to the essay

    5. W. E. B. DuBois, The Souls of Black Folk: Essays and Sketches (Charlottesville: University of Virginia Library, 1996), 9. Back to the essay

    6. “Education Statistics Digest.” Ministry of Education, Singapore, Jan. 1, 2013. Accessed Feb. 22, 2015. http://www.moe.gov.sg/education/education-statistics-digest/files/esd-2013.pdf Back to the essay

    7. “Demographics of Singapore.” Wikipedia. Accessed Jan. 14, 2015. Back to the essay

    8. See L. Rahim, The Singapore Dilemma: The Political and Educational Marginality of the Malay Community (Oxford University Press, 2001) for an excellent discussion on oppression of the Malay community. Back to the essay

    9. “Miss Singapore Universe.” Wikipedia. Accessed Jan. 16, 2015. Back to the essay

    10. Surekha Yadav, “Is Singapore a Racist Country?” Malay Mail Online, Aug. 30, 2014. Accessed Feb. 22, 2015. http://www.themalaymailonline.com/opinion/surekha-a-yadav/article/is-singapore-a-racist-country Back to the essay

    11. William Keng Mun Lee, “Gender Inequality And Discrimination In Singapore,” in Journal of Contemporary Asia 28, no. 4 (1998): 484-97. Back to the essay

    12. bell hooks, Killing Rage: Ending Racism (New York: Henry Holt, 1995), 267. Back to the essay

    13. “Special Assistance Plan.” Wikipedia. Accessed Jan. 15, 2015. Back to the essay

    14. Tom Plate, “The Fox and the Hedgehog (Not a Disney Movie),” in Giants of Asia: Conversations with Lee Kuan Yew, Citizen Singapore: How to Build a Nation, 2nd ed. (Singapore: Marshall Cavendish International [Asia] Pte, 2013), 61. Back to the essay

    15. Kuan Yew Lee and Fook Kwang Han, Lee Kuan Yew: Hard Truths to Keep Singapore Going, 1st ed. (Singapore: Straits Times, 2011), 230. Back to the essay

    16. Allan G. Johnson, Privilege, Power, and Difference. 2nd ed. (Boston, Mass.: McGraw-Hill, 2006), 24. Back to the essay

    17. Sudhir Thomas Vadaketh, Floating on a Malayan Breeze: Travels in Malaysia and Singapore (Singapore: NUS Press, 2012), 194. Back to the essay

    18. Joe Havely, “Singapore Lags in Board Diversity.” Think Business, National University of Singapore Business School, Mar. 7, 2012. Accessed Feb. 22, 2015. http://thinkbusiness.nus.edu/articles/item/7-singapore-boardroom-diversity Back to the essay

    19. Derek Thompson, “The Workforce Is Even More Divided by Race Than You Think,” in The Atlantic, Nov. 6, 2013. Accessed Jan. 15, 2015. Back to the essay

    20. Yen Nee Lee, “Companies with More Diverse Boards Fare Better: Study.” TODAY Online, Sept. 29, 2014. Accessed Feb. 22, 2015. http://tablet.todayonline.com/business/companies-more-diverse-boards-fare-better-study Back to the essay

    21. Corrie Tan, “Gender Bias Allegations over Singapore Literature Prize English Poetry Results,” Books News & Top Stories, in The Straits Times, Nov. 6, 2014. Accessed Feb. 23, 2015. http://www.straitstimes.com/lifestyle/books/story/gender-bias-allegations-over-singapore-literature-prize-english-poetry-results Back to the essay

    22. “Singapore Literature Prize,” Wikipedia. Accessed Jan. 16, 2015. Back to the essay

    23. “S. Dhanabalan,” Wikipedia. Accessed Feb. 23, 2015. Back to the essay

    24. “The Uproar Over Thaipusam.” The Online Citizen, Jan. 21, 2011. Accessed Feb. 23, 2015. http://www.theonlinecitizen.com/2011/01/the-uproar-over-thaipusam/ Back to the essay

    25. Theresa Tan, “More Mixed Unions, Remarriages Based on Latest Marriage Data,” in The Sunday Times, Sept. 30, 2012, Special Reports section. Back to the essay

    26. Frantz Fanon, “The Man of Color and the White Woman,” in Black Skin, White Masks (New York: Grove, 2008), 45-60. Back to the essay

    27. Cheryl I. Harris, “Whiteness as Property,” in Harvard Law Review 106, no. 8 (1993): 1707-1791. Back to the essay

  • Abecedarium Anthology: The Cambridge Introduction to Edward W. Said

    Abecedarium Anthology: The Cambridge Introduction to Edward W. Said

    a review by Reshmi Mukherjee
    ~
    Conor McCarthy presents a crisp and detailed overview of Edward W. Said’s life, scholarship, interdisciplinary training, and critical thought processes for novice readers of his works. Additionally, the use of simple language and lucid sentence construction has the potential to attract audiences from non-literary backgrounds as well. These readers may be interested in knowing what Michael Sprinker called “the very ideal of the cosmopolitan intellectual that remains so central to the humanities’ self-image to this day.”1 This book is therefore unlike most critical enquiries into Said’s work in that it caters to readers across disciplinary boundaries.

    The content of the book is not new, but the form – the narrative technique – is Saidian in nature. McCarthy, an ardent critic of Said, analyzes his written works in relation to “the events and circumstances entailed by and expressed in it.”2 Illustrating the relationship of a critic to the text, as explained in Said’s The World, the Text, and the Critic, McCarthy reads Said’s literary, political, and critical works as one continuous narrative, and in relation to the key terms of filiation and affiliation. By filiation, Said means the writer’s natural and organic connection by “inherited location,” while affiliation is a “network of relationships that human beings make consciously […] often to replace the loss of filiative relations in modern society.”3 A writer’s work, the text, is therefore a conglomeration of both filiative and affiliative connections and hence a “worldly” phenomenon. Accordingly, McCarthy situates Said’s identity as a scholar and humanist as intrinsically connected to his socio-political and cultural reality.

    The book is divided into four chapters: 1) Introduction, life, work, 2) Influences, 3) Works, and 4) Reception. The introduction covers the itinerary of Said’s life, including the obsequies paid after his death on 25 September 2003. In so doing, McCarthy gives insight into the complex historical and filial conjuncture that shaped Said’s persona, including his anxiety at being exiled and nation-less, a sentiment that is echoed in his literary works, critical thinking, and political engagement with the Palestinian cause. This section also pays special attention to Said’s childhood and adolescence as oscillating between different emotional conditions. Said was vexed by contrasting but demanding parents, a constant need to please them, displacement and relocation from Jerusalem to Cairo and then to the United States, and the task of negotiating the paradoxical meaning of his name, which he called “foolishly English.” Parts of this section reiterate Said’s memoir Out of Place, but all of the information is relevant for readers to understand Said’s “innate sense of a divided but reflexive self.”4


    The second chapter explains the polarized opinions about Said’s academic work. In so doing, McCarthy helps readers understand Said’s works and his thinking processes. Reviewing the sheer volume and depth of Said’s scholarship, and detailing the different schools of thought that influenced him – Romance philology, Marxism, phenomenology, structuralism, poststructuralism, musicology – McCarthy notes that Said did not accept their arguments unconditionally. While Said was influenced by these discourses, he questioned their methodology and application in the real world, and he resisted any easy disciplinary categorization of his works. In particular, McCarthy’s reading focuses on Said’s complex relationships with the Western Marxist tradition and with poststructuralism. While Said was critical of Marxism’s rigid adherence to putatively radical theoretical positions and its inverse conservatism, he drew inspiration from the Marxists Georg Lukács, Theodor Adorno, and Antonio Gramsci. Their concepts of the “methodological trap,” “absolute resistance to reification and the alienation of consciousness under industrial capitalism,” and “hegemony” continued to inspire his work until the very end.5 His relationship with Adorno, especially towards the end of his life, became more of an aesthetic experience, while Gramsci continued to influence his theoretical acumen. This section of the book is theoretically appealing because it epitomizes one of the basic arguments of Orientalism: it explains Said’s idea of the cultural creation of hegemony via Gramsci’s sense of the materiality of culture and ideas. For example, Said notes in Orientalism, “It is hegemony, or rather the result of cultural hegemony at work, that gives Orientalism… [its] durability and strength.”6 By “work” Said here refers to the political elite of society, in the Gramscian sense of the term, which retains power by manipulating public opinion.
    McCarthy further shows that Said was also enthused by Gramsci’s notion of the organic intellectual, whose job it is to forge hegemony. Consequently, Said believed in his position as an organic/public intellectual and in enabling new socio-political movements to intervene in the public sphere.

    Like Gramsci, the French poststructuralist Michel Foucault also influenced Said’s work. In fact, Said was one of the “major mediators of Foucault’s thought into the American academy.”7 In Orientalism, Said draws on Foucault’s theory of power and knowledge to explain the discursive use of power that shaped knowledge about the non-west. Said defines knowledge as part of an underlying master-code or structure, with man constituted via these discursive practices. In his later works, however, Said challenged Foucault’s notion of power – notably in “Michel Foucault,” his 1984 commemorative essay on Foucault’s death – an aspect McCarthy takes up in the third chapter.

    The second half of this chapter highlights two aspects of Said’s personality: his “dialectical and paradoxical” relationship with Joseph Conrad, and his admiration and empathy for Erich Auerbach. Said’s fascination with Conrad is so strong that he does not emphasize Conrad’s relationship with the empire; rather, he is interested in Conrad the exiled intellectual and writer, whose life was full of unresolved tensions. Conrad’s personal experience of exile, his complex life choices, and his lingering sense of alienation echoed some of the problems that Said encountered as a writer. Said admired Auerbach for similar reasons and for writing Mimesis during his exile from Europe. Auerbach’s exile, alienation, and loneliness, coupled with his “profound knowledge,” left a permanent impression on Said. It is from Auerbach’s experience that Said negotiated his own pain of being in exile as a necessary process that enables critical thinking.

    The third chapter discusses select works of Said’s in detail. It gives fresh insight into the pedagogical and methodological aspects of writing a text. McCarthy carefully unfolds Said’s theorization of text, critic, writer, discourse, power, knowledge, and hegemony as critical categories for analysis. In Beginnings, McCarthy explains, Said paid particular attention to the text, the writer, and the intellectual’s role in the public domain. Accordingly, the intentional production of meaning in the beginning of a text is argued to be a text’s most important function. At this juncture, to ratify Said’s position, McCarthy reiterates Said’s lifelong commitment to connecting the writing of a text, a performative action, to its reality, and to the intellectual’s role as a public persona.


    In discussing Orientalism, McCarthy elaborates on Said’s analysis of western representation of the non-west via a hierarchical power structure that led to knowledge production about the other. The most essential aspect of McCarthy’s analysis here, however, is his emphasis on Orientalism not as a text about the Middle East but as a discursive practice that changed the direction of postcolonial studies – even if Said refused such compartmentalization when assessing the book’s relevance. The Question of Palestine is examined in continuation with Orientalism, while the meaning of “Zionism from the Standpoint of its Victims” is discussed in great depth. McCarthy sheds light on the fact that Said is writing back to offer “an analysis of Zionism from a position” that was long silenced in accounts of “Whig history.”8 This chapter explains the socio-political, historical, and economic reasons that led to the formation of Israel and glosses Said’s statement about “benefits for Jews and none for non-Jews in Palestine.” Despite Said’s scathing critique of Zionism, McCarthy directs readers’ attention to the fact that The Question of Palestine does not delegitimize the Jews’ historical claim to Palestine. Rather, Said is opposed to the conditions for the fulfillment of this claim, i.e., the dispossession of the Palestinian people. Therefore, he writes to remind the Zionists that their claim is intertwined with the Palestinians and Palestinian history. McCarthy’s particular emphasis on this section of the book is relevant because it positions Said as an academic intellectual and human rights activist connecting events with historical data, and not as a “professor of terror” (as he was accused of being by Commentary magazine journalist Edward Alexander). The discussion of The World, the Text, and the Critic ends with Said’s criticism of Foucault’s theorization of power and discourse.
In this segment, McCarthy mentions Said’s criticism of Foucault’s passive onto-phenomenological (how and why) questions about power, his ethnocentrism, and inability to explain why “the abrupt change [in power] occurs between one episteme and the next.”9

    The book’s final chapter notes the reception of Said’s Orientalism among Anglo-American academic scholars of postcolonial and cultural studies. Even though this section begins with anthropologist James Clifford’s complex reading of Said’s use of “Foucauldian ideas in the service of his humanist, cosmopolitan project,” critical commentaries by doyens of these fields – namely Paul Bové, Robert Young, and Aijaz Ahmad – are the main focus.10 McCarthy notes that both Bové’s and Young’s criticisms are geared towards Said’s failure to employ poststructuralist ideas effectively and “carry them to their logical conclusion.”11 Bové’s critique of Orientalism concerns Said’s use of Foucault’s theory of power and his failure to extend it to the production of knowledge systems. While he credits Said with a detailed picture of the voluntary and involuntary complicity of orientalism vis-à-vis imperialist power, Bové faults Said for failing to situate power within the “entire economy [where] both Orientalist and Saidian ‘oppositional’ work is produced.”12 In so doing, Bové sides with Foucault, who argued against the intellectual’s role in revolutionary change. According to Foucault, institutions discursively shape intellectuals, who are “already always hemmed in by and even complicit with power.”13 Said, however, believed in the intellectual’s social role and, while agreeing with Foucault’s theory of power, downplayed its relationship with the knowledge that shaped prominent and institutionally powerful intellectuals. Therefore, Bové’s main critique of Orientalism is that it is critical of power “but not critical enough.”14

    Robert Young’s criticism of Orientalism is based on Said’s theorizing against orientalist discourse and for an “alternative knowledge of the Orient.”15 Young argues that if the success of Orientalism lay in its strict “monopolization of linguistic codes to represent the Orient,” is it possible or desirable to have another form of knowledge system?16 If all knowledge is mediated via a stringent power structure, will anti-Orientalist discourse not repeat the same mistake it wishes to castigate? By contrast, the Marxist critic Aijaz Ahmad takes a different position in his criticism of Orientalism. He accuses Said of, first, rethinking history and, second, using poststructuralism as a way to escape the Marxist tradition. He compares poststructuralist anti-realism to fascist thinking and concludes that Said represents the anti-humanist American scholarship that dominates the world today, connected to and aiding the smooth functioning of the “unprecedented imperialist consolidations of the present decade.”17 Therefore, for Ahmad, Said is a native informant and Orientalism a “crucial ideological wedge into [the Anglo-American academy] for Asian immigrant intellectuals.”18 Towards the end of this section, however, McCarthy points out Young’s and Ahmad’s purposeful misreadings of Orientalism. He reminds readers of Said’s response to critics such as Young and stresses that Orientalism is about the “fragmenting, dissociating, dislocating, and decentering of the experiential terrain covered at present by universalizing historicism.”19 Said never intended it to be a book about the Orient or to construct an alternative history. In response to Ahmad, McCarthy faults Ahmad’s “polemical aggression” for clouding his argument, noting that there is no historical evidence or sociological data to identify the North American audience and readership of Orientalism.

    As mentioned earlier, McCarthy has painstakingly traced Edward Said’s life and intellectual journey. The one flaw in the book, however, is the lack of attention to Said’s political engagement as part of the public intellectual’s ethical responsibility. Said’s scholarly contributions and academic position were closely related to his roles as a practicing member of multiple literary, critical, and political constituencies. Indeed, without mentioning this side of Said, his contribution to the world remains only half known. Gayatri Chakravorty Spivak, in an interview, has said that Edward Said was a Kantian Enlightened subject/scholar “who writes for all time and all people.”20 This is true because later in his life, as he wrote in After the Last Sky, Said became deeply concerned with the Palestinian subaltern. He attempted to “change and form public opinion with well-researched commentary on political moves by involving highest level of political intervention and talented musicians in international collaboration.”21 Especially after the failure of the Oslo peace process, Said believed in other avenues to foster a non-violent yet beneficial dialogue between Palestine and Israel. In 1997 he collaborated with the Israeli musician Daniel Barenboim and organized a musical concert in West Jerusalem. Said’s use of music to enable the peace process between Palestine and Israel is worth mentioning because he believed that the real contribution of artists and philosophers is that they can change minds. Mentioning these aspects of Said’s public intellectual persona would have added to the richness of the book and provided a much wider spectrum of Said’s life.

    _____

    Reshmi Mukherjee (PhD, University of Illinois) is visiting assistant professor of English and interim Director of Gender Studies at Boise State University. Her research and teaching interests include transnational feminisms, Anglophone literatures, Anglophone Arab fiction, literature in translation (especially francophone literature), diasporic and exilic literatures, and subaltern theory. Her most recent publication, “Living in Subalternity: The Becoming of the Subaltern in Bessie Head’s A Woman Alone, A Gesture of Belonging, and When Rain Clouds Gather,” was published in the Journal of the African Literature Association (JALA) 7, no. 2 (Spring 2014).

    _____

    Notes:

    1. Michael Sprinker, “Introduction,” in Edward Said: A Critical Reader, edited by Michael Sprinker (Massachusetts: Blackwell Publisher, 1992), 1. Back to the essay

    2. Conor McCarthy, The Cambridge Introduction to Edward Said (Cambridge: Cambridge University Press), 97. Back to the essay

    3. Ibid. 100. Back to the essay

    4. Ibid. 9. Back to the essay

    5. Ibid. 33, 34, 35. Back to the essay

    6. Ibid. 37. Back to the essay

    7. Ibid. 48. Back to the essay

    8. Ibid. 85, 86. Back to the essay

    9. Ibid. 105. Back to the essay

    10. Ibid. 126. Back to the essay

    11. Ibid. 132. Back to the essay

    12. Ibid. Back to the essay

    13. Ibid. 129. Back to the essay

    14. Ibid. 129. Back to the essay

    15. Ibid. 130. Back to the essay

    16. Ibid. Back to the essay

    17. Ibid. 134. Back to the essay

    18. Ibid. 135. Back to the essay

    19. Ibid. 137. Back to the essay

    20. Ben Conisbee Baer, “Edward Said Remembered on September 11, 2004. A Conversation with Gayatri Spivak,” in Edward Said: A Legacy of Emancipation and Representation, edited by Adel Iskandar and Hakem Rustom (Oakland: University of California Press, 2010), 57. Back to the essay

    21. Ibid. Back to the essay

  • Cultivating Reform and Revolution

    Cultivating Reform and Revolution

    a review of William E. Connolly, The Fragility of Things: Self-Organizing Processes, Neoliberal Fantasies, and Democratic Activism (Duke University Press, 2013)
    by Zachary Loeb
    ~

    Mountains and rivers, skyscrapers and dams – the world is filled with objects and structures that appear sturdy. Glancing upwards at a skyscraper or a mountain, we may know that these obelisks will not remain eternally unchanged, but in the moment of the glance we maintain a certain casual confidence that they are not about to crumble suddenly. Yet skyscrapers collapse, mountains erode, rivers run dry or change course, and dams crack under the pressure of the waters they hold. Even equipped with this knowledge it is still tempting to view such structures as enduringly solid. Perhaps the residents of Lisbon, in November of 1755, had a similar faith in the sturdiness of the city they had built, a faith that was shattered in an earthquake – and aftershocks – that demonstrated all too terribly the fragility at the core of all physical things.

    The Lisbon earthquake, along with its cultural reverberations, provides the point of entry for William E. Connolly’s discussion of neoliberalism, ecology, activism, and the deceptive solidness of the world in his book The Fragility of Things. Beyond its relevance as an example of the natural tremors that can reduce the built world into rubble, the Lisbon earthquake provides Connolly (the Krieger-Eisenhower Professor of Political Science at Johns Hopkins University) with a vantage point from which to mark out and critique a Panglossian worldview he sees as prominent in contemporary society. No doubt, were Voltaire’s Pangloss alive today, he could find ready employment as an apologist for neoliberalism (perhaps as one of Silicon Valley’s evangelists). Like Panglossian philosophy, neoliberalism “acknowledges many evils and treats them as necessary effects” (6).

    Though the world has changed significantly since the mid-18th century during which Voltaire wrote, humanity remains assaulted by events that demonstrate the world’s fragility. Connolly counsels against the withdrawal to which the protagonists of Candide finally consign themselves while taking up the famous trope Voltaire develops for that withdrawal; today we “cultivate our gardens” in a world in which the future of all gardens is uncertain. Under the specter of climate catastrophe, “to cultivate our gardens today means to engage the multiform relations late capitalism bears to the entire planet” (6). Connolly argues for an “ethic of cultivation” that can show “both how fragile the ethical life is and how important it is to cultivate it” (17). “Cultivation,” as developed in The Fragility of Things, stands in opposition to withdrawal. Instead it entails serious, ethically guided, activist engagement with the world – for us to recognize the fragility of natural, and human-made, systems (Connolly uses the term “force-fields”) and to act to protect this “fragility” instead of celebrating neoliberal risks that render the already precarious all the more tenuous.

    Connolly argues that when natural disasters strike, and often in their wake set off rippling cascades of additional catastrophes, they exemplify the “spontaneous order” so beloved by neoliberal economics. Under neoliberalism, the market is treated as though it embodies a uniquely omniscient, self-organizing and self-guiding principle. Yet the economic system is not the only one that can be described this way: “open systems periodically interact in ways that support, amplify, or destabilize one another” (25). Even in the so-called Anthropocene era the ecosystem, much to humanity’s chagrin, can still demonstrate creative and unpredictable potentialities. Nevertheless, the ideological core of neoliberalism relies upon celebrating the market’s self-organizing capabilities whilst ignoring the similar capabilities of governments, the public sphere, or the natural world. The ascendancy of neoliberalism runs parallel with an increase in fragility as economic inequality widens and as neoliberalism treats the ecosystem as just another profit source. Fragility is everywhere today, and though the cracks are becoming increasingly visible, it is still given – in Connolly’s estimation – less attention than is its due, even in “radical theory.” On this issue Connolly wonders if perhaps “radical theorists,” and conceivably radical activists, “fear that coming to terms with fragility would undercut the political militancy needed to respond to it?” (32). Yet Connolly sees no choice but to “respond,” envisioning a revitalized Left that can take action with a mixture of advocacy for immediate reforms while simultaneously building towards systemic solutions.

    Critically engaging with the thought of core neoliberal thinker and “spontaneous order” advocate Friedrich Hayek, Connolly demonstrates the way in which neoliberal ideology has been inculcated throughout society, even and especially amongst those whose lives have been made more fragile by neoliberalism: “a neoliberal economy cannot sustain itself unless it is supported by a self-conscious ideology internalized by most participants that celebrates the virtues of market individualism, market autonomy and a minimal state” (58). An army of Panglossian commentators must be deployed to remind the wary watchers that everything is for the best. That a high level of state intervention may be required to bolster and disseminate this ideology, and prop up neoliberalism, is wholly justified in a system that recognizes only neoliberalism as a source for creative self-organizing processes, indeed “sometimes you get the impression that ‘entrepreneurs’ are the sole paradigms of creativity in the Hayekian world” (66). Resisting neoliberalism, for Connolly, requires remembering the sources of creativity that occur outside of a market context and seeing how these other systems demonstrate self-organizing capacities.

    Within neoliberalism the market is treated as the ethical good, but Connolly works to counter this with “an ethic of cultivation” which works not only against neoliberalism but against certain elements of Kant’s philosophy. In Connolly’s estimation Kantian ethics provide some of the ideological shoring up for neoliberalism, as at times “Kant both prefigures some existential demands unconsciously folded into contemporary neoliberalism and reveals how precarious they in fact are. For he makes them postulates” (117). Connolly sees a certain similarity between the social conditioning that Kant saw as necessary for preparing the young to “obey moral law” and the ideological conditioning that trains people for life under neoliberalism – what is shared is a process by which a self-organizing system must counter people’s own self-organizing potential by organizing their reactions. Furthermore “the intensity of cultural desires to invest hopes in the images of self-regulating interest within markets and/or divine providence wards off acknowledgment of the fragility of things” (118). Connolly’s “ethic of cultivation” appears as a corrective to this ethic of inculcation – it features “an element of tragic possibility within it” (133) which is the essential confrontation with the “fragility” that may act as a catalyst for a new radical activism.

    In the face of impending doom neoliberalism will once more have an opportunity to demonstrate its creativity even as this very creativity will have reverberations that will potentially unleash further disasters. Facing the possible catastrophe means that “we may need to recraft the long debate between secular, linear, and deterministic images of the world on the one hand and divinely touched, voluntarist, providential, and/or punitive images on the other” (149). Creativity, and the potential for creativity, is once more essential – as it is the creativity in multiple self-organizing systems that has created the world, for better or worse, around us today. Bringing his earlier discussions of Kant into conversation with the thought of Whitehead and Nietzsche, Connolly further considers the place of creative processes in shaping and reshaping the world. Nietzsche, in particular, provides Connolly with a way to emphasize the dispersion of creativity by removing the province of creativity from the control of God to treat it as something naturally recurring across various “force-fields.” A different demand thus takes shape wherein “we need to slow down and divert human intrusions into various planetary force fields, even as we speed up efforts to reconstitute the identities, spiritualities, consumption practices, market faiths, and state policies entangled with them” (172) though neoliberalism knows but one speed: faster.

    An odd dissonance occurs at present wherein people are confronted with the seeming triumph of neoliberal capitalism (one can hear the echoes of “there is no alternative”) and the warnings pointing to the fragility of things. In this context, for Connolly, withdrawal is irresponsible, it would be to “cultivate a garden” when what is needed is an “ethic of cultivation.” Neoliberal capitalism has trained people to accept the strictures of its ideology, but now is a time when different roles are needed; it is a time to become “role experimentalists” (187). Such experiments may take a variety of forms that run the gamut from “reformist” to “revolutionary” and back again, but the process of such experimentation can break the training of neoliberalism and demonstrate other ways of living, interacting, being and having. Connolly does not put forth a simple solution for the challenges facing humanity, instead he emphasizes how recognizing the “fragility of things” allows for people to come to terms with these challenges. After all, it may be that neoliberalism only appears so solid because we have forgotten that it is not actually a naturally occurring mountain but a human built pyramid – and our backs are its foundation.

    * * *

    In the “First Interlude,” on page 45, Connolly poses a question that haunts the remainder of The Fragility of Things, the question – asked in the midst of a brief discussion of the 2011 Lars von Trier film Melancholia – is, “How do you prepare for the end of the world?” It is the sort of disarming and discomforting question that in its cold honesty forces readers to face a conclusion they may not want to consider. It is a question that evokes the deceptively simple acronym FRED (Facing the Reality of Extinction and Doom). And yet there is something refreshing in the question – many have heard the recommendations about what must be done to halt climate catastrophe, but how many believe these steps will be taken? Indeed, even though Connolly claims “we need to slow down” there are also those who, to the contrary, insist that what is needed is even greater acceleration. Granted, Connolly does not pose this question on the first page of his book, and had he done so The Fragility of Things could have easily appeared as a dismissible dirge. Wisely, Connolly recognizes that “a therapist, a priest, or a philosopher might stutter over such questions. Even Pangloss might hesitate” (45); one of the core strengths of The Fragility of Things is that it does not “stutter over such questions” but realizes that such questions require an honest reckoning. Which includes being willing to ask “How do you prepare for the end of the world?”

    William Connolly’s The Fragility of Things is both ethically and intellectually rigorous, demanding readers perceive the “fragility” of the world around them even as it lays out the ways in which the world around them derives its stability from making that very fragility invisible. Though it may seem that there are relatively simple concerns at the core of The Fragility of Things Connolly never succumbs to simplistic argumentation – preferring the fine-toothed complexity that allows moments of fragility to be fully understood. The tone and style of The Fragility of Things feels as though it assumes its readership will consist primarily of academics, activists, and those who see themselves as both. It is a book that wastes no time trying to convince its reader that “climate change is real” or “neoliberalism is making things worse,” and the book is more easily understood if a reader begins with at least a basic acquaintance with the thought of Hayek, Kant, Whitehead, and Nietzsche. Even if not every reader of The Fragility of Things has dwelled for hours upon the question of “How do you prepare for the end of the world?” the book seems to expect that this question lurks somewhere in the subconscious of the reader.

    Amidst Connolly’s discussions of ethics, fragility and neoliberalism, he devotes much of the book to arguing for the need for a revitalized, active, and committed Left – one that would conceivably do more than hold large marches and then disappear. While Connolly cautions against “giving up” on electoral politics he does evince a distrust for US party politics; to the extent that Connolly appears to be a democrat it is a democrat with a lowercase d. Drawing inspiration from the wave of protests in and around 2011 Connolly expresses the need for a multi-issue, broadly supported, international (and internationalist) Left that can organize effectively to win small-scale local reforms while building the power to truly challenge the grip of neoliberalism. The goal, as Connolly envisions it, is to eventually “mobilize enough collective energy to launch a general strike simultaneously in several countries in the near future” even as Connolly remains cognizant of threats that “the emergence of a neofascist or mafia-type capitalism” can pose (39). Connolly’s focus on the, often slow, “traditional” activist strategies of organizing should not be overlooked, as his focus on mobilizing large numbers of people acts as a retort to a utopian belief that “technology will fix everything.” The “general strike” as the democratic response once electoral democracy has gone awry is a theme that Connolly concludes with as he calls for his readership to take part in helping to bring together “a set of interacting minorities in several countries for the time when we coalesce around a general strike launched in several states simultaneously” (195). Connolly emphasizes the types of localized activism and action that are also necessary, but “the general strike” is iconic as the way to challenge neoliberalism. 
In emphasizing “the general strike,” Connolly stakes out a position in which people have an obligation to actively challenge existing neoliberalism; waiting for capitalism to collapse due to its own contradictions (and trying to accelerate these contradictions) does not appear to be a viable tactic.

    All of which raises something of a prickly question for The Fragility of Things: which element of the book strikes the reader as more outlandish, the question of how to prepare for the end of the world, or the prospect of a renewed Left launching “a general strike…in the near future”? This question is not asked idly or as provocation, and the goal here is in no way to traffic in Leftist apocalyptic romanticism. Yet experience in current activism and organizing does not necessarily imbue one with great confidence in the prospect of a city-wide general strike (in the US), to say nothing of an international one. Activists may be acutely aware of the creative potentials and challenges faced by repressed communities, precarious labor, the ecosystem, and so forth – but these same activists are aware of the solidity of militarized police forces, a reactionary culture industry, and neoliberal dominance. Committed activists’ awareness of the challenges they face makes it seem rather odd that Connolly suggests radical theorists have ignored “fragility.” Indeed many radical thinkers, or at least some (Grace Lee Boggs and Franco “Bifo” Berardi, to name just two), seem to have warned consistently of “fragility” – even if they do not always use that exact term. Nevertheless, here the challenge may not be the Sisyphean work of activism but the rather cynical answer many non-activists give to the question of “How does one prepare for the end of the world?” That answer? Download some new apps, binge watch a few shows, enjoy the sci-fi cool of the latest gadget, and otherwise eat, drink and be merry, because we’ll invent something to solve tomorrow’s problems next week. Neoliberalism has trained people well.

    That answer, however, is the type that Connolly seems to find untenable, and his apparent hope in The Fragility of Things is that most readers will also find it unacceptable. Thus Connolly’s “ethic of cultivation” returns and shows its value again. “Our lives are messages” (185), Connolly writes, and thus the actions that an individual takes to defend “fragility” and oppose neoliberalism serve as a demonstration to others that different ways of being are possible.

    What The Fragility of Things makes clear is that an “ethic of cultivation” is not a one-off event but an ongoing process – cultivating a garden, after all, is something that takes time. Some gardens require years of cultivation before they start to bear fruit.

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, infrastructure and e-waste, as well as the intersection of library science with the STS field. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck. He is a frequent contributor to The b2 Review Digital Studies section.

    Back to the essay

  • Trickster Makes This Web: The Ambiguous Politics of Anonymous

    Trickster Makes This Web: The Ambiguous Politics of Anonymous

    Hacker, Hoaxer, Whistleblower, Spy
    a review of Gabriella Coleman, Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous (Verso, 2014)
    by Gavin Mueller
    ~

    Gabriella Coleman’s Hacker, Hoaxer, Whistleblower, Spy (HHWS) tackles a difficult and pressing subject: the amorphous hacker organization Anonymous. The book is not a strictly academic work. Rather, it unfolds as a rather lively history of a subculture of geeks, peppered with snippets of cultural theory and autobiographical portions. As someone interested in a more sustained theoretical exposition of Anonymous’s organizing and politics, I was a bit disappointed, though Coleman has opted for a more readable style. In fact, this is the book’s best asset. However, while containing a number of insights of interest to the general reader, the book ultimately falters as an assessment of Anonymous’s political orientation, or the state of hacker politics in general.

    Coleman begins with a discussion of online trolling, a common antagonistic online cultural practice; many Anons cut their troll teeth at the notorious 4chan message board. Trolling aims to create “lulz,” a kind of digital schadenfreude produced by pranks, insults and misrepresentations. According to Coleman, the lulz are “a form of cultural differentiation and a tool or weapon used to attack, humiliate, and defame” rooted in the use of “inside jokes” of those steeped in the codes of Internet culture (32). Coleman argues that the lulz have a deeper significance: they “puncture the consensus around our politics and ethics, our social lives and our aesthetic sensibilities.” But trolling can be better understood through an offline frame of reference: hazing. Trolling is a means by which geeks have historically policed the boundaries of the subcultural corners of the Internet. If you can survive the epithets and obscene pictures, you might be able to hang. That trolling often takes the form of misogynist, racist and homophobic language is unsurprising: early Net culture was predominantly white and male, a demographic fact which overdetermines the shape of resentment towards “newbies” (or in 4chan’s unapologetically offensive argot, “newfags”). The lulz is joy that builds community, but almost always at someone else’s expense.

    Coleman, drawing upon her background as an anthropologist, conceptualizes the troll as an instantiation of the trickster archetype which recurs throughout mythology and folklore. Tricksters, she argues, like trolls and Anonymous, are liminal figures who defy norms and revel in causing chaos. This kind of application of theory is a common technique in cultural studies, where seemingly apolitical or even anti-social transgressions, like punk rock or skateboarding, can be politicized with a dash of Bakhtin or de Certeau. Here it creates difficulties. There is one major difference between the spider spirit Anansi and Coleman’s main informant on trolling, the white supremacist hacker weev: Anansi is fictional, while weev is a real person who writes op-eds for neo-Nazi websites. The trickster archetype, a concept crafted for comparative structural analysis of mythology, does little to explain the actually existing social practice of trolling. Instead it renders the practice more complicated, ambiguous, and uncertain. These difficulties are compounded as the analysis moves to Anonymous. Anonymous doesn’t merely enact a submerged politics via style or symbols. It engages in explicitly political projects, complete with manifestos, though Coleman continues to return to transgression as one of its salient features.

    The trolls of 4chan, from which Anonymous emerged, developed a culture of compulsory anonymity. In part, this was technological: unlike other message boards and social media, posting on 4chan requires no lasting profile, no consistent presence. But there was also a cultural element to this. Identifying oneself is strongly discouraged in the community. Fittingly, its major trolling weapon is doxing: revealing personal information to facilitate further harassment offline (prank calls, death threats, embarrassment in front of employers). As Whitney Phillips argues, online trolling often acts as a kind of media critique: by enforcing anonymity and rejecting fame or notoriety, Anons oppose the now-dominant dynamics of social media and personal branding which have colonized much of the web, and which threaten their cherished subcultural practices, more adequately enshrined in formats such as image boards and IRC. In this way, Anonymous deploys technological means to thwart the dominant social practices of technology, a kind of wired Luddism. Such practices proliferate in the communities of the computer underground, which have been steeped in an omnipresent prelapsarian nostalgia since at least the “eternal September” of the early 1990s.

    HHWS’s overarching narrative is the emergence of Anonymous out of the cesspits of 4chan and into political consciousness: trolling for justice instead of lulz. The compulsory anonymity of 4chan, in part, determined Anonymous’s organizational form: Anonymous lacks formal membership, instead formed from entirely ad hoc affiliations. The brand itself can be selectively deployed or disavowed, leading to much argumentation and confusion. Coleman provides an insider perspective on how actions are launched: there is debate, occasionally a rough consensus, and then activity, though several times individuals opt to begin an action, dragging along a number of other participants of varying degrees of reluctance. Tactics are formalized in an experimental, impromptu way. In this, I recognized the way actions formed in the Occupy encampments. Anonymous, as Coleman shows, was an early Occupy Wall Street booster, and her analysis highlights the connection between the Occupy form and the networked forms of sociality exemplified by Anonymous. After reading Coleman’s account, I am much more convinced of Anonymous’s importance to the movement. Likewise, many criticisms of Occupy could also be levelled at Anonymous; Coleman cites Jo Freeman’s “The Tyranny of Structurelessness” as one candidate.

    If Anonymous can be said to have a coherent political vision, it is one rooted in civil liberties, particularly freedom of speech and opposition to censorship efforts. Indeed, Coleman earns the trust of several hackers through her affiliation with the Electronic Frontier Foundation, nominally the digital equivalent to the ACLU (though some object to this parallel, due in part to EFF’s strong ties to industry). Geek politics, from Anonymous to Wikileaks to the Pirate Bay, are a weaponized form of the mantra “information wants to be free.” Anonymous’s causes seem to fit these concerns perfectly: Scientology’s litigious means of protecting its secrets provoked its wrath, as did the voluntary withdrawal of services to Wikileaks by PayPal and Mastercard, and the Bay Area Rapid Transit police’s blacking out of cell phone signals to scuttle a protest.

    I’ve referred to Anonymous as geeks rather than hackers deliberately. Hackers — skilled individuals who can break into protected systems — participate in Anonymous, but many of the Anons pulled from 4chan are merely pranksters with above-average knowledge of the Internet and computing. This gets the organization in quite a bit of trouble when it engages in the political tactic of most interest to Coleman, the distributed denial of service (DDoS) attack. A DDoS floods a website with requests, overwhelming its servers. This technique has captured the imaginations of a number of scholars, including Coleman, with its resemblance to offline direct action like pickets and occupations. However, the AnonOps organizers falsely claimed that their DDoS app, the Low-Orbit Ion Cannon, ensured user anonymity, leading to a number of Anons facing serious criminal charges. Coleman curiously places the blame for this startling breach of operational security on journalists writing about AnonOps, rather on the organizers themselves. Furthermore, many DDoS attacks, including those launched by Anonymous, have relied on botnets, which draw power from hundreds of hijacked computers, bears little resemblance to any kind of democratic initiative. Of course, this isn’t to say that the harsh punishments meted out to Anons under the auspices of the Computer Fraud and Abuse Act are warranted, but that political tactics must be subjected to scrutiny.

    Coleman argues that Anonymous outgrew its narrow civil libertarian agenda with its involvement in the Arab Spring: “No longer was the group bound to Internet-y issues like censorship and file-sharing” (148). However, by her own account, it is opposition to censorship which truly animates the group. The #OpTunisia manifesto (Anonymous names its actions with the prefix “Op,” for operations, along with the ubiquitous Twitter-based hashtag) states plainly, “Any organization involved in censorship will be targeted” (ibid). Anons were especially animated by the complete shut-off of the Internet in Tunisia and Egypt, actions which shattered the notion of the Internet as a space controlled by geeks, not governments. Anonymous operations launched against corporations did not oppose capitalist exploitation but fought corporate restrictions on online conduct. These are laudable goals, but also limited ones, and are often compatible with Silicon Valley companies, as illustrated by the Google-friendly anti-SOPA/PIPA protests.

    Coleman is eager to distance Anonymous from the libertarian philosophies rife in geek and hacker circles, but its politics are rarely incompatible with such a perspective. The most recent Guy Fawkes Day protest I witnessed in Washington, D.C., full of mask-wearing Anons, displayed a number of slogans emerging from the Ron Paul camp, “End the Fed” prominent among them. There is no accounting for this in HHWS. It is clear that political differences among Anons exist, and that any analysis must be nuanced. But Coleman’s description of this nuance ultimately doesn’t delineate the political positions within the group and how they coalesce, opting to elide these differences in favor of a more protean focus on “transgression.” In this way, she is able to provide a conceptual coherence for Anonymous, albeit at the expense of a detailed examination of the actual politics of its members. In the final analysis, “Anonymous became a generalized symbol for dissent, a medium to channel deep disenchantment… basically, with anything” (399).

    As political concerns overtake the lulz, Anonymous wavers as smaller militant hacker crews LulzSec and AntiSec take the fore, doxing white hat security executives, leaking documents, and defacing websites. This frustrates Coleman: “Anonymous had been exciting to me for a specific reason: it was the largest and most populist disruptive grassroots movement the Internet had, up to that time, fomented. But it felt, suddenly, like AnonOps/Anonymous was slipping into a more familiar state of hacker-vanguardism” (302). Yet it is at this moment that Coleman offers a revealing account of hacker ideology: its alignment with the philosophy of Friedrich Nietzsche. From 4chan’s trolls scoffing at morality and decency, to hackers disregarding technical and legal restraints on accessing information, to the collective’s general rejection of any standard form of accountability, Anonymous truly seems to posit itself as beyond good and evil. Coleman herself confesses to being “overtly romantic” as she supplies alibis for the group’s moral and strategic failures (it is, after all, incredibly difficult for an ethnographer to criticize her informants). But Nietzsche was a profoundly undemocratic thinker, whose avowed elitism should cast more of a disturbing shadow over the progressive potentials behind hacker groups than it does for Coleman, who embraces the ability of hackers to “cast off — at least momentarily — the shackles of normativity and attain greatness” (275). Coleman’s previous work on free software programmers convincingly makes the case for a Nietzschean current running through hacker culture; I am considerably more skeptical than she is about the liberal democratic viewpoint this engenders.

    Ultimately, Coleman concludes that Anonymous cannot work as a substitute for existing organizations, but that its tactics should be taken up by other political formations: “The urgent question is how to promote cross-pollination” between Anonymous and more formalized structures (374). This may be warranted, but there needs to be a fuller accounting of the drawbacks to Anonymous. Because anyone can fly its flag, and because its actions are guided by talented and charismatic individuals working in secret, Anonymous is ripe for infiltration. Historically, hackers have proven easy for law enforcement and corporations to co-opt, not least because of the ferocious rivalries amongst hackers themselves. Tactics are also ambiguous. A DDoS can be used by anti-corporate activists, or by corporations against their rivals and enemies. Document dumps can ruin a diplomatic initiative, or a woman’s social life. Public square occupations can be used to advocate for democracy, or as a platform for anti-democratic coups. Currently, a lot of the same geek energy behind Anonymous has been devoted to the misogynist vendetta GamerGate (in a Reddit AMA, Coleman adopted a diplomatic tone, referring to GamerGate as “a damn Gordian knot”). Without a steady sense of Anonymous’s actual political commitments, outside of free speech, it is difficult to do much more than marvel at the novelty of their media presence (which wears thinner with each overwrought communique). With Hacker, Hoaxer, Whistleblower, Spy, Coleman has offered a readable account of recent hacker history, but I remain unconvinced of Anonymous’s political potential.

    _____

    Gavin Mueller (@gavinsaywhat) is a PhD candidate in cultural studies at George Mason University, and an editor at Jacobin and Viewpoint Magazine.

    Back to the essay

  • "The Black Jacobins and the Long Haitian Revolution" with Anthony Bogues

    "The Black Jacobins and the Long Haitian Revolution" with Anthony Bogues

    The Institute for the Humanities at the University of Illinois at Chicago (UIC) has uploaded a talk by b2 editor and contributor Anthony Bogues called “The Black Jacobins and the Long Haitian Revolution: Archives, Historiography, and the Writing of Revolution” and you can watch it below.