Author: boundary2

  • Audrey Watters – The Best Way to Predict the Future is to Issue a Press Release

    By Audrey Watters

    ~

    This talk was delivered at Virginia Commonwealth University today as part of a seminar co-sponsored by the Departments of English and Sociology and the Media, Art, and Text PhD Program. The slides are also available here.

    Thank you very much for inviting me here to speak today. I’m particularly pleased to be speaking to those from Sociology and those from the English and those from the Media, Art, and Text departments, and I hope my talk can walk the line between and among disciplines and methods – or piss everyone off in equal measure. Either way.

    This is the last public talk I’ll deliver in 2016, and I confess I am relieved (I am exhausted!) as well as honored to be here. But when I finish this talk, my work for the year isn’t done. No rest for the wicked – ever, but particularly in the freelance economy.

As I have done for the past six years, I will spend the rest of November and December publishing my review of what I deem the “Top Ed-Tech Trends” of the year. It’s an intense research project that usually tops out at about 75,000 words, written over the course of four to six weeks. I pick ten trends and themes in order to look closely at the recent past, the near-term history of education technology. Because of the amount of information that is published about ed-tech – the amount of information, its irrelevance, its incoherence, its lack of context – it can be quite challenging to keep up with what is really happening in ed-tech. And just as importantly, what is not happening.

So that’s what I try to do. And I’ll boast right here – no shame in that – no one else does as in-depth or as thorough a job as I do, certainly no one who is entirely independent from venture capital, corporate or institutional backing, or philanthropic funding. (Of course, if you look for those education technology writers who are independent from venture capital, corporate or institutional backing, or philanthropic funding, there is pretty much only me.)

    The stories that I write about the “Top Ed-Tech Trends” are the antithesis of most articles you’ll see about education technology that invoke “top” and “trends.” For me, still framing my work that way – “top trends” – is a purposeful rhetorical move to shed light, to subvert, to offer a sly commentary of sorts on the shallowness of what passes as journalism, criticism, analysis. I’m not interested in making quickly thrown-together lists and bullet points. I’m not interested in publishing clickbait. I am interested nevertheless in the stories – shallow or sweeping – that we tell and spread about technology and education technology, about the future of education technology, about our technological future.

    Let me be clear, I am not a futurist – even though I’m often described as “ed-tech’s Cassandra.” The tagline of my website is “the history of the future of education,” and I’m much more interested in chronicling the predictions that others make, have made about the future of education than I am writing predictions of my own.

    One of my favorites: “Books will soon be obsolete in schools,” Thomas Edison said in 1913. Any day now. Any day now.

    Here are a couple of more recent predictions:

    “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.” – that’s Sebastian Thrun, best known perhaps for his work at Google on the self-driving car and as a co-founder of the MOOC (massive open online course) startup Udacity. The quotation is from 2012.

    And from 2013, by Harvard Business School professor, author of the book The Innovator’s Dilemma, and popularizer of the phrase “disruptive innovation,” Clayton Christensen: “In fifteen years from now, half of US universities may be in bankruptcy. In the end I’m excited to see that happen. So pray for Harvard Business School if you wouldn’t mind.”

    Pray for Harvard Business School. No. I don’t think so.

    Both of these predictions are fantasy. Nightmarish, yes. But fantasy. Fantasy about a future of education. It’s a powerful story, but not a prediction made based on data or modeling or quantitative research into the growing (or shrinking) higher education sector. Indeed, according to the latest statistics from the Department of Education – now granted, this is from the 2012–2013 academic year – there are 4726 degree-granting postsecondary institutions in the United States. A 46% increase since 1980. There are, according to another source (non-governmental and less reliable, I think), over 25,000 universities in the world. This number is increasing year-over-year as well. So to predict that the vast vast majority of these schools (save Harvard, of course) will go away in the next decade or so or that they’ll be bankrupt or replaced by Silicon Valley’s version of online training is simply wishful thinking – dangerous, wishful thinking from two prominent figures who will benefit greatly if this particular fantasy comes true (and not just because they’ll get to claim that they predicted this future).

    Here’s my “take home” point: if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

    Fantasy. Fortune-telling. Or as capitalism prefers to call it “market research.”

“Market research” involves fantastic stories of future markets. These predictions are often accompanied by a press release touting the size that this or that market will soon grow to – how many billions of dollars schools will spend on computers by 2020, how many billions of dollars of virtual reality gear schools will buy by 2025, how many billions of dollars schools will spend on robot tutors by 2030, how many billions of dollars companies will spend on online training by 2035, how big the coding bootcamp market will be by 2040, and so on. The markets, according to the press releases, are always growing. Fantasy.

    In 2011, the analyst firm Gartner predicted that annual tablet shipments would exceed 300 million units by 2015. Half of those, the firm said, would be iPads. IDC estimates that the total number of shipments in 2015 was actually around 207 million units. Apple sold just 50 million iPads. That’s not even the best worst Gartner prediction. In October of 2006, Gartner said that Apple’s “best bet for long-term success is to quit the hardware business and license the Mac to Dell.” Less than three months later, Apple introduced the iPhone. The very next day, Apple shares hit $97.80, an all-time high for the company. By 2012 – yes, thanks to its hardware business – Apple’s stock had risen to the point that the company was worth a record-breaking $624 billion.

    But somehow, folks – including many, many in education and education technology – still pay attention to Gartner. They still pay Gartner a lot of money for consulting and forecasting services.

    People find comfort in these predictions, in these fantasies. Why?

    Gartner is perhaps best known for its “Hype Cycle,” a proprietary graphic presentation that claims to show how emerging technologies will be adopted.

    According to Gartner, technologies go through five stages: first, there is a “technology trigger.” As the new technology emerges, a lot of attention is paid to it in the press. Eventually it reaches the second stage: the “peak of inflated expectations.” So many promises have been made about this technological breakthrough. Then, the third stage: the “trough of disillusionment.” Interest wanes. Experiments fail. Promises are broken. As the technology matures, the hype picks up again, more slowly – this is the “slope of enlightenment.” Eventually the new technology becomes mainstream – the “plateau of productivity.”

It’s not that hard to identify significant problems with the Hype Cycle, not least of which is that it’s not a cycle. It’s a curve. It’s not a particularly scientific model. It demands that technologies always move forward along it.

    Gartner says its methodology is proprietary – which is code for “hidden from scrutiny.” Gartner says, rather vaguely, that it relies on scenarios and surveys and pattern recognition to place technologies on the line. But most of the time when Gartner uses the word “methodology,” it is trying to signify “science,” and what it really means is “expensive reports you should buy to help you make better business decisions.”

Can it really help you make better business decisions? It’s just a curve with some technologies plotted along it. The Hype Cycle doesn’t help explain why technologies move from one stage to another. It doesn’t account for technological precursors – new technologies rarely appear out of nowhere – or political or social changes that might prompt or preclude adoption. And in the end it is simply too optimistic, unreasonably so, I’d argue. No matter how dumb or useless a new technology is, according to the Hype Cycle at least, it will eventually become widely adopted. Where would you plot the Segway, for example? (In 2008, ever hopeful, Gartner insisted that “This thing certainly isn’t dead and maybe it will yet blossom.” Maybe it will, Gartner. Maybe it will.)

And maybe this gets to the heart of why I’m not a futurist. I don’t share this belief in an increasingly technological future; I don’t believe that more technology means the world gets “more better.” I don’t believe that more technology means that education gets “more better.”

    Every year since 2004, the New Media Consortium, a non-profit organization that advocates for new media and new technologies in education, has issued its own forecasting report, the Horizon Report, naming a handful of technologies that, as the name suggests, it contends are “on the horizon.”

    Unlike Gartner, the New Media Consortium is fairly transparent about how this process works. The organization invites various “experts” to participate in the advisory board that, throughout the course of each year, works on assembling its list of emerging technologies. The process relies on the Delphi method, whittling down a long list of trends and technologies by a process of ranking and voting until six key trends, six emerging technologies remain.

    Disclosure/disclaimer: I am a folklorist by training. The last time I took a class on “methods” was, like, 1998. And admittedly I never learned about the Delphi method – what the New Media Consortium uses for this research project – until I became a scholar of education technology looking into the Horizon Report. As a folklorist, of course, I did catch the reference to the Oracle of Delphi.

    Like so much of computer technology, the roots of the Delphi method are in the military, developed during the Cold War to forecast technological developments that the military might use and that the military might have to respond to. The military wanted better predictive capabilities. But – and here’s the catch – it wanted to identify technology trends without being caught up in theory. It wanted to identify technology trends without developing models. How do you do that? You gather experts. You get those experts to consensus.

    So here is the consensus from the past twelve years of the Horizon Report for higher education. These are the technologies it has identified that are between one and five years from mainstream adoption:

    It’s pretty easy, as with the Gartner Hype Cycle, to look at these predictions and note that they are almost all wrong in some way or another.

    Some are wrong because, say, the timeline is a bit off. The Horizon Report said in 2010 that “open content” was less than a year away from widespread adoption. I think we’re still inching towards that goal – admittedly “open textbooks” have seen a big push at the federal and at some state levels in the last year or so.

    Some of these predictions are just plain wrong. Virtual worlds in 2007, for example.

    And some are wrong because, to borrow a phrase from the theoretical physicist Wolfgang Pauli, they’re “not even wrong.” Take “collaborative learning,” for example, which this year’s K–12 report posits as a mid-term trend. Like, how would you argue against “collaborative learning” as occurring – now or some day – in classrooms? As a prediction about the future, it is not even wrong.

    But wrong or right – that’s not really the problem. Or rather, it’s not the only problem even if it is the easiest critique to make. I’m not terribly concerned about the accuracy of the predictions about the future of education technology that the Horizon Report has made over the last decade. But I do wonder how these stories influence decision-making across campuses.

What might these predictions – this history of the future – tell us about the wishful thinking surrounding education technology and about the direction that the people the New Media Consortium views as “experts” want the future to take? What can we learn about the future by looking at the history of our imagining about education’s future? What role does powerful ed-tech storytelling (also known as marketing) play in shaping that future? Because remember: to predict the future is to control it – to attempt to control the story, to attempt to control what comes to pass.

It’s both convenient and troubling, then, that these forward-looking reports act as though they have no history of their own; they purposefully minimize or erase their own past. Each year – and I think this is what irks me most – the NMC fails to look back at what it had predicted just the year before. It never revisits older predictions. It never mentions that they even exist. Gartner, too, removes technologies from the Hype Cycle each year with no explanation of what happened, no explanation as to why trends suddenly appear and disappear and reappear. These reports only look forward, with no history to ground their direction in.

    I understand why these sorts of reports exist, I do. I recognize that they are rhetorically useful to certain people in certain positions making certain claims about “what to do” in the future. You can write in a proposal that, “According to Gartner… blah blah blah.” Or “The Horizon Reports indicates that this is one of the most important trends in coming years, and that is why we need to commit significant resources – money and staff – to this initiative.” But then, let’s be honest, these reports aren’t about forecasting a future. They’re about justifying expenditures.

    “The best way to predict the future is to invent it,” computer scientist Alan Kay once famously said. I’d wager that the easiest way is just to make stuff up and issue a press release. I mean, really. You don’t even need the pretense of a methodology. Nobody is going to remember what you predicted. Nobody is going to remember if your prediction was right or wrong. Nobody – certainly not the technology press, which is often painfully unaware of any history, near-term or long ago – is going to call you to task. This is particularly true if you make your prediction vague – like “within our lifetime” – or set your target date just far enough in the future – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    Let’s consider: is there something about the field of computer science in particular – and its ideological underpinnings – that makes it more prone to encourage, embrace, espouse these sorts of predictions? Is there something about Americans’ faith in science and technology, about our belief in technological progress as a signal of socio-economic or political progress, that makes us more susceptible to take these predictions at face value? Is there something about our fears and uncertainties – and not just now, days before this Presidential Election where we are obsessed with polls, refreshing Nate Silver’s website obsessively – that makes us prone to seek comfort, reassurance, certainty from those who can claim that they know what the future will hold?

“Software is eating the world,” investor Marc Andreessen pronounced in a Wall Street Journal op-ed in 2011. “Over the next 10 years,” he wrote, “I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.” Buy stock in technology companies was really the underlying message of Andreessen’s op-ed; this isn’t another tech bubble, he wanted to reassure investors. But many in Silicon Valley have interpreted this pronouncement – “software is eating the world” – as an affirmation and an inevitability. I hear it repeated all the time – “software is eating the world” – as though, once again, repeating things makes them true or makes them profound.

    If we believe that, indeed, “software is eating the world,” that we are living in a moment of extraordinary technological change, that we must – according to Gartner or the Horizon Report – be ever-vigilant about emerging technologies, that these technologies are contributing to uncertainty, to disruption, then it seems likely that we will demand a change in turn to our educational institutions (to lots of institutions, but let’s just focus on education). This is why this sort of forecasting is so important for us to scrutinize – to do so quantitatively and qualitatively, to look at methods and at theory, to ask who’s telling the story and who’s spreading the story, to listen for counter-narratives.

    This technological change, according to some of the most popular stories, is happening faster than ever before. It is creating an unprecedented explosion in the production of information. New information technologies, so we’re told, must therefore change how we learn – change what we need to know, how we know, how we create and share knowledge. Because of the pace of change and the scale of change and the locus of change (that is, “Silicon Valley” not “The Ivory Tower”) – again, so we’re told – our institutions, our public institutions can no longer keep up. These institutions will soon be outmoded, irrelevant. Again – “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

    These forecasting reports, these predictions about the future make themselves necessary through this powerful refrain, insisting that technological change is creating so much uncertainty that decision-makers need to be ever vigilant, ever attentive to new products.

    As Neil Postman and others have cautioned us, technologies tend to become mythic – unassailable, God-given, natural, irrefutable, absolute. So it is predicted. So it is written. Techno-scripture, to which we hand over a certain level of control – to the technologies themselves, sure, but just as importantly to the industries and the ideologies behind them. Take, for example, the founding editor of the technology trade magazine Wired, Kevin Kelly. His 2010 book was called What Technology Wants, as though technology is a living being with desires and drives; the title of his 2016 book, The Inevitable. We humans, in this framework, have no choice. The future – a certain flavor of technological future – is pre-ordained. Inevitable.

    I’ll repeat: I am not a futurist. I don’t make predictions. But I can look at the past and at the present in order to dissect stories about the future.

    So is the pace of technological change accelerating? Is society adopting technologies faster than it’s ever done before? Perhaps it feels like it. It certainly makes for a good headline, a good stump speech, a good keynote, a good marketing claim, a good myth. But the claim starts to fall apart under scrutiny.

    This graph comes from an article in the online publication Vox that includes a couple of those darling made-to-go-viral videos of young children using “old” technologies like rotary phones and portable cassette players – highly clickable, highly sharable stuff. The visual argument in the graph: the number of years it takes for one quarter of the US population to adopt a new technology has been shrinking with each new innovation.

    But the data is flawed. Some of the dates given for these inventions are questionable at best, if not outright inaccurate. If nothing else, it’s not so easy to pinpoint the exact moment, the exact year when a new technology came into being. There often are competing claims as to who invented a technology and when, for example, and there are early prototypes that may or may not “count.” James Clerk Maxwell did publish A Treatise on Electricity and Magnetism in 1873. Alexander Graham Bell made his famous telephone call to his assistant in 1876. Guglielmo Marconi did file his patent for radio in 1897. John Logie Baird demonstrated a working television system in 1926. The MITS Altair 8800, an early personal computer that came as a kit you had to assemble, was released in 1975. But Martin Cooper, a Motorola exec, made the first mobile telephone call in 1973, not 1983. And the Internet? The first ARPANET link was established between UCLA and the Stanford Research Institute in 1969. The Internet was not invented in 1991.

    So we can reorganize the bar graph. But it’s still got problems.

    The Internet did become more privatized, more commercialized around that date – 1991 – and thanks to companies like AOL, a version of it became more accessible to more people. But if you’re looking at when technologies became accessible to people, you can’t use 1873 as your date for electricity, you can’t use 1876 as your year for the telephone, and you can’t use 1926 as your year for the television. It took years for the infrastructure of electricity and telephony to be built, for access to become widespread; and subsequent technologies, let’s remember, have simply piggy-backed on these existing networks. Our Internet service providers today are likely telephone and TV companies; our houses are already wired for new WiFi-enabled products and predictions.

Economic historians who are interested in these sorts of comparisons of technologies and their effects typically set the threshold at 50% – that is, how long does it take after a technology is commercialized (not simply “invented”) for half the population to adopt it. This way, you’re not only looking at the economic behaviors of the wealthy, the early-adopters, the city-dwellers, and so on (but to be clear, you are still looking at a particular demographic – the privileged half).

    And that changes the graph again:

How many years do you think it’ll be before half of US households have a smart watch? A drone? A 3D printer? Virtual reality goggles? A self-driving car? Will they? Will it be fewer than 9 years? I mean, it would have to be if, indeed, “technology” is speeding up and we are adopting new technologies faster than ever before.

Some of us might adopt technology products quickly, to be sure. Some of us might eagerly buy every new Apple gadget that’s released. But we can’t claim that the pace of technological change is speeding up just because we personally go out and buy a new iPhone every time Apple tells us the old model is obsolete. Removing the headphone jack from the latest iPhone does not mean “technology changing faster than ever,” nor does showing how headphones have changed since the 1970s. None of this is really a reflection of the pace of change; it’s a reflection of our disposable income and an ideology of obsolescence.

    Some economic historians like Robert J. Gordon actually contend that we’re not in a period of great technological innovation at all; instead, we find ourselves in a period of technological stagnation. The changes brought about by the development of information technologies in the last 40 years or so pale in comparison, Gordon argues (and this is from his recent book The Rise and Fall of American Growth: The US Standard of Living Since the Civil War), to those “great inventions” that powered massive economic growth and tremendous social change in the period from 1870 to 1970 – namely electricity, sanitation, chemicals and pharmaceuticals, the internal combustion engine, and mass communication. But that doesn’t jibe with “software is eating the world,” does it?

    Let’s return briefly to those Horizon Report predictions again. They certainly reflect this belief that technology must be speeding up. Every year, there’s something new. There has to be. That’s the purpose of the report. The horizon is always “out there,” off in the distance.

    But if you squint, you can see each year’s report also reflects a decided lack of technological change. Every year, something is repeated – perhaps rephrased. And look at the predictions about mobile computing:

    • 2006 – the phones in their pockets
    • 2007 – the phones in their pockets
    • 2008 – oh crap, we don’t have enough bandwidth for the phones in their pockets
    • 2009 – the phones in their pockets
    • 2010 – the phones in their pockets
    • 2011 – the phones in their pockets
    • 2012 – the phones too big for their pockets
    • 2013 – the apps on the phones too big for their pockets
    • 2015 – the phones in their pockets
    • 2016 – the phones in their pockets

    This hardly makes the case for technological speeding up, for technology changing faster than it’s ever changed before. But that’s the story that people tell nevertheless. Why?

I pay attention to this story, as someone who studies education and education technology, because I think these sorts of predictions, these assessments about the present and the future, frequently serve to define, disrupt, destabilize our institutions. This is particularly pertinent to our schools, which are already caught between a boundedness to the past – replicating scholarship, cultural capital, for example – and the demands that they bend to the future – preparing students for civic, economic, social relations yet to be determined.

    But I also pay attention to these sorts of stories because there’s that part of me that is horrified at the stuff – predictions – that people pass off as true or as inevitable.

    “65% of today’s students will be employed in jobs that don’t exist yet.” I hear this statistic cited all the time. And it’s important, rhetorically, that it’s a statistic – that gives the appearance of being scientific. Why 65%? Why not 72% or 53%? How could we even know such a thing? Some people cite this as a figure from the Department of Labor. It is not. I can’t find its origin – but it must be true: a futurist said it in a keynote, and the video was posted to the Internet.

The statistic is particularly amusing when quoted alongside one of the many predictions we’ve been inundated with lately about the coming automation of work. In 2014, The Economist asserted that “nearly half of American jobs could be automated in a decade or two.” “Before the end of this century,” Wired Magazine’s Kevin Kelly announced earlier this year, “70 percent of today’s occupations will be replaced by automation.”

    Therefore the task for schools – and I hope you can start to see where these different predictions start to converge – is to prepare students for a highly technological future, a future that has been almost entirely severed from the systems and processes and practices and institutions of the past. And if schools cannot conform to this particular future, then “In fifty years, there will be only ten institutions in the world delivering higher education and Udacity has a shot at being one of them.”

Now, I don’t believe that there’s anything inevitable about the future. I don’t believe that Moore’s Law – that the number of transistors on an integrated circuit doubles every two years and therefore computers are always exponentially smaller and faster – is actually a law. I don’t believe that robots will take, let alone need take, all our jobs. I don’t believe that YouTube has rendered school irrevocably out-of-date. I don’t believe that technologies are changing so quickly that we should hand over our institutions to entrepreneurs, privatize our public sphere for techno-plutocrats.

    I don’t believe that we should cheer Elon Musk’s plans to abandon this planet and colonize Mars – he’s predicted he’ll do so by 2026. I believe we stay and we fight. I believe we need to recognize this as an ego-driven escapist evangelism.

I believe we need to recognize that predicting the future is a form of evangelism as well. Sure, it gets couched in terms of science; it is underwritten by global capitalism. But it’s a story – a story that then takes on these mythic proportions, insisting that it is unassailable, unverifiable, but true.

    The best way to invent the future is to issue a press release. The best way to resist this future is to recognize that, once you poke at the methodology and the ideology that underpins it, a press release is all that it is.

    Image credits: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28. And a special thanks to Tressie McMillan Cottom and David Golumbia for organizing this talk. And to Mike Caulfield for always helping me hash out these ideas.
    _____

Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.

  • Video Essay: All That Is Solid Melts Into Data

    dir. Ryan S. Jeffery and Boaz Levin

    This film is posted in anticipation of boundary 2‘s upcoming special issue –– Bernard Stiegler: Amateur Philosophy (January 2017).

Equal parts building and machine, a library and a public utility, data centers are the unwitting monuments of knowledge production in the digital turn. This film traces the historical evolution of these structures that make up “the cloud,” the physical repositories for the exponentially growing amount of human activity and communication taking form as digital data. While our “smart tools” and devices for communication become increasingly smaller, thinner, and sleeker, the digital sphere they require grows larger, demanding an ever-growing physical infrastructure, affecting and shaping our physical landscape. This film looks to the often-overlooked materiality of networked technologies in order to elucidate their social, environmental, and economic impact, and calls into question the structures of power that have developed out of the technologies of global computation.

  • Zachary Samalin: Genealogies of Self-Accusation


    by Zachary Samalin

    Response to Bruce Robbins: On the Non-representation of Atrocity

    This essay was peer-reviewed by the editorial board of b2o: an online journal.

    In his V21 symposium keynote lecture, “Atrocity in the Novel, Atrocity in History,” Bruce Robbins asks whether it is reasonable or instead “unacceptably presentist” to “expect the great epoch of European realism to ‘do’ atrocity in the particular, self-accusing sense” he is interested in examining, in which “‘we’ accuse ourselves of doing something outrageously cruel, collective, and indiscriminate to ‘others.’” “Arguably,” Robbins continues, “such representations only became possible after European civilization had been shocked out of its pre-Copernican complacency by the Holocaust and the rise of anti-colonial movements. In the nineteenth century, those shocks were still to come” (Robbins 2016: 4-5). Perhaps not surprisingly in a room full of Victorian literature specialists, the response to Robbins’ lecture during the question and answer session produced a long list of 19th century works that audience members thought would complicate, enrich, trouble or outright repudiate Robbins’ hypothesis that the literature of the 19th century had yet to achieve a certain form of critical self-consciousness, and so was incapable of indicting political brutality and violence. To the contrary, this audience response seemed to suggest, the archive of 19th century literature is rife with examples of just what Robbins is looking for.

    In the following response to Robbins’ lecture, I want to theorize more specifically the tension between these two seemingly irreconcilable positions, by examining one of Robbins’ central theses about the entwinement of politics and aesthetics—namely, that literature can and perhaps ought to lay claim to a privileged role in the articulation of “civilizational self-accusation,” especially in the context of the atrocities of modern imperialism. The notion that the literary has the capacity to register unwanted self-implication in destructive sociopolitical processes is extremely compelling; but, unlike Robbins, I have come to associate this aesthetic innovation with various currents in 19th century literature. And yet, as half a century of postcolonial literature and theory has helped us to see, this sophisticated innovation, which allowed for the registration, in narrative form, of undesired conditions of immanence, did little to turn the critical gaze of the 19th century novel outwards, that is, towards the ongoing atrocity of the British empire. When we read the literature of the mid- to late-19th century—Little Dorrit (1857), Notes from Underground (1864), The Belly of Paris (1873)—we don’t find a journalistic subjectivity reporting on the turbulent decades of perpetual war in Algeria, Persia, the Crimea, India, Burma, Vietnam, and China; but we do encounter a complex structure of feeling, beginning to emerge as something articulable, that conceived of modernity as a process of regressive self-destruction and of civilization as something unwanted that would soon sour itself from the inside out.
In this respect, the question that Robbins’ lecture raises is, to my mind, not whether it is too ‘presentist’ to expect Flaubert or Dickens to have offered a critique of atrocity, but rather the enduring, perhaps more disturbing question of what specific forms of ideological blindness kept the novel form from extending the implications of its own socially critical and ethico-political insights to the imperial context.

    The first point to make is that, when it came to its atrocities, 19th century Britain left behind an indisputably immense non-literary paper trail. Certain brutal events in the maintenance of the empire—such as the violent responses to the Morant Bay rebellion (1865) and the Indian revolt (1857-8)—were not only voluminously documented, but debated publicly and at length, and did much to bring to the fore the question of what it means to participate in a putatively modern and morally enlightened national culture. More often than not, as has been well established, such debates served to mask the violence intrinsic to imperialism and capitalism, focusing instead on the extent to which particular episodes of brutality and exploitation represented local failures and setbacks in the ongoing civilizing project of the British Empire. Thus while Governor Eyre came under fire in the aftermath of Morant Bay, the terms of public debate set by the Jamaica Committee did little to overturn the entrenched patterns of racist thought and economic opportunism which helped to prop up the central premises of imperial exploitation (see Holt 1992: 278-312). Like a good deal of the public and official reaction to the documentation of torture at Abu Ghraib prison in our own day, Morant Bay provided a space for a limited articulation of civilizational self-accusation in British public discourse—‘we don’t do that’—but only within a larger self-serving framework of disidentification, disavowal and civilizational (which is to say racial and cultural) arrogance that helped keep the inherent injustice of imperial occupation from taking center stage. Indeed, one limitation of framing critique in reference to specific atrocities made apparent through these examples is that the focus on the event of cruelty and violence runs the risk of obscuring patterns of ongoing or systemic exploitation.

    Yet in their most trenchant form, 19th century critiques of imperialist violence did approach the form of self-critique that Robbins holds up as a more modern ideal. Marx’s criticism of the 1855 Report of the Commissioners for the Investigation of Alleged Cases of Torture in the Madras Presidency is exemplary in this respect (see Rao 2001). The report sought to establish the prevalence of physical torture and brutality as a systemic means of extracting tax revenue within British India for the profit of the East India Company, only to disavow responsibility for that violence and to condemn it, with characteristic outrage and condescension, in the racialized language of barbarism. “Our aim,” the report concludes, “is to guard the Natives against themselves” (Report 1855: 70). As Marx summarized the report, “The universal existence of torture as a financial institution of British India is thus officially admitted, but the admission is made in such a manner as to shield the British Government itself” (Marx [1857]1975: 66). Yet as Marx goes on to observe, “a few extracts from the evidence on which the Madras Report professes to be founded, will suffice to refute its assertion that ‘no blame is due to Englishmen,’” and to document instead the systematically exploitative nature of capitalist imperialism. Far from evidencing the need for colonial paternalism, Marx thought the report ought to raise for the “dispassionate and thoughtful men” of Europe the more self-implicating question of “whether a people are not justified in attempting to expel the foreign conquerors who have so abused their subjects” (Marx 1975: 69). 
Marx’s indictment of the Madras Report may not be precisely what Robbins has in mind when he argues for the cosmopolitan modernity of civilizational self-accusation as a “very special subset of atrocity-response in which ‘we’ accuse ourselves of doing something outrageously cruel, collective, and indiscriminate to ‘others’” (Robbins 2016: 2)—but if not, it is certainly a close relative.

    While Marx’s writings on India often lapse into a more rigidly developmental-teleological mode, according to which capitalism represents the first step necessary for Asian civilizations to catch up with world history, his observations about the Madras Report do more to highlight the complex ways that the question of identification came in this period to animate the representational dynamic of critique. The difference between the critical language of civilizational self-accusation, as Robbins formulates it, and the exculpatory language of civilizational disavowal, as exemplified by the Madras Report, hinges precisely on such vectors of identification—that is, on a speaker’s imagined participation in a particular ideological community. In this respect, while Robbins observes that “the modern weakening of membership” is a prerequisite for the distance needed to understand atrocity as such, I would argue that the unwanted (but inescapable) identification with destructive processes is in fact the crucial psychosocial component he ought to pursue, rather than the fraying of communal bonds more customarily associated with the onset of modernity (Robbins 2016: 1). Due in large part to a post-Enlightenment legacy that idealizes disinterestedness and objective distance, we have yet to provide even the basic outline of a history for this capacity for unwanted identification.

    Understanding how these two opposite movements—towards a desirable disinterest and an undesired involvement—were fused to one another throughout the 19th century is a significant and unfinished task for scholars of the period, in the first place because their fusion accounts for the antithetical attachments to the impulse to document violence and atrocity that I have been describing. The imperialist impulse to represent violence in order to disavow it as something always perpetrated by an other, or to frame it as an exceptionality that justifies rule, cannot be fully distinguished from the self-implicating impulse to expose that violence as immanent to modernity. This is in part because they share the same language, as reflected by Marx’s insistence that blue books are the only evidence of systemic violence one needs. Though we often think of Marxist thought as working to fill in the gaps in the official discourse, I am suggesting instead that we attend to what Marx presupposes is the radical transparency of the language of domination—the presupposition that violence and exploitation had become self-evident, and were written brazenly on the surface of things in the language of the perpetrators. We might therefore take Robbins’ call to place the writing of atrocity within a longue durée of moral development as an invitation to theorize this intersection of the genealogy of self-accusation and unwanted identification with the historical transformations which allowed atrocity to be written legibly and out in the open, rather than hidden or buried in secret.

    At the same time that we see extensive evidence of such a complex public discourse for engaging atrocity in 19th century Britain, we also know that in different national and cultural contexts, literary and artistic production began to develop a wide array of aesthetic strategies for representing atrocity throughout the 19th century while simultaneously problematizing the presumed security of the disinterested observer. Goya’s Disasters of War come to mind, as does the archive of 19th century photographs that Nathan Hensley and Zahid Chaudhary have recently written about; indeed Hensley has helped us to see precisely how these hermeneutic questions about the representation of violence and its implied spectators remain unanswered in the aftermath of empire (see Chaudhary 2012; Hensley 2013). Similarly, slave narratives and abolitionist literature in the United States—which of course tended not to focus only on specific atrocities but on the systemic and juridical nature of slavery under capitalism—bear directly on Robbins’ claims about the 19th century’s representational capacity for moral indictment. However, I present these not so much as counter-examples, but rather as indices of the more particular absence that Robbins has helped us to identify. We know that British imperial atrocities were voluminously documented and often publicly debated as potentially undermining the civilizational project; we know that the 19th century saw the development of a more radical social scientific and socially critical discourse of self-accusation, which sprouted up out of an official discourse of disavowal; and, finally, we know as well that other aesthetic traditions in other cultural contexts have done a better job than the British novel at representing atrocities through some form of self-accusation or communal indictment.

    So then one question: What to call this kind of ideological absence or moral-aesthetic caesura? How does it work, and how can we grasp its psychosocial dynamics? I put the question this way, since we have previously relied on the vocabulary of symptom and repression to elaborate precisely these absences. And yet it seems clear, today, as it has for some time, that the tools afforded by the vocabulary of cultural neurosis don’t quite satisfy here, given that we are not dealing with an occluded or concealed discourse of atrocity that “returns” from its repression in the interstices of the literary text, but rather with the more disjointed, more deranged fact that this proliferate and public discourse did not find its fullest expression in the exemplary aesthetic form of the period, that is, in the novel. Why not? My sense is that we still need to sharpen and refine our historical account of the ways in which representation functions vis-à-vis the intolerable, the unwanted, the atrocious, and the unrepresentable—a newly sharpened account of the writing of the disaster that takes into account the different species of blindness and specific patterns of resistance endemic to modern literary forms.

    These caesuras in the political consciousness of the Victorian novel become all the more jarring when we consider that, over the 19th century, literary texts, and perhaps the novel in particular, emerged as the cultural laboratory for testing out Enlightenment ideals and for exposing them as violent or vacuous, as cruelty in themselves—whether in the name of reactionary sentiment or liberalizing social critique or some impulses more nihilistic than either of those. I am thinking of earlier works like Juliette and Gulliver’s Travels just as much as later, increasingly socially engaged texts such as Our Mutual Friend, La Terre, Notes from Underground and Jude the Obscure. Considered from this angle, the literary domain in the 19th century was a sophisticated and complex arena for elaborating a deeply affective experience of unwanted self-implication and inevitable participation in a destructive order, founded on tenuous, inverted values.

    Even if the 19th century did not “possess a public capable of demanding or enforcing scrutiny of ourselves from outside” (Robbins 2016: 24), it is clear to my mind that later authors as diverse as Achebe, Vallejo and Sebald returned to this more nihilistic 19th century conception of literature as a privileged space for giving voice to an unwanted relation of immanence in the destructive processes of modernity. Indeed, the outraged self-accusation Robbins describes, in order to transcend mere bad faith or ressentiment, needs to involve a more disturbing set of identifications than simply seeing oneself as though from without. A literary genealogy of civilizational self-accusation, then, might follow unpredictable lines back through unexpected pages, from the mushroom clouds of the 20th century Robbins begins with to the storm-clouds of the 19th. How can we further specify and describe this negative structure of feeling in the novel, give it a longer history that doesn’t stop and start according to the arbitrary constraints of post-hoc periodization, and which attends to its ever-shifting blind spots and its insights alike?

    References

    Chaudhary, Zahid. 2012. Afterimage of Empire: Photography in Nineteenth-Century India. Princeton, NJ: Princeton University Press.

    Hensley, Nathan. 2013. “Curatorial Reading and Endless War.” Victorian Studies 56, no.1: 59-83.

    Holt, Tom. 1992. The Problem of Freedom. Baltimore: Johns Hopkins University Press.

    Marx, Karl. (1857) 1975. “Investigations of Tortures in India.” Reprinted in Marx, The First Indian War of Independence, 1857-1859. Moscow: Progress Publishers.

    Rao, Anupama. 2001. “Problems of Violence, States of Terror: Torture in Colonial India.” Interventions 3, no. 2: 186-205.

    Report of the Commissioners for the Investigation of Alleged Cases of Torture in the Madras Presidency. 1855. Madras: Fort St. George Gazette Press.

    Robbins, Bruce. 2016. “Atrocity as Self-Accusation.”


    CONTRIBUTOR’S NOTE

    Zachary Samalin is Assistant Professor of English at the University of Chicago.  He is currently working on a manuscript, The Masses Are Revolting: Victorian Culture and the Aesthetics of Disgust.

  • Bruce Robbins: On the Non-representation of Atrocity


    by Bruce Robbins

    The closing day featured a formal keynote address by Bruce Robbins, followed by responses.  While the keynote practices a rousing, engaged, presentist, theoretical Victorian studies, the responses by Zach Samalin and Molly Clark Hillard, and the heated discussions at the symposium, point to other futures.  Elaine Hadley integrated a number of the arcs of discussion while also highlighting what remains to be argued.  We are grateful to b2o for providing this catalyst for yet more.    

    This essay was peer-reviewed by the editorial board of b2o: an online journal.

    Toward the end of Michael Ondaatje’s novel The English Patient (1992), the young Canadian ex-nurse Hana writes in a letter home to her stepmother: “From now on, I believe the personal will forever be at war with the public” (Ondaatje 1992, 292).

    Hana has just heard about the bombing of Hiroshima and Nagasaki, news that has shocked her Sikh lover Kip into leaving both her and the anti-Nazi war effort.  The unending war between the public and the personal that Hana dates “from now on” is the result of what we have come to call an atrocity: an act of extreme cruelty that is collective, unnecessary, and indiscriminate, the latter two adjectives judged to apply because (here I quote Jacques Sémelin’s definition of “massacre” in his book Purify and Destroy) it is “aimed at destroying non-combatants” (Sémelin 2007: 4). I will withhold comment for now on whether the war between the public and the personal (which echoes a vocabulary put in play a few years earlier by Fredric Jameson) is as new as Hana thinks; it sounds pretty Victorian to me.  But the atom bomb was definitely new.  And as a concept, the atrocity is also pretty new.  The idea of the “non-combatant” dates only from the Napoleonic Wars.  Both “non-combatant” and “atrocity” would seem to require the modern weakening of membership–the still recent assumption that individuals should not be held responsible for actions taken by the families or nations to which they belong.  “Cruel” and “fierce,” the meanings of “atrox,” the Latin source word for “atrocity,” did not begin their lives as pejoratives, but picked up pejorative meanings only as physical violence came to seem a less dependable aspect of ordinary lives, something that generally could and should be avoided.  The re-classification of violence as out of the ordinary is again associated, perhaps only wishfully, with modernity.

    But you only feel how very modern Ondaatje’s naming of the atom bomb as atrocity is when you add one more element.  Kip and Hana are recoiling from an action performed by their own side.  This is a moment of civilizational self-accusation.  It belongs to the very special subset of atrocity-response in which “we” accuse ourselves of doing something outrageously cruel, collective, and indiscriminate to “others.”

    Yes, Ondaatje is a Canadian and a Sri Lankan; Kurt Vonnegut’s Slaughterhouse Five might have been a better as well as an earlier example.  And yes, to play up the look-at-us-admitting-the-terrible-things-we-did-to-others criterion, as I’m preparing to do, could be seen as a celebratory re-write of Enlightenment self-scrutiny, in other words as a way of once again giving credit to the modern West for a virtue on which it has often prided itself, perhaps excessively.  Undeterred by these objections, I am going to forge ahead, assigning atrocity as self-accusation an important part in the long-term moral history of humankind and indicating a desire, at least, to place the novel within that larger history.  This of course assumes there exists such a thing as the long-term moral history of humankind.  It assumes that history need not be understood as a finer and finer discrimination of differences (a habit that I think the V21 group has very usefully expressed its impatience with) but can also be thought of as a series of experiments in the synthesis of differences—bold generalities, even “grand narratives.”

    It’s from the perspective of the long-term moral history of humankind that the question of atrocity is most interesting, and most humbling, for specialists in nineteenth-century British literature.  In the late 1970s, the editors of the journal New Left Review conducted a book-length series of interviews with Raymond Williams.  The interview that hit me hardest at the time dealt with Williams’ admiration for the novels of the 1840s, about which I had just heard him lecture.[i]  “In that decade,” the interviewers say,

    there occurred a cataclysmic event, far more dramatic than anything that happened in England, a very short geographical distance away, whose consequences were directly governed by the established order of the English state.  That was of course the famine in Ireland—a disaster without comparison in Europe.  Yet if we consult the two maps of either the official ideology of the period or the recorded subjective experience of its novels, neither of them extended to include this catastrophe right on their doorstep, causally connected to socio-political processes in England. (Williams 1981, 170)

    If this is true for catastrophic events in Europe, how much more true is it, the interviewers ask, for more distant colonies like India, where events were again directly affected by the imperial system?

    The NLR interviewers are asking us to imagine that even the English literature of the 1840s we most admire today was unable to represent disasters or cataclysmic events for which England was itself responsible, directly or indirectly.  It does not seem implausible that atrocity-representation in the narrow, self-accusatory sense might simply be missing from the history of the 19th century novel.  If you think of its greatest works, direct representations of any atrocity are certainly not the first things that come to mind.  We know our authors could express horror at the 1857 Mutiny in India or the Bulgarian Atrocities (committed by the Ottomans) or King Leopold’s mischief in the Congo or the occasional scene of mob violence.  But perhaps they simply could not summon up any English equivalent to Vonnegut’s horror at the Allied bombing of Dresden.  Perhaps the English could not imagine accusing themselves, at least not from the viewpoint of the non-English, at least not when the accusation would have been damning.  Were we to accept this hypothesis, which I offer up here as nothing more than a hypothesis, it seems clear that some of the going rationales for nineteenth-century studies, and maybe even for literary criticism in general, would be in jeopardy.

    In self-defense, we could of course argue that the criterion of self-accusation is unacceptably presentist. How could one expect the great epoch of European realism to “do” atrocity in the particular, self-accusing sense? Arguably such representations only became possible after European civilization had been shocked out of its pre-Copernican complacency by, for example, the Holocaust and the rise of anti-colonial movements. In the nineteenth century, those shocks were still to come. It would therefore be anachronistic to expect European literature to have re-set its default settings, which were presumably nationalist or at least national, and to have experimented even intermittently with cosmopolitan self-consciousness. Another field-defensive move would be to focus on the canon’s experimental outliers. As some of you probably know, there exists a body of scholarship qualifying the claim that outside Ireland the Irish Famine did indeed go unrepresented. Much of that scholarship deals with minor works by Trollope. To me, those works seem both aesthetically and politically uninspiring. But perhaps one can do better. More inspiring, among the potential counter-examples, would be Multatuli’s 1860 novel Max Havelaar: Or the Coffee Auctions of the Dutch Trading Company, which has been credited with starting the anti-colonial movement in Indonesia.  Or Tolstoy’s final work of fiction, Hadji Murat.

    Hadji Murat is set during the mid-19th century Russian conquests of the East that Tolstoy himself participated in as a young man and that so neatly mirror the genocide of the Native Americans that the US was carrying out in the same years in the American West.  At one point it describes the destruction of an indigenous village in the Caucasus in what would now be called Chechnya.  Tolstoy shows us the army’s burning of the Chechen village through the eyes of a Russian soldier.  The Russian’s mind is elsewhere, preoccupied with a theme that could not be more conventional for people like him: money he has lost at cards.  For him it is an unremarkable day, so the reader sees nothing remarkable: “War presented itself to him only as a matter of subjecting himself to danger, to the possibility of death, and thereby earning awards, and the respect of his comrades here and of his friends in Russia. . . . The mountaineers [he does not call them Chechens] presented themselves to him only as dzhigit horsemen from whom one had to defend oneself” (Tolstoy 2009: 78). Given this failure of imagination on the Russian side, the narrator must step in and, somewhat intrusively, make a connection on the next page that no one within the novel’s world is there to make:

    The aoul devastated by the raid was the one in which Hadji Murat had spent the night before his coming over to the Russians. . . . When he came back to his aoul, [Sado, at whose house we have seen Hadji Murat greeted hospitably in the novel’s first scene despite the extreme danger the host is in] found his saklya destroyed: the roof had fallen in, the door and posts of the little gallery were burned down, and the inside was befouled. His son, the handsome boy with shining eyes who had looked rapturously at Hadji Murat, was brought dead to the mosque on a horse covered by a burka. He had been stabbed in the back with a bayonet. (Tolstoy 2009, 79).

    The sentence about the child bayoneted in the back does not end the paragraph.  There is no pause for the drawing of conclusions, moral or otherwise.  It’s as if, from a Russian point of view, a Chechen child who has been bayonetted in the back is not ethically or emotionally forceful enough to interrupt the flow of narration, not enough to justify even the briefest of hesitations.  It’s not surprising that Tolstoy could not get that book published in full in his country in his lifetime.  It’s surprising that he left this record at all.

    Something could no doubt be said about the depiction in nineteenth-century literature of the poor and the homeless as internal aliens, hence sufficiently “other” to count as victims of atrocity in my limited sense.  I’m thinking of, say, Victor Hugo (the army firing on the barricades in Les Misérables) or Bleak House’s description of the death of Jo: “And dying thus around us every day” (Dickens 1998, 677).   One could also go back to the criterion that the NLR interviewers apply to Raymond Williams (and that Williams himself does not dispute): the premise that criticism should aim to reconstruct, through literature, “the total historical process at the time” (Williams 1981, 170).  Who says the novelists of the 1840s were obliged to talk about the Irish Famine, a (to them) invisible part of the (to us) larger causal system?[i] Perhaps this is asking for something the novel simply could not and even cannot deliver. Perhaps we should content ourselves with what it can deliver, even if that seems a humbler thing.  This line of thinking may have encouraged some critics to urge a dialing back of the political and ethical claims we make.  A modest anti-presentism of this sort would certainly make it easier for those 19th century specialists who are professionally uncomfortable with atrocity to return to what they were already doing, undisturbed by any nagging sense of responsibility to imperatives they see as coming from outside the field.

    My own impulse is not to back down from “the total historical process” criterion.  Which means I’m stuck with atrocity, however presentist the topic may seem.  What I’d like to try out therefore is a different negotiation between present imperatives and period loyalties, between history as the proliferation of differences (differences that may turn out to be trivial) and history as synthesis (synthesis that avoids triviality but could seem to lack rigor as the field defines it).

    The concept of atrocity may be new, but the thing of course is not. It seems admirable to me that much new scholarship is willing to hold off on the familiar nominalist-historicist move (there is no true history but the recent history of the name, the concept) and instead to take on the deeper history of as yet unnamed things, a trans-historical history much of which (like the atrocity) is inescapably pre-modern.  I’m thinking for example of the thunderous “no!” to periodization itself that is proclaimed in Susan Stanford Friedman’s Planetary Modernisms and the challenge to “periodizing divisions between premodern and modern” in the introduction that Sahar Amer and Laura Doyle wrote to “Reframing Postcolonial and Global Studies in the Longer Durée,” a special section of the latest PMLA (Friedman 2015, 331). Both texts accuse conventional periodization of sustaining Eurocentrism.  It seems to me that both share important concerns with the V21 manifesto and its impatience with period-centered thinking.

    I hope you agree that the V21 project belongs in the context of a broader acknowledgment that learning to work in an enlarged, trans-period time scale is no longer optional.  The reasons behind this new temporal common sense are not unfamiliar, but it may be helpful to gather a few of them together. Among the best known is the emergence of the term “anthropocene” to mark the salience of an ecological perspective at the level of the planet.  Among the least known is the emergence of an international movement of indigenous peoples, one premise of which is that colonialism is not something done solely by European settlers or done solely after 1492.  Joining the two are books like Pekka Hämäläinen’s The Comanche Empire, which gives the Comanches credit, if that’s the right word, for themselves practicing colonialism, and justifies their conduct (again, if justifying remains a pertinent concept) in terms of their superior ecological adaptation.  Logically enough, the new sub-disciplines of “world” history and “big” history are notable for an impulse, sometimes conscious and sometimes not, to do without moral judgment entirely. Some declare that to arrange history around the values of “democracy,” for example, would be inexcusably teleological and provincial. The same vector appears in another important zone of temporal stretching: the postcolonial critique of Eurocentrism. Here of course it seems even more paradoxical, dependent as postcolonial studies has been on a politicized model of European core, non-European periphery. But as Alexander Beecroft has argued, this model, useful enough for the recent past, simply doesn’t apply for most of the world’s cultures during most of the world’s history. China and India two or three or four thousand years ago were in no sense peripheries to Europe’s core.
It would be temporally parochial, therefore, to take the particular inequalities and injustices of the recent past as a guide to the interpretation of Indian or Chinese culture. Thus the cosmopolitanism with which we are most familiar, call it cosmopolitanism in space, brings with it a corresponding cosmopolitanism in time, and this temporal cosmopolitanism ends up undermining habits of ethico-political judgment based on an outmoded core-periphery geography. Here I am re-describing the emergence of a somewhat depoliticized “world literature” out of a very political “postcolonial studies.” For better or worse, re-describing it in this way makes it harder to complain about.

    Expanding our time-frame seems inevitable. As does some evening out of the blame for imperialism, which can no longer seem the moral burden of Europe alone. The long-term question for V21, it seems to me, is how to manage this expansion beyond the period while sustaining the moral and political commitments that make the critical enterprise worth doing at all.  The immediate question is where in this revisionist scale and sense of history I can find a home for my interest in atrocity, an interest that takes for granted the centrality of critique.

    From this perspective, the first thing I notice about interesting new work on an expanded time-scale is that atrocity tends to get left out. For Amer and Doyle, the familiar European version of imperialism was only one in a long series of imperialisms before 1500, many of them non-European. Rather than insisting that the presence or absence of capitalism made all the difference, they suggest, we need to find a way of talking about European and non-European imperialisms in the same breath. That seems right. But what this can mean in practice is that imperialism’s violence is omitted, perhaps because it is assumed that moral critique of imperialism would be anachronistic and/or Eurocentric or because blaming has come to seem pointless and irrelevant.  Hence there is no vocabulary for atrocities. Historically speaking, Amer and Doyle are gradualists. The premodern for them was already modern; the difference is merely a matter of detail and degree. From their moderate anti-periodization position, anything that looks like violent rupture, such as modernity, is actually always the result of small, slow accretions.  It’s as if their distaste for violent rupture at the level of periodization is duplicated in a distaste for violence as social content. Violence exists for them, of course, but not as a conundrum; it’s not interesting enough to demand interpretation. What’s interesting about the world’s interconnectedness is commercial contact and cultural exchange. There are empires, but when it’s pre-moderns or (especially) non-Europeans who are doing the slaughtering and conquering, what suddenly kicks in is a great deal of respect for the empire-builders and for the cultural consequences of their empire- building.  Coercion is not absolutely forgotten, but it’s rarely stage center. 
This is arguably just as presentist as the older focus on domination and atrocity, but it’s presentist in a different way: a projection onto the past of globalization’s smug, all-cultures-are-equal case, a case which does not harp on inequalities of economic and political power.

    The closest Susan Stanford Friedman comes to a statement on imperial coercion is as follows: “empires typically intensify the rate of rupture and accelerate change in ways that are both dystopic and utopic” (Friedman 2015, 337). What she calls “brutalities” can of course be recognized, but only as a general phenomenon that 1) is balanced in advance by the “utopic” aspects of empire, and in part for that reason, 2) is in no way interesting or worthy of being investigated (Friedman 2015, 337). The problem here is not the reluctance of a sluggish, fuddy-duddy field to innovate. The problem is the innovation, an anti-rupture position that makes things like atrocity harder to see, or to teach. Sometimes that seems to be the whole point of innovating. I think for example of Rita Felski’s mobilizing of Actor Network Theory against “the rhetoric of negativity that has dominated literary studies in recent years: a heavy reliance on critique and the casting of aesthetic value in terms of negation, subversion, and rupture.”

    Neither history’s narrative form nor its social content can be all rupture all the time.  But unless it has rupture in it, it’s not history at all.  And even those of us who are most impatient with the restrictiveness of existing periodization should not want, finally, to give up on history as such.  Laura Doyle notes that there were slave revolts in the Abbasid Empire of the 9th century just as there were “anticolonial movements” in the twentieth century (Doyle 2015, 345). This observation only becomes genuinely historical if one goes on to ask whether the slave revolts of the 9th century might have been different in kind–more precisely, whether they were in fact anti-colonial or anti-imperialist.  They may have been, and they may not have been. These may have been slaves who not unreasonably preferred to have slaves rather than to be slaves.  The difference is important.  In order to know, you would have to be interested not just in the history of imperialism, but in the history of anti-imperialism.  You would have to decide that anti-imperialism has a history.  It’s the difference between asking when people were merely complaining that we suffer under imperial rule (probably as long as there have been conquests) and when they began saying that others may have suffered under our rule–a universalizing moment that is probably more recent and more rare. This would bring us back to the representation of atrocity as self-accusation.

    If there was a moment when the feeling “I am angry at your country for conquering mine and ruling it by a harsher standard than you apply to your own” metamorphosed into something like “it is wrong for any country, including my own, to conquer any other,” wouldn’t we want to know something about it? It might turn out that this only occurs with or after that violent rupture we call modernity. As a historical fact, wholesale raping, pillaging, plundering, and slaughtering are of course characteristic of many if not most pre-modern societies. I think for example of the ethnic cleansing of the Midianites in the Old Testament, which raises a red flag for Moses only because his troops left the very old and the very young Midianites alive, alongside the nubile maidens, and therefore had to be told to go back and finish the job. The chapter on the ancient Near East and classical Greece in David Johnston’s magisterial history of justice concludes that “commitments to freedom and equality” are “nowhere to be seen” in the domestic laws of the ancient world, but it doesn’t even bother to ask about foreign policy–about the possible existence of scruples as to, say, violence against members of other groups, tribes, nations (Johnston 2011, 15). For “our” treatment of “them,” there were no rules. As Michael Freeman says in the entry for “Genocide” in the Dictionary of Ethics, Theology and Society: “Genocide was not a moral problem for the ancient world. It is for the modern world because moral and political values have changed” (Clarke and Linzey 1996, 403). As everyone knows, the Greek word from which we get apology, apologia, “does not involve an acknowledgement of transgression and, thus, needs no request for pardon or forgiveness” (Lazare 2004, 24). Atrocity is everywhere in ancient times, but not (to my knowledge) as representation.
In the West, at any rate—I can’t speak for other cultures, and I have some trouble pretending to speak for the West—it is only when “the moral and political values have changed” that one can expect to see representations of atrocity.  If we say that the atrocity is a construct, one thing we would mean is that in order for it to be discussed, a moral norm that it violates first had to emerge or be invented.  It’s in this sense that, even if representations of atrocity are indeed missing from the great literature of the 19th century, the atrocity is also a nineteenth century topic.

    I am not talking here about Steven Pinker’s highly questionable argument that modernity is in some fundamental way opposed to violence.  (This from someone whose book has no entry in its index for “colonialism”!)  I am talking only about the emergence of moral norms, whether or not those norms were violated in practice.  This story is untellable without the nineteenth century.  You know the moments of emergence I have in mind: the transfer of Jacobin ideals to the Haitian Revolution, Burke on Warren Hastings, Marx on the British in India, Henri Dunant deciding at Solferino that warfare had to be regulated, Tolstoy deciding that the Chechens should be permitted to survive as Chechens, and so on. I think it’s also a story that we could find, if we chose to look, entangled in the forms of the 19th century canon.

    What would it say about us if, for fear of falling into Whiggish triumphalism, we turned out to be incapable of acknowledging even that moral history, partial and incomplete and unsatisfying as it is?  One thing it would say is that we prefer to leave atrocity without a history.  I hope we don’t.  There is of course a deep, largely unacknowledged tension between the working assumptions of the humanities and the idea of progress—progress even as a possibility.  Any admission of possible progress threatens the value of canonical texts. That’s arguably why we have been so eager to prostrate ourselves before Walter Benjamin’s Angel of History rather than asking, in a secular and open-minded way, whether what we see before us is really nothing but an ever-increasing accumulation of ruins.

    According to Helen Small’s definition in The Value of the Humanities, the humanities “respect the products of past human endeavors in culture, even when superseded” (Small 2013, 57). “Even when superseded” is a phrase you don’t hear much in literature departments. To admit that cultural products and endeavors might ever be “superseded” is to call into question our presumptive respect or rather reverence for them, which Small is trying here to affirm, and that is a prospect that critics less courageous than she is would prefer not to recognize. And yet there are moments when, like Helen Small, we are all brave enough to admit to some progressive thinking. About our assumptions on race, class, gender, and sexuality, which we assume (correctly) to have improved. Or about “our own work.”

    In her book The Deaths of the Author, Jane Gallop notices that when Gayatri Spivak talks about her work as the author of A Critique of Postcolonial Reason, she uses the word “progress,” as in the sentence, “My book charts a practitioner’s progress” (Gallop 2011, 130). “‘Progress,’” Gallop goes on, “does not seem like a word one would expect Spivak to use. The word ‘progress’ generally denotes the most triumphant relation to temporality. ‘Progress’ here represents the least troubled or troubling, the most positive version of a writer’s change over time” (Gallop 2011, 130). In fact, she concludes, this somewhat conventional phrasing is “quite atypical of the book” (Gallop 2011, 131).[iii]

    A similar inconsistency pops up in Max Weber’s famous lecture Wissenschaft als Beruf (Scholarship as a Vocation).  It is the strong argument of that lecture that we have fallen into what Weber calls polytheism, a somewhat melancholic condition in which progress is impossible because each collectivity follows its own gods and there is no commonly shared membership, no overarching religious or political principle that would adjudicate among them or mark out any course of action as an advance over any other.  And yet Weber also says that scholars-to-be must resign themselves to seeing their work rendered obsolescent by those researchers who come afterwards.  Unlike art, where “there is no progress,” Weber says, scholarship or Wissenschaft (the translation calls it “science”) “is chained to the course of progress” (Weber 1946, 137).  “In science,” as a result, “each of us knows that what he has accomplished will be antiquated in ten, twenty, fifty years.  That is the fate to which science is subjected; it is the very meaning of scientific work … Every scientific ‘fulfilment’ raises new ‘questions’; it asks to be ‘surpassed’ and outdated.  Whoever wishes to serve science has to resign himself to this fact” (Weber 1946, 138). If our work will be surpassed and outdated, that is not just something to which we have to resign ourselves; it’s not just a grim fate to which we are “chained.” It’s also a fact that ought to give us a certain satisfaction. It means we belong to a collectivity which recognizes the value of our work, takes advantage of it, and builds on it. The suggestion here is that you would need to feel you belong to a relatively tight collectivity in order to be able to experience progress. So there is such a thing as progress after all— progress at the level of research, progress within the community of scholars, provided that the community of scholars really is in a strong sense a community.

    I have made a little collection of instances like these in which a scholar will deny progress in general but affirm it within the domain of scholarship.  The point is not to poke fun.  This apparent contradiction can be explained, I think, without any indignity to the scholars concerned.  The reason we can acknowledge progress within scholarship is that as scholars we feel ourselves to belong to a collectivity. As citizens, on the other hand, collectivity of this sort is not something we tend to experience on a regular basis or indeed to seek out. At a recent conference on Stuart Hall, I found myself saying that if Hall defended the now old-fashioned-sounding idea of “theoretical gains,” it was because he thought of himself first and foremost not as a writer and scholar but as a member of a movement. If you are a member of a movement, you have a rough measure by which progress can be calculated. Progress is no longer unthinkable or embarrassing.  Hall’s example is worth contemplating, and not just so as to achieve consistency. I don’t see why those of us who think of ourselves as progressives–and there are a lot of us– are so reluctant to seek real-world equivalents for the scholarly experience of collectivity, thereby permitting us to recognize in the world we write about more of the progress we sometimes recognize in our own writing.

    I’m not trying to encourage Whiggish or Eurocentric complacency. At present, all I really have is questions and areas for further research. I for one would like to know how it was possible for Ishikawa Tatsuzo’s 1938 novel Soldiers Alive to document atrocities committed by his fellow Japanese against Chinese civilians within months of the 1937 Rape of Nanjing.[iv] Were there precedents in the Japanese literature of the 19th century that prepared for this extraordinary feat? Or perhaps earlier? I’m sure there is more than one path leading to national self-accusation, both on the global scale and within the various European traditions. At whatever risk to the hypotheses advanced thus far, I would like to know more about Grimmelshausen’s Simplicissimus, with its extraordinary accounts of the atrocities committed during the Thirty Years’ War, or before that Bartolomé de las Casas, with his extraordinary accounts of atrocities committed during the Spanish conquest of the Americas, or before that Euripides’s Trojan Women. It seems odd to me that no one considered it essential to my education–that I was not taught, and still don’t know, when North Americans became conscious that there might be an ethical problem with the genocide of the Native Americans. I’m convinced that with a little work, we could come up with trans-periodic constellations of both research and pedagogy that would link earlier and later texts, and would do so in a way that is concretely rather than abstractly respectful of the past—that is, would take the past as something more than an empty figure of resistance to a present about which all we need to know is that we are against it.

    The 19th century’s failure to produce representations of atrocity as self-accusation, if that is indeed the case, can be explained by the non-existence in the 19th century of a “public” on an international scale, a public capable of demanding or enforcing scrutiny of ourselves from outside.  Incomplete as it may be, it seems to me there is a story here about the emergence of such a public.  Publics get constructed. The process of construction takes time: alien voices must be gathered and listened to.  It also takes an attitude toward time.  We cannot imagine ourselves as engaged in the process of constructing anything if we see every “chain of events” as (you will recognize the quotation) “one single catastrophe, which keeps piling wreckage on wreckage” (Benjamin 1969, 257).  What we ask our fellow specialists to join is a story with a future.

    References

    Benjamin, Walter. 1969. Illuminations. Edited by Hannah Arendt. Translated by Harry Zohn. New York: Schocken Books.

    Clarke, Paul A. B., and Andrew Linzey. 1996. Dictionary of Ethics, Theology and Society. London: Routledge.

    Dickens, Charles. 1998. Bleak House. Edited by Stephen Gill. Oxford: Oxford University Press.

    Doyle, Laura. 2015. “Inter-Imperiality and Literary Studies in the Longer Durée.” PMLA 130, no. 2: 336-347.

    Felski, Rita. no date. “Comparison, Translation, and Actor-Network Theory,” manuscript available from the author.

    Friedman, Susan Stanford. 2015. Planetary Modernisms: Provocations on Modernity across Time. New York: Columbia University Press.

    Gallop, Jane. 2011. The Deaths of the Author: Reading and Writing in Time. Durham, NC: Duke University Press.

    Johnston, David. 2011. A Brief History of Justice. Chichester, West Sussex: Wiley-Blackwell.

    Lazare, Aaron. 2004. On Apology. New York: Oxford University Press.

    Ondaatje, Michael. 1992. The English Patient. New York: Vintage.

    Sémelin, Jacques. 2007. Purify and Destroy: The Political Uses of Massacre and Genocide. London: Hurst & Company.

    Small, Helen. 2013. The Value of the Humanities. Oxford: Oxford University Press.

    Tolstoy, Leo. 2009. Hadji Murat. Translated by Richard Pevear and Larissa Volokhonsky. New York: Vintage.

    Weber, Max. 1946. From Max Weber: Essays in Sociology. Edited by Hans Heinrich Gerth and C. Wright Mills. New York: Oxford University Press.

    Williams, Raymond. 1981. Politics and Letters: Interviews with New Left Review. London: Verso.

    Notes

    [i] I realized how hard the Williams/NLR interview hit me only after noticing, while preparing this essay, that I had already used it to begin one of my own early publications, an essay on Bleak House written in the 1980s and published in Homi Bhabha’s collection Nation and Narration.

    [ii] Perhaps this is not the proper or precise sense in which novels belong to history, and history belongs in novels.

    [iii] This and the following paragraph appear in my article “Hope,” Political Concepts: A Critical Lexicon, posted November 2015, www.politicalconcepts.org/hope-bruce-robbins.

    [iv] Ishikawa was arrested by the Japanese authorities and convicted, but then released and allowed to return to China on condition that he never write anything like that again.  He didn’t.  Despite my complete ignorance, I have the fantasy of trying to create a global counter-history of such moments of national self-critique.

     

  • Molly Clark Hillard: Literary Subjects

    Molly Clark Hillard: Literary Subjects

    by Molly Clark Hillard

    This essay was peer-reviewed by the editorial board of b2o: an online journal.

    In a recent New York Times article, Ishiguro said “as for Brontë, well, I owe my career, and a lot else besides, to Jane Eyre and Villette” (2015). Speaking at the Seattle Public Library on his 2015 novel, The Buried Giant, Ishiguro elaborated:

    I have loved Jane Eyre and Villette…for some time, but…when I re-read them about three years ago, I suddenly realized how much I had ripped off from those two books…I read [them] with the usual pleasure and admiration, but also with some kind of private embarrassment…and in particular…those two books are absolutely fantastic for that…very coy way of the first person narrator…appearing to confide, very intimately, with the reader and then you suddenly find actually that there is some huge, hugely important, thing that the narrator has just held back…and I realized that that kind of thing had influenced me greatly in the way I write….Moments where you learn that Jane Eyre is crying, not because she the narrator says “I was crying”…but because the person she is talking to…says “what’s that in your eye, Jane…” and I thought “Whoops!” Exactly the same technique. (2015)

    This quote illustrates more than simple literary influence; here Ishiguro avows his interest in the relationships and power dynamics between readers and authors, in both the effect and affect of reading. He is not just aware that Victorian novelists do this too; he indicates that his technique is more than merely analogous to Victorian novelists. He owes, he says, more than just his career to Brontë.  Timothy Bewes has said that “Ishiguro offers no clues about how to read him” (2007: 205), but Ishiguro’s quote, it seems to me, suggests otherwise.  I would at least like to ask whether what happens in certain 21st century novels is something other than, more than, postmodern pastiche.  Perhaps another way to pose the problem is this: what if periodicity becomes unimportant or secondary next to our subjectivity, our constitution of selfhood within a literary history?

    Since the 2005 publications of Ishiguro’s Never Let Me Go and Ian McEwan’s Saturday, we have been called to consider the network produced between 19th and 21st century novels. What do 19th-century novels do for 21st-century readers? What do they do for 21st-century novels? What, in turn, does juxtaposing 19th and 21st-century novels do for our understanding of literature itself? The V21 Collective exhorts us to ask just these questions; the work issuing from the group offers a collectivity of Victorian and 21st century thinking, as much as a human collective of scholars. In their manifesto and elsewhere, V21 asks whether Victorian literature still matters. If it does, if we have not “transcended” these plots, these characters, these ideologies and problems, then whither next? Even more fundamentally, V21 prompts us to consider whether reading itself is still a viable technology. The query is bound to related concerns about the future of the liberal arts university, which is based in great measure on the art and science of reading, and in corollary beliefs that reading is one thing (of many) that makes us human, and that the activity of reading bridges the division between the personal and the communal. In light of declining English majors nationwide, such questions are neither axiomatic nor sentimental.

    So, what kinds of projects might the spirit of V21 make possible? We might, for instance, reflect on Victorian novels that offer scenes of reading and re-reading. Frankenstein, Jane Eyre, Wuthering Heights, Mill on the Floss, Daniel Deronda: these are all works in which acts of reading begin or escalate the action of the novel, in which books—history books, science books, devotional books—are central to the text’s aims. The novels feature characters whose acts of reading may make or mar them, but in one way or another seal their fates. These characters insert themselves into a literary history—not only resonating with it or speaking back to it, but also actually taking the book as literal or real. Frankenstein’s creature reads Paradise Lost as “a true history” (Shelley 2003: 132); Jane Eyre sees Gulliver’s Travels as “a narrative of facts” (C. Brontë 2003: 28); Maggie Tulliver and Mr. Lockwood are in thrall to found manuscripts with handwritten marginalia that direct or arrest their attention. I would argue that these characters are literary subjects; by calling attention to the books in their hands they remind us of the books in ours, and their fabrication, their materiality. Simultaneously, though, they suggest that all our lives are bound to, subject to, subjects of, the books we read.

    If we were to turn, next, to Anna Kornbluh, for whom in comparative reading, “transtemporality or acontextuality is integral, a thought that gains gravity precisely by virtue of its repetition in history,” we might then look with fresh eyes at certain contemporary British novelists who make returns to Victorian literature, “going back and working on” Victorian plots, genres, and characters over the course of the narration (Ishiguro 2015: 115). Novels like Kazuo Ishiguro’s Never Let Me Go, Ian McEwan’s Saturday, and Zadie Smith’s White Teeth, each in their own way, announce that it is from Victorian literature that they have learned to read. Their authors present to us a set of palimpsested characters that demand, like their Victorian counterparts, to be read as literary subjects. We are used, perhaps, to defining literary subjectivity as does Simon During: “a love of literature, more or less disjunct from explicit identification with political programmes,” the “disposition to engage intensely with [literature],” and the “production of fictions and simulacra and the provision of spaces and occasions for individuals to be communicated to” in a kind of “secular mimesis” (1996: 5). And in doing so, we generally associate it with an embarrassing lack of critical distance. But if we were to take literary subjectivity more literally, we might begin to see things differently. We might begin to see things like a character in a Victorian novel.

    Transplanting, recycling, palimpsesting: these are activities to which I suggest we might append the common term “re-read.”  Indeed, the Ishiguro quote that begins this piece highlights re-reading as integral to his writing.  As a re-reader myself, I have begun to wonder exactly what re-reading does for us and to us.  As a Victorianist, I wonder what it did for and to Victorian readerships.  The epistemology of re-reading has gained critical attention in recent years in the fields of affect and empathy studies, educational history, book history, and reader response.[1] Yet no scholar has yet given re-reading quite the metaphoric register that I think it deserves. Re-reading is something that an individual does with a specific text, to be sure, and for many reasons: to memorize, to self-soothe, to amend misprision, to discern anew, to layer interpretations. The very term “re-read” originated in the nineteenth century, and I suspect that the word was coined because re-reading is implicitly connected with the development of the Victorian novel and techniques of reading it. For instance, free indirect discourse necessitates re-reading in order to conceive narrative double valence; and in an age of serial publication, completed novels were collected and bound, in part to be re-read.  Bearing in mind Kornbluh’s call to construct “a grammar of resonance,” I’ve begun to wonder whether “re-reading” could also express the diachronic transference of literary bodies, one into the other, as intertexts.

    One possible outcome of V21’s call for presentist, formalist, and comparative interpretation is for us to recognize in certain novels from Victorian and contemporary periods a community that exists across time as well as space, in the leaves of books as well as in a timestream. This literary community (network, as Latour would have it) is “sociable” in Rita Felski’s terms, but not homogeneous, not universal. Books do not always offer a “safe space” of warm assimilation. In recognizing the Victorian literary and cultural material that lives on within them, contemporary novel characters also must recognize their own unoriginality. They are, in some sense, copies. Paradoxically, though, a literary community is also vitally important to constituting their personhood, and to building any kind of human belonging that matters. These authors suggest, perversely, that we become human through the books we read and re-read, that we carry within. We are, to borrow loosely from Jane Bennett, part book in ways that are pleasurable as well as painful.

    References

    Ablow, Rachel. 2009. Oscar Wilde’s Fictions of Belief. NOVEL: A Forum on Fiction 42, no. 2: 175-182.

    Bennett, Jane. 2010. Vibrant Matter. Durham, N.C.: Duke University Press.

    Best, Stephen and Sharon Marcus. 2009. “Surface Reading: An Introduction.” Representations 108, no. 1: 1-21.

    Bewes, Timothy. 2007. “Editorial Note.” In “Ishiguro’s Unknown Communities.” NOVEL: A Forum on Fiction 40, no. 3: 205-206.

    Brontë, Charlotte.  2006.  Jane Eyre. London: Penguin.

    Brontë, Emily. 2003. Wuthering Heights. London: Penguin.

    During, Simon. 1996. “Literary Subjectivity.” Journal of the Association for the Study of Australian Literature, NV. 1-12.

    Eliot, George. 1995. Mill on the Floss. London: Penguin.

    Felski, Rita. 2011. “Context Stinks!” New Literary History, 42. no. 4: 573-591.

    Ishiguro, Kazuo. 2015.  “Kazuo Ishiguro: By the Book,” New York Times Sunday Book Review, March 5.

    —–. 2015. “Kazuo Ishiguro reads from his much anticipated new novel, ‘The Buried Giant’.” Seattle Public Library, March 30. http://www.spl.org/library-collection/podcasts/2015-podcasts.

    —–. 2005. Never Let Me Go. New York: Vintage.

    Kornbluh, Anna and Benjamin Morgan. “Manifesto of the V21 Collective.” V21: Victorian Studies for the 21st Century. Web. http://v21collective.org/manifesto-of-the-v21-collective-ten-theses/. Accessed 6/2/2016.

    Latour, Bruno. 1993. We Have Never Been Modern. Cambridge, MA: Harvard University Press.

    Moretti, Franco. 2013. Distant Reading. London: Verso.

    O’Gorman, Francis. 2012. “Matthew Arnold and Re-Reading.” The Cambridge Quarterly 41, no. 2: 245-261.

    Price, Leah. 2013. How to Do Things With Books in Victorian Britain. Princeton: Princeton University Press.

    “reread, v.” Oxford English Dictionary Online. Oxford: Oxford University Press. Accessed 9/3/2016.

    Shelley, Mary. 2003. Frankenstein. London: Penguin.

    Notes

    [1] Rachel Ablow has investigated how (for Oscar Wilde) re-reading fiction enables a kind of vicariousness through which one can “try on” the affective register of belief (2009: 179-180).  Christopher Cannon considers the history of re-reading, tracing it from the Greeks to Locke in the sense of memorization or “knowing by heart” for the educational purposes of self-improvement or the medicinal properties of habit. Similarly focused on the historical mode, Rolf Engelsing describes a late eighteenth-century shift from the “intensive” re-reading of a few prized texts to the “extensive” consumption of many ephemeral ones while Leah Price counters that “some genres—particularly the novel—appear to have elicited a newly intensive reading at precisely the historical moment to which Engelsing traces its decline” (Price 2013: 318). Francis O’Gorman investigates what Matthew Arnold had to say about the effects of returning to a single poetic text over long spans of time; he notes that the poet was conflicted as to whether the purpose of re-reading was “to counter forgetfulness,” or to “investigate new perceptions” (2012: 250).

    CONTRIBUTOR’S NOTE

    Molly Clark Hillard is Associate Professor of English at Seattle University.  She is the author of Spellbound: The Fairy Tale and the Victorians (Ohio State UP, 2014).

  • Nathan K. Hensley: Swinburne’s Oxford Notebook: Violence in/as Form

    by Nathan K. Hensley

    Figure 2. Poems and Ballads (1866), editions published by Moxon (L) and Hotten (R).

    The book I’ve chosen to describe for this brief position paper is not a book at all, really, but a book in the process of becoming: call it an essay, as in a trial or experiment. It’s one of Swinburne’s notebooks from his undergraduate years at Oxford. Some of this writing would later be “upcycled” into Poems and Ballads of 1866 (that’s Antoinette and Isobel’s great term, from Ten Books), and in Figure 2 you can see the first, respectable edition of that infamous book, put out by Edward Moxon, alongside the second, pornographic one, issued after the indecency charges, published by John Camden Hotten.

    As is true of all books, the composition, compilation, and publication of Poems and Ballads left in its wake a jumbled collection of cancelled versions, outtakes, and half-formed trials: a train of loose material and juvenilia, spread now across archives in England and the US, some of it miraculously living at my own university, that would never be crystallized into any final public form at all.

    Figures 3, 4, 5. A. C. Swinburne’s Oxford Notebook (1859?). Booth Family Center for Special Collections, Georgetown University.

    The non-book depicted above is one such record of abandoned energy or thought-in-motion, a testament, I mean, to writing as a process and not a thing. Orphaned in an archive in Washington, DC, it would have been incapable of “shaping empire” in models of analysis that borrow from Foucault or Althusser or just the intellectual conventions of our field to assess how a text might (in the words of Ten Books) “influence … imperial discourse and power” (Burton and Hofmeyr 2014: 3).

    In my work I’ve tried to pivot away from terms like discourse, influence, and power, and toward another set of conceptual levers — literary form and sovereign violence — to ask how nineteenth century thinkers used literary presentation to conceive their modernity’s uncanny coincidence with brute force. Part of this means expanding what it might mean for a book to be “about” empire, and could (I hope) help shift us away from the usual suspects of our “literature and empire” syllabi and toward the era’s anatomies of harm, catastrophe, and human waste: so Wuthering Heights, The Mill on the Floss, and Our Mutual Friend provisionally in place of Kipling and Conan Doyle. It also might push us to look for conceptual productivity rather than ideological inscription. The question becomes not how common sense circulates, discourses accrue, or ideologies stick, but how literary texts work to imagine the new.

    Of course, one provocation of Ten Books that Shaped the British Empire is to ask whether books shape empire at all, and to answer that we would need to know what a “book” is and what “shaping” means – and the authors address these questions – but also what constitutes “the British empire.” What do we talk about when we talk about empire? The question is more difficult than it sounds, and I think Swinburne can help. What you see below is the first page of a never-published poem in the Oxford notebook called “The Birch.”

    Figure 6. A. C. Swinburne’s Oxford Notebook (1859?), detail of “The Birch.” Booth Family Center for Special Collections, Georgetown University.

    In it, Swinburne lovingly describes the pleasures of being beaten with a wooden rod. He lingers on the opened flesh, the dripping fluids, the sublime pleasures of all this. Like others of Swinburne’s Sadean flogging poems – dismissed as subliterary by Steven Marcus but expertly read by Yopie Prins – “The Birch” is a poem in praise of being beaten, and in this it well evinces what Ellis Hanson elsewhere in this series of blog posts refers to as “kink.” It is also, as Prins (2013) notes of other Swinburnean flogging poems, a poem about what poetry is and does, and is therefore, I’ll say, a poem not just about desire or violence but about form itself.

    There’s no space for a real reading in this short and telegraphic blog post, but trust me that Swinburne’s speaker mocks the right-minded people who would deny the delights of what the poem with jarring fondness calls “chastise[ment].”

    Figure 7. A. C. Swinburne’s Oxford Notebook (1859?), detail of “The Birch”:“Never again, they cry, shall schoolboy’s blood | Blush on the little twig of the well-work rod.” Booth Family Center for Special Collections, Georgetown University.

    Taking this fondness for vexation yet further is my favorite poem and ballad in the published collection of that name, “Anactoria.” That poem places at the literal, mathematical center of its long catalogue of physical vexations what its speaker refers to as “the mystery of the cruelty of things”: the phrase comes from lines 152-154 of the 304-line poem. And like “Anactoria,” “The Birch” puts harm at the very core of its system: physical violence is the dark star around which orbit all its other affects, pleasure included. Swinburne’s early verse, I’m saying, anatomizes violence and understands somatic injury as its conceptual degree zero.

    But like the other Poems and Ballads composed in this period, “The Birch” unfolds within a fantastically rigorous formal structure. Elsewhere it’s roundels and Old French verse forms; here it’s end-stopped couplets, a grid of masculine rhymes and mostly iambs that is slashed over with flaying strokes from Swinburne’s fountain pen. These marks lacerate the tight form of the poetry they overwrite but do not cancel.

    Figure 8. A. C. Swinburne’s Oxford Notebook (1859?), “The Birch,” two details: cancellations (L), lashes (R). Booth Family Center for Special Collections, Georgetown University.

    In lavish, six-inch strokes, Swinburne inscribes onto the manuscript of “The Birch” a tension between extravagant harm and regulative form: a co-traveling of rage and order that this manuscript presentation does not – need not – resolve. Crucially for my sense of this as an act of materialized political thinking, physical violence is here uncannily bound up with the very regulative ensemble it seems to contravene. The physical capacities of this manuscript enable that suspension.

    Since this is a short post and I discuss these questions at more length in a forthcoming book, I’ll end listwise, with three things that make this object useful to me as a kind of tactical metonymy, the crown for the king, in this conversation about critical engagements with empire now:

    (1) It is singular; no other object on earth is identical with it, and as I’ve only just been able to hint at here, it is not identical with itself either.

    (2) It is – and this should be obvious – material. It is a physical object whose physicality is part of its apparatus for making meaning. As this suggests, this object is also highly conscious of itself as form; its effects depend on what George Saintsbury (with Swinburne as an example) understood as “the laws of meter” (1910: 25): I mean the restraining or (in the Kantian sense) regulative functions of form that Swinburne here luxuriously overcodes.

    Finally (3), it is thought. Swinburne is not writing about India, not describing trade routes or troop movements or the suppressions of rebellions. He is instead writing about violence: and the point is that for an empire that routed its self-understanding through the concept of law, this effort to think obscene violence and regulative form together makes “The Birch” political theory for the age of liberal empire.

    In its pitiless, I will say diagnostic analysis of how legality and harm travel together, and in its marshaling of poetic form to enact this co-traveling, Swinburne’s notebook pushes us away from vestigially empiricist models of influence and toward an understanding of how literary presentation can enact thought. But this object also does something more, which is to help us know empire as what it is: the targeted application of physical violence against certain bodies for the benefit of others — the mystery of the cruelty of things. As belated readers of documents like this, our tasks might be, first, to show how the Victorian thinkers we love mediate this obscene mystery into form, and second, if we can stomach it, to use those encounters as a way to begin reconceiving the present.

    References

    Burton, Antoinette, and Isobel Hofmeyr, eds. 2014. Ten Books that Shaped the British Empire: Creating an Imperial Commons. Durham, NC: Duke University Press.

    Marcus, Steven. 1975. The Other Victorians: A Study of Sexuality and Pornography in Mid-Nineteenth Century England. New York: Basic Books.

    Prins, Yopie. 2013. “Metrical Discipline: Algernon Swinburne on ‘The Flogging Block.’” In Algernon Charles Swinburne: Unofficial Laureate, edited by Catherine Maxwell and Stefano Evangelista. Manchester: Manchester University Press.

    Saintsbury, George. 1910. A History of English Prosody: Volume III, From Blake to Mr. Swinburne. London: Macmillan & Co.

    Swinburne, Algernon Charles. 1859[?] Oxford Notebook. Manuscript notebook, Booth Family Center for Special Collections, Georgetown University.

    —. 1866. Poems and Ballads. London: Edward Moxon.

    —. 1866. Poems and Ballads. London: John Camden Hotten.

    CONTRIBUTOR’S NOTE

    Nathan K. Hensley is assistant professor of English at Georgetown University. He is the author of Forms of Empire: The Poetics of Victorian Sovereignty (2016).

     

  • Sebastian Lecourt: The Light of Asia and the Varieties of Victorian Presentism

    by Sebastian Lecourt

    This essay was peer-reviewed by the editorial board of b2o: an online journal.

    The complaint that the term Victorian, with its ambiguous conflation of nation, period, and personage, represents an undue stumbling block for scholars of Dickens or Eliot is hardly new. Indeed, in many ways it belongs to a wider crisis of categories instigated by postcolonial theory. One of the main lessons that figures such as Said taught us, after all, was that so many of the genre and period tags organizing our field – west and east, modern and ancient, novel and epic – are ideological projections that function to pull diverse global histories into the master narratives of western modernity. Over the past two decades, transnationally minded critics have sought to take this critique on board in a number of ways. Some have deliberately explored the ideological freight of western comparative forms through a re-politicized formalism in the tradition of Lukács (Puchner 2006; Slaughter 2007; Esty 2011). Others have embraced a new particularism that examines how individual texts, as they circulate internationally, can be taken up in surprising ways that belie their Eurocentric roots (see the essays in Burton and Hofmeyr 2014). Still other critics have looked toward the world systems that make such circulation possible (Moretti 2000).

    Within Victorian studies itself, Caroline Levine and Priya Joshi have used elements of the latter two approaches to reimagine the term Victorian, not as a national or period marker, but instead as the name of a transnational media network built by Queen Victoria’s agents – a sprawling infrastructure of printing presses, railroads, telegraphs, and educational institutions that disseminated imperial media around the globe (Joshi 2002; Levine 2013). The refreshing thing about this approach is that it expands the idea of the Victorian temporally as well as geographically, opening up a kind of presentist optic that permits us to read Victorian literature beyond the horizon of its immediate historical context. Once you do the legwork of reconstructing this Victorian media network, you discover that a great deal of our contemporary information world, from the Indian public libraries that interest Joshi to the Gothic and Pre-Raphaelite affects haunting contemporary pop music, is built upon Victorian foundations. What is more, you realize that we encounter a striking amount of pre-Victorian culture as it was remediated by Victorian writers. The Oxford philologist Max Müller’s translation of the Upanishads, for example, may yet be found at major bookstores and free online in countless e-editions. Call this historicism as presentism, a historicism that treats today as a reality constituted by multiple deep pasts.

    I have recently explored this critical landscape on the v21 blog and elsewhere.[i] At last October’s V21 Symposium, however, Jesse Rosenthal drew our attention to one danger in such an approach: the danger of too easily privileging those aspects of Victorian media that we fancy make the most natural precursors for ourselves. In this golden age of television, the serial publication of the Victorian novel can seem a lot more interesting than the adaptation of Victorian novels into lavish theatrical productions, a practice that resonates better with the bestseller-to-blockbuster pipeline of 1990s Hollywood. The risk of presentism, in other words, is that we might return to a kind of Whig history in which the past functions primarily to lead to ourselves.

    What I want to suggest here, though, is that Joshi’s brand of diffusionary history also has resources for resisting this kind of circularity. Specifically, I have found it instructive to read Victorian literature, as she defines it, not just through its contemporary afterlives but also through its uptake by subsequent periods – in particular, to revisit nineteenth-century texts that we no longer consider important but represented seminal works to readers in the 1920s or the 1960s. Recently, for instance, I have written on The Light of Asia, an epic poem about the Buddha published by Edwin Arnold in 1879 (Lecourt 2016a; 2016b: 114). Arnold (no relation to Matthew) taught for years in India before returning to London in the seventies to work as a journalist and poet. Although The Light of Asia was but one of several adaptations of Asian religious works that he published over the following years, it would become an especially celebrated bestseller, going through dozens of editions in multiple languages and inspiring both stage and screen versions. Mahatma Gandhi credited The Light of Asia, along with Arnold’s verse translation of the Bhagavad-Gita, with rekindling his interest in Indian religion, while T. S. Eliot would recall the poem fondly as something that had expanded his mental horizons as a young man (Clausen 1973; Franklin 2005). Meanwhile the poem also had a major impact upon emerging Buddhist nationalisms from Ceylon to Japan to Burma.[ii]

    In both metropolitan and colonial contexts, Arnold’s poem helped promulgate a Protestantized construction of Buddhism as a religion that was about neither rituals nor doctrines but rather moral individualism (McMahan 2008). While we think of this vision of Buddhism as a phenomenon of the twentieth century – the modernist rebellion against Victorian religious morality, or postwar Baby Boomer frustrations with middle-class materialism – it might better be described as Victorian Protestant earnestness turning its righteous gaze against Protestantism itself, an evangelical anti-formalist polemic that has latched onto a non-western religion in order to chide its own culture. Recognizing it as such reveals that the line between presentism and historicism, reading the past through the lens of our priorities and assessing it on its own terms, can be quite hard to draw. Not only do we frequently receive the past as mediated by other periods, but the stances from which we criticize particular historical epochs may rest upon foundations built within them. In the case of Arnold’s poem, where once we might have seen a period and its various afterlives, we now perceive a set of constantly mutating preoccupations that are as vital in current-day America and Japan as they were in Victorian England or Ceylon. This is just standard dialectical history, of course, but it reminds us that presentism can never be completely present, and if done self-consciously can encourage a great sensitivity to the complexities of the past.

    Moreover, reading Victorian texts as they influence us through intervening cultural moments can strengthen historicist practice by highlighting how, in reframing the past around our own concerns, we inevitably take part in a certain history. In her paper at last October’s V21 Symposium, Anna Kornbluh championed the power of anachronistic reading to juxtapose different texts from across literary periods and thus rescue us from the myopia of contextual interpretation. “What Susan Stanford Friedman has called ‘cultural parataxis,’ the radical collage of texts from different geohistorical coordinates,” she ventured, “can produce new textual insights and new theoretical insights” (Kornbluh 2015). Tracing the multiple afterlives of something like The Light of Asia, however, puts anachronistic reading itself into a kind of historical perspective by showing that such willful comparison of literary materials out of period is not some gesture against history but rather the latest episode in the history of what Levine calls affordances: the way in which literary forms are both in control of their own history and not, suggesting a certain set of imaginative possibilities that only others can realize for them (Levine 2014: 6-7).

    Indeed, a global, cross-period historicism might actually embolden an anachronistic hermeneutic by letting us compare the ways that we reframe nineteenth-century literary materials with how other periods have done it – letting us see, that is, how our anachronistic readings take part in the ongoing process by which forms are used and reused, disseminated and appropriated. My own copy of The Light of Asia, an 1889 edition published by Roberts Brothers in Boston, belonged to a professor at a small religious college in northern California where my mother works. His copy, in turn, was inscribed in pencil by a Margaret Burr back in 1890. I cannot say what either reader made of the poem, though I assume that their takes differed from mine, which is driven both by memories of a teenage interest in Buddhism and by a scholarly preoccupation with the history of religious studies. But it fascinates me that we are part of the same history, dependent in some sense upon that imperial encounter in South Asia a century and a half ago.

     

    References

    Blackburn, Anne. 2010. Locations of Buddhism: Colonialism and Modernity in Sri Lanka. Chicago: University of Chicago Press.

    Burton, Antoinette and Isobel Hofmeyr. 2014. Ten Books That Shaped the British Empire: Creating an Imperial Commons. Durham, NC: Duke University Press.

    Clausen, Christopher. 1973. “Sir Edwin Arnold’s ‘The Light of Asia’ and its Reception.” Literature East and West 17: 174-91.

    Esty, Jed. 2011. Unseasonable Youth: Modernism, Colonialism, and the Fiction of Development. New York: Oxford University Press.

    Franklin, J. Jeffrey. 2005. “The Life of the Buddha in Victorian England.” ELH 72 (4): 941-974.

    Gombrich, Richard and Gananath Obeyesekere. 1988. Buddhism Transformed: Religious Change in Sri Lanka. Princeton: Princeton University Press.

    Harris, Elizabeth J. 2008. Theravāda Buddhism and the British Encounter. New York: Oxford University Press.

    Joshi, Priya. 2002. In Another Country: Colonialism, Culture, and the English Novel in India. New York: Columbia University Press.

    Kornbluh, Anna. 2015. “History Repeating.” Paper presented at the V21 Colloquium, Chicago, October 9.

    Lecourt, Sebastian. 2015. “Victorian Studies and the Transnational Present.” V21 blog post. http://v21collective.org/sebastian-lecourt-victorian-studies-and-the-transnational-present/

    —–. 2016a. “Idylls of the Buddh’: Buddhist Modernism and Victorian Poetics in Colonial Ceylon.” PMLA 131 (3): forthcoming.

    —–. 2016b. “That Untravell’d World: The Problem of Thinking Globally in Victorian Studies.” Literature Compass 13 (2): 108-17.

    Levine, Caroline. 2013. “From Nation to Network.” Victorian Studies 55 (4): 647-66.

    —–. 2014. Forms: Whole, Rhythm, Hierarchy, Network. Princeton: Princeton University Press.

    Malalgoda, Kitsiri. 1976. Buddhism in Sinhalese Society, 1750-1900. Berkeley: University of California Press.

    McMahan, David. 2008. The Making of Buddhist Modernism. New York: Oxford University Press.

    Moretti, Franco. 2000. “Conjectures on World Literature.” New Left Review 1 (January-February): 54-68.

    Puchner, Martin. 2006. Poetry of the Revolution: Marx, Manifestos, and the Avant-Gardes. Princeton: Princeton University Press.

    Rosenthal, Jesse. 2015. “Maintenance Work: On Tradition and Development.” Paper presented at the V21 Colloquium, Chicago, October 9.

    Seneviratne, H. L. 1999. The Work of Kings: The New Buddhism in Sri Lanka. Chicago: University of Chicago Press.

    Slaughter, Joseph. 2007. Human Rights, Inc.: The World Novel, Narrative Form, and International Law. New York: Fordham University Press.

    Notes

    [i] See Lecourt 2015 and 2016b.

    [ii] For overviews of the revival, consult Malalgoda 1976; Gombrich and Obeyesekere 1988; Seneviratne 1999; Harris 2008; Blackburn 2010.

     

    CONTRIBUTOR’S NOTE

    Sebastian Lecourt is Assistant Professor of English at the University of Houston.  His essays have appeared in PMLA, Victorian Studies, and Victorian Literature and Culture. 

  • Joseph Lavery: Emergency Repairs Are Required On All Our Dams

    by Joseph Lavery

    This essay was peer-reviewed by the editorial board of b2o: an online journal.

    The book I’m proposing as a resource for thinking about empire, historical attachment, and V21 method, is Freud’s late paper “Analysis Terminable and Interminable.”[1] It’s an odd text in a lot of ways – and possibly was never, actually, a book in the usual sense of the word (oops): a return to clinical and technical questions after two decades spent exhibiting psychoanalysis as the centerpiece in a variety of theoretical tableaux; and we find Freud in Vienna, less than a year before the Anschluss would force him to flee to London, doubting at last that the utopian payoff of therapy, as he had understood it, was achievable within the analytic scene. The argument, which must be dramatically over-simplified given the time frame, is that transference, once thought by Freud to be a singular and punctual moment, proves all too often reversible; that the possibility of “terminating” an analytic procedure must be considered a practical one, rather than as an apotheosis. The metaphor to which Freud turns to describe the ongoing work of an interminable analysis is that of repairing dams built in one’s infancy; the “dams” (226) are the repressions and sublimations that protect the ego from the disorienting affects of trauma, built poorly by an as-yet-immature ego.

    For all its technicality and complexity, it is a rich and richly deconstructive text in which, were there time (and/or a market) for it, one could doubtless demonstrate the mutual constitution of terminable and interminable analyses. For V21, though, what strikes me is the implicit analogy (no doubt one determined in the final analysis by history: Freud’s increasing awareness of his precarity as an Austrian Jew on the verge of imperial annexation) between analytic work and traumatic repetition itself. That is, whereas analysis had initially claimed itself to be a new and distinct kind of repetition that would substitute for, and eventually displace, the symptomatization of trauma, Freud comes to doubt that this kind of repetition was essentially different at all, that the critical “working through” was potentially indistinguishable from the bad repetition against which he had always contrasted it.

    This suggests to me the possibility of a further analogy – one indeed hinted at by Freud in the suggestion that not just individuals, but “races and nations” (241) may make fit subjects for analysis – which I shall formulate in my own terms: our collective critical and ethical obligation to the past (whether figured as “reparation,” qua Sedgwick, “redemption,” qua Bersani, or as what “unexpectedly appears to man” qua Benjamin) entails, in its very insistence on historical difference, a de-historicizing of the present.[2] An interminable historicism would begin by abolishing the intrinsic distinction between past and present, and conceptualize therapy as an absolute temporality entailing future no less than past: something of this kind is articulated in Paul Saint-Amour’s Tense Future.[3] When Empire is the name we give to that temporality, we are not setting ourselves the task of fixing one or another dam: all our dams need emergency repair: a collective project.

    To stand this interminable historicism on its feet, a question, and an answer to a different question, both concerning the contemporary “legacies” of British slavery: or, precisely, not “legacies,” in so far as that term assumes the death of a past of which we are legatees, but immanent effects. The question concerns Benedict Cumberbatch, and requires one to know (1) that he is arguably the most visible and fetishized standard-bearer for contemporary neo-Victorianism, through his exquisitely mannered performance as Sherlock in the contemporary-ish BBC adaptations; (2) that his ancestor Abram P. Cumberbatch was, following the 1833 Abolition Act, compensated for the loss of 232 formerly enslaved Barbadians, and that a name which, to Sherlock fans, connotes a gleeful English quaintness has long served Barbadians as a synecdoche of plantation rule.[4] The question is: when I read Cumberbatch musing about moving to America because “no one minds so much [about class] over there,” and am reminded of C19 narratives of roguish men seeking their fortunes in the tropics for the same reason, is my paranoia located in the past or in the present?[5] And the answer, from Sir Hilary Beckles, publishing in the Jamaica Observer an open letter to the then Prime Minister David Cameron, himself a descendant of slave owners, on the occasion of his state visit to Jamaica:

    “Dear Honourable Prime Minister,

    I join with the resolute and resilient people of Jamaica and their Government in extending to you a warm and glorious welcome to our homeland. We recognise you, Prime Minister, given your family’s long and significant relationship to our country, as an internal stakeholder with historically assigned credentials.

    To us, therefore, you are more than a prime minister. You are a grandson of the Jamaican soil who has been privileged and enriched by your forebears’ sins of the enslavement of our ancestors.”[6]

    Notes

    [1] Freud, Sigmund. 1967. ‘Analysis Terminable and Interminable,’ in The Standard Edition of the Complete Psychological Works of Sigmund Freud, Volume XXIII (1937 – 1939): Moses and Monotheism, An Outline of Psychoanalysis and Other Works. Translated from the German under the General Editorship of James Strachey, in Collab. With Anna Freud, Assisted by Alix Strachey and Alan Tyson, 209 – 253. London: The Hogarth Press and the Institute of Psychoanalysis.

    [2] These three modes of historicist practice nonetheless share, to some degree, a powerfully invested ambivalence concerning the ethics of historical work. See Sedgwick, Eve. ‘Paranoid Reading and Reparative Reading, or, You’re So Paranoid, You Probably Think This Essay Is About You,’ in Touching Feeling: Affect, Pedagogy, Performativity. Durham: Duke University Press, 2003. Bersani, Leo. The Culture of Redemption. Cambridge: Harvard University Press, 1990. Benjamin, Walter. ‘Theses on the Philosophy of History,’ in Illuminations, ed. Hannah Arendt. Translated by Harry Zohn. New York: Schocken Books, 1968. p. 255

    [3] Saint-Amour, Paul K. Tense Future: Modernism, Total War, Encyclopedic Form. Oxford: OUP, 2015.

    [4] The Cumberbatch family history was widely reported around the release of 12 Years a Slave, dir. Steve McQueen (Fox Searchlight, 2013), in which Benedict Cumberbatch played the planter William Prince Ford. See, for example, Adams, Guy. ‘How Benedict Cumberbatch’s family made a fortune from slavery (And why his roles in films like 12 Years A Slave are a bid to atone for their sins).’ Daily Mail, 31 January 2014. http://www.dailymail.co.uk/news/article-2549773/How-Benedict-Cumberbatchs-family-fortune-slavery-And-roles-films-like-12-Years-A-Slave-bid-atone-sins.html

    [5] Benedict Cumberbatch, quoted in Raphael, Amy. “‘I’m definitely middle class… OK maybe I’m upper middle class’: From Sherlock to Star Trek, Benedict Cumberbatch on his meteoric rise to stardom.’ The Mail on Sunday, 27 April 2013. http://www.dailymail.co.uk/home/event/article-2314671/Star-Trek-returns-Benedict-Cumberbatch-boldly-goes-Sherlock-Trekkie.html

    [6] Beckles, Sir Hilary, ‘Letter to David Cameron,’; see ‘Britain has duty to clean up monumental mess of Empire, Sir Hilary tells Cameron,’ Jamaica Observer. Monday, September 28, 2015. http://www.jamaicaobserver.com/news/Britain-has-duty-to-clean-up-monumental-mess-of-Empire–Sir-Hilary-tells-Cameron_19230957

  • Nasser Mufti: Bio-Politics and Greater Britain

    by Nasser Mufti

    This essay was peer-reviewed by the editorial board of b2o: an online journal.

    In his lectures at the Collège de France in 1976, Michel Foucault proclaims that the emergence of bio-politics was “one of the greatest transformations political right underwent in the nineteenth century” because it overlaid the “sovereign’s old right—to take life or let live” with the power to “make live and let die” (Foucault 2003: 241). Bio-politics, along with its critical vocabulary of “state racism,” “regularized life” (81, 245), “fostering life” and “regulations of the population” (1990: 138, 139), have become essential to understanding what Étienne Balibar, with Foucault in mind, calls the “great ‘transition’ between the world of subjection and the world of right and discipline” (Balibar 1991: 55).

    Overlooked by most students of Foucault’s critique of sovereignty is Morley Roberts’s treatise, Bio-politics: An Essay in Physiology and Politics of the Social and Somatic Organism. Written in 1912 and published in 1938, the book argues for the state’s re-invention as a biological entity. As Roberts explains in the preface, “It is not to be expected that the politician should apply himself to the study of the endocrine organs, or ductless glands of the body, but a little knowledge of them might help him understand more perfectly the nature of his own difficulties in relation with the organized bodies of any kind—from empires and nations down to the turbulent committees among his own constituents […] He might even hear of the Struggle of the Parts and might possibly learn that I had reasonably described the social life of the body as a state of hostile symbiosis” (Roberts 1938: xiii). Roberts’s idea of the state, as it turns out, is imperial through and through. And its vitalism extends from the domestic squabbles of “turbulent committees” to the imperial peripheries. This global polity is seemingly under permanent duress. For it is hard not to read what he calls the “hostile symbiosis” or the “Struggle of the Parts” as the rise of anti-colonial movements in the peripheries (Ireland, India, South Africa, for example), which in 1912 were increasingly crystallizing as nationalist projects that contested British imperial rule.

    Roberts’s text is the outcome of a peculiar intellectual trajectory. He worked for the India Office in the late 1870s, and travelled through much of the British empire in the 1880s, spending much of his time in Australia, Canada, South Africa and the United States. Between 1886 and 1906, Roberts published over two dozen novels, travelogues, numerous short stories, and a biography of George Gissing, who was a friend of his from college. Not unlike the imperial adventure tales of writers like H. Rider Haggard, G. A. Henty, William Henry Hudson and Robert Louis Stevenson, the colonies loom especially large in Roberts’s tales. Roberts stopped writing fiction in the 1910s, focusing instead on publishing texts like Bio-Politics, as well as Warfare in the Human Body (1922) and The Behavior of Nations (1941).

    Roberts’s language in Bio-Politics oscillates between the registers of biology and politics so freely that, according to him, nothing is lost in translation. Biological forms map perfectly onto geopolitical forms. The structures that organize an organism’s life, it turns out, are the same as those of politics. The effect of this formal conjuncture is borderline absurd prose. To take one example, in a chapter on “Politics and Colonial Protozoa,” Roberts draws an analogy between Proterospongia Haeckelii and imperial geopolitics. He describes Proterospongia Haeckelii as “a primitive sponge” where “there can be seen on the gelatinous surface of the colony cup-shaped flagellate cells, while, in the interior, there are only non-flagellate amoebae.” On this gelatinous continent are two types of organisms, one at the extremities of the “gelatinous surface,” and the other inside of it. “But these flagellates are not fixed,” Roberts explains, “they are capable of migrating to the surface, where they soon become cup-shaped and flagellate and take up the functions of those they displaced. These again migrate from the surface and return for a time to the primitive amoeba form” (108). Roberts uses the example to argue that the British empire should not be seen as a static territory, but as a dynamic relation. The spongy gelatinous “continent” is not a fixed geographic category for Roberts, but is mobile and modular, capable of inverting its coordinates so that interiors become exteriors, intra-national becomes extra-national, metropole becomes colony, and vice versa.

    What kind of historical context makes it possible for someone like Roberts to conflate the metropolitan center with the periphery, and moreover, to conflate these two radically different schemas with no limits? One answer, it seems to me, is “Greater Britain.” During the British empire’s most ambitious years towards the end of the nineteenth century, Britain was often said to have formed an imperial nation-state with its colonies. J. R. Seeley, for example, celebrated the impact of technology on the British empire in a decidedly vitalist key: “Science has given to the political organism a new circulation, which is steam, and a new nervous system, which is electricity” (Seeley 1914: 86-7). In “Saxondom,” Seeley contemplates, “Canada and Australia are to us as Kent and Cornwall,” suggesting a transformation of geographic distance into domestic proximity in a way not unlike Roberts’s Haeckelii (63). That Roberts (and to a lesser degree Seeley) makes a space beyond the bounds of the empire unthinkable in the very years Britain’s colonies were first declaring their independence from Britain tells us something about why the geopolitical terrain of Bio-Politics is as mutable and elastic as it is. While Roberts’s turn to biology might seem to “de-center” the British empire (in ways not dissimilar to how scholars of empire have turned to the language of networks, webs and systems), the politics behind his biological tropes is rooted in a familiar imperial paradigm.

    But one thing is certain: Roberts makes it impossible to think of bio-politics outside of an imperial milieu. Scholars like Ann Laura Stoler and Achille Mbembe have in their own ways extended, adapted and decentered Foucault’s genealogy of bio-politics from Europe to the peripheries (see Stoler 1995; Mbembe 2003). But Roberts offers another way to approach the question of bio-politics—namely, through the triumphant, jingoistic discourse of Greater Britain and its other, anti-colonial nationalism.

    References

    Balibar, Étienne. 1991. “Citizen Subject.” In Who Comes After the Subject?, edited by Eduardo Cadava, Peter Connor, and Jean-Luc Nancy. London: Routledge.

    Foucault, Michel. 2003. “Society Must Be Defended”: Lectures at the Collège de France, 1975-6. Translated by David Macey. New York: Picador.

    Foucault, Michel. 1990. An Introduction. Vol. 1 of The History of Sexuality. Translated by Robert Hurley. New York: Vintage Books.

    Mbembe, Achille. 2003. “Necropolitics.” Public Culture 15, no. 1: 11-40.

    Roberts, Morley. 1938. Bio-Politics: An Essay in the Physiology, Pathology and Politics of the Social and Somatic Organism. London: Dent.

    Seeley, J. R. 1914. The Expansion of England: Two Courses of Lectures. London: Macmillan and Co.

    Stoler, Ann Laura. 1995. Race and the Education of Desire: Foucault’s History of Sexuality and the Colonial Order of Things. Durham: Duke University Press.

  • Mary L. Mullen: Empire and Unfielding: Charles Kickham’s Knocknagow: Or, the Homes of Tipperary

    by Mary L. Mullen

    This essay was peer-reviewed by the editorial board of b2o: an online journal.

    The V21 manifesto (V21 Collective 2015) asserts, “We must break accepted frames.” Focusing on Victorian empire raises the question: to what end? Breaking accepted frames can spark innovation and expand the geographic and temporal scale of the field, but these innovations and expansions might reproduce the very Victorian imperial formations that we study. As Roderick Ferguson’s recent history of the interdisciplines warns, acts of unfielding are often archived within the university in ways that obscure the “ruptural possibilities of modes of difference” (2012, 18).[1] When breaking frames or considering acts of unfielding, I suggest that we should work towards anti-colonial ends.

    Turning to nineteenth-century Ireland—a place that has a complicated relationship to both empire and the field of Victorian studies—one of the “accepted frames” to consider breaking is our emphasis on the book itself.[2]  After all, in 1841 only 27% of Ireland’s population could read and write (Graff 1987, 337). For this reason, although scholars like Kate Trumpener (1997, 16) and James Buzard (2005, 41) persuasively demonstrate the ways in which Irish literature shaped and was shaped by English fiction of the period, it is also important to remember that the book was not the primary form of Irish cultural authority or public discourse.

    Charles Kickham’s immensely popular Knocknagow: Or, the Homes of Tipperary (1879) highlights the difficulty of anti-colonial unfielding even when focusing on questions of colonialism and empire. Kickham started writing novels while imprisoned for his role in the Fenian conspiracy in 1865, suggesting that his writing was intimately connected to his anti-colonial politics. Knocknagow, his most famous work, was published in serial form in periodicals in both Ireland and New York and was on Ireland’s bestseller list as recently as 1978. But today, few Irish people or Irish studies scholars read the book.[3] Knocknagow’s longstanding popularity resulted from the way it incorporates alternative forms of cultural authority—music, storytelling, athletic competitions, embodied memory. And yet, the way in which Knocknagow circulates as a book shows how imperial authority reproduces itself in colonial and postcolonial states.

    Making the case that Knocknagow is a “great Irish Novel”—even better than Joyce!—the Irish sportswriter Con Houlihan suggests that the novel doesn’t hold together as a novel (2007, 20). Houlihan celebrates this “great basket” of a novel not because of its narrative unity or coherence but because it captures the contradictory experiences of everyday Irish life. Sometimes, the narrative actually gets in the way of the energy of the story: Kickham interrupts a lively description of a game of hurling to solve a mystery surrounding one of the characters, and by the time he returns to the hurling match, the reader has almost forgotten that it is taking place. Tellingly, readers remember the description of the hurling match rather than the narrative it interrupts: the Gaelic Athletic Association later included this description in their manual, and sportswriters like Houlihan continue to draw upon Kickham when recounting particularly exciting matches (Valente 2011, 65).

    But the circulation of this book often blunts the politics of the novel, as these alternative forms of cultural authority are used to reinforce the aesthetic standards of the British colonial state instead of questioning them. Working to unify the novel’s discordant forms, readers take up the novel’s sentimentalism as a form of nostalgia and conveniently forget its criticisms of British law and state formations. The novel’s nostalgia is actually quite complicated: it reinforces a sentimentalized pastoral ideal but also recalls graphic state violence that the community remembers but the British state has already forgotten. Staging a conflict between official state history and native Irish remembering, Knocknagow tends to be taken up in ways that allow native Irish remembering to achieve the aesthetic authority of official history. It was celebrated for teaching Irish youth proper morality (“Charles Kickham’s Career” 1928, 5), for providing a thoroughly Irish counterpart to Dickens’s “English Christmas” (“Leader Page Parade” 1954), and as the appropriate subject matter for English classes in Ireland well into the twentieth century (Fitzpatrick 1973, 12). As a result, the music, hurling, Christmas celebrations, and storytelling that Kickham lovingly portrays become timelessly embodied in the newly independent Irish state while Kickham’s anti-colonial politics that question official state history are forgotten.[4]

    By paying attention to these contradictions—between the novel’s politics and the politics of its circulation, the novel’s competing forms and the easily portable forms that travel beyond the novel—we can recognize how empire reproduces itself, but also, the forms of difference at odds with this reproduction. Knocknagow shows that returning to what has been forgotten—in this case, the novel’s anti-colonial politics and its discordant forms—can break accepted frames by reactivating the ruptural possibilities of difference.

    References

    Buzard, James. 2005. Disorienting Fiction: The Autoethnographic Work of Nineteenth-Century British Novels. Princeton: Princeton University Press.

    “Charles Kickham’s Career.” 1928. Irish Independent, August 30.

    Ferguson, Roderick. 2012. The Reorder of Things: The University and its Pedagogies of Minority Difference. Minneapolis: University of Minnesota Press.

    Fitzpatrick, Sean. 1973. “English Books for Irish Children.” Irish Independent, September 12.

    Graff, Harvey J. 1987. Legacies of Literacies in Western Culture and Society. Bloomington: Indiana University Press.

    Houlihan, Con. 2007. “Kickham’s work up there with the great Irish novels.” Sunday Independent, October 28.

    Joshi, Priya. 2011. “Globalizing Victorian Studies.” The Yearbook of English Studies 41, no. 2: 20-40.

    Kiberd, Declan. 1995. Inventing Ireland: The Literature of the Modern Nation. Cambridge: Harvard University Press.

    “Leader Page Parade.” 1954. Irish Independent, December 22.

    Martin, Amy. 2012. Alter-nations: Nationalisms, Terror, and the State in Nineteenth-Century Britain and Ireland. Columbus: The Ohio State University Press.

    Murphy, James. H. 2011. Irish Novelists and the Victorian Age. Oxford: Oxford University Press.

    Nolan, Emer. 2007. Catholic Emancipations: Irish Fiction from Thomas Moore to James Joyce. Syracuse: Syracuse University Press.

    Trumpener, Katie. 1997. Bardic Nationalism: The Romantic Novel and The British Empire. Princeton: Princeton University Press.

    V21 Collective. 2015. “Manifesto of the V21 Collective: Ten Theses.” http://v21collective.org/manifesto-of-the-v21-collective-ten-theses/ (accessed 2/10/2016).

    Valente, Joseph. 2011. The Myth of Manliness in Irish National Culture, 1880-1922. Urbana, Chicago and Springfield: University of Illinois Press.

    Notes

    [1] Priya Joshi (2011, 21) makes a similar point, arguing that transnational work in Victorian studies often “preserved the sense that the Victorian metropolis was hegemonic.”

    [2] As Declan Kiberd (1995, 5) argues, Irish people were “both exponents and victims of British imperialism.”

    [3] Emer Nolan’s (2007, 103-24) and James H. Murphy’s (2011, 119-47) work are notable exceptions.

    [4] Examining how Fenianism is remembered (and forgotten) in the twentieth century, Amy Martin (2012, 161) contends that Fenian politics “represent a loss that haunts Irish politics.” I suggest that Kickham’s forgotten anti-colonial politics is part of this larger structure of historical amnesia and remembrance.

    CONTRIBUTOR’S NOTE

    Mary L. Mullen is Assistant Professor of English and faculty member in the Irish Studies program at Villanova University.  Her essays have appeared in Victorian Poetry, Victoriographies, Eighteenth-Century Fiction, and elsewhere.