b2o: boundary 2 online

Tag: digital culture

  • What Drives Automation?

    What Drives Automation?

a review of Nicholas Carr, The Glass Cage: Automation and Us (W.W. Norton, 2014)
    by Mike Bulajewski
    ~

    Debates about digital technology are often presented in terms of stark polar opposites: on one side, cyber-utopians who champion the new and the cutting edge, and on the other, cyber-skeptics who hold on to obsolete technology. The framing is one-dimensional in the general sense that it is superficial, but also in a more precise and mathematical sense that it implicitly treats the development of technology as linear. Relative to the present, there are only two possible positions and two possible directions to move; one can be either for or against, ahead or behind.[1]

    Although often invoked as a prelude to transcending the division and offering a balanced assessment, in describing the dispute in these pro or con terms one has already betrayed one’s orientation, tilting the field against the critical voice by assigning it an untenable position. Criticizing a new technology is misconstrued as a simple defense of the old technology or of no technology, which turns legitimate criticism into mere conservative fustiness, a refusal to adapt and a failure to accept change.

    Few critics of technology match these descriptions, and those who do, like the anarcho-primitivists who claim to be horrified by contemporary technology, nonetheless accede to the basic framework set by technological apologists. The two sides disagree only on the preferred direction of travel, making this brand of criticism more pro-technology than it first appears. One should not forget that the high-tech futurism of Silicon Valley is supplemented by the balancing counterweight of countercultural primitivism, with Burning Man expeditions, technology-free Waldorf schools for children of tech workers, spouses who embrace premodern New Age beliefs, romantic agrarianism, and restorative digital detox retreats featuring yoga and meditation. The diametric opposition between pro- and anti-technology is internal to the technology industry, perhaps a symptom of the repression of genuine debate about the merits of its products.

    ***

    Nicholas Carr’s most recent book, The Glass Cage: Automation and Us, is a critique of the use of automation and a warning of its human consequences, but to conclude, as some reviewers have, that he is against automation or against technology as such is to fall prey to this one-dimensional fallacy.[2]

The book considers the use of automation in areas like medicine, architecture, finance, manufacturing and law, but it begins with an example that’s closer to home for most of us: driving a car. Transportation and wayfinding are minor themes throughout the book, and with Google and large automobile manufacturers promising to put self-driving cars on the street within a decade, the impact of automation in this area may soon be felt in our daily lives like never before. Early in the book, we are introduced to problems that human factors engineers working with airline autopilot systems have discovered, problems that may be forewarnings of a future of unchecked automation in transportation.

    Carr discusses automation bias—the tendency for operators to assume the system is correct and external signals that contradict it are wrong—and the closely related problem of automation complacency, which occurs when operators assume the system is infallible and so abandon their supervisory role. These problems have been linked to major air disasters and are behind less-catastrophic events like oblivious drivers blindly following their navigation systems into nearby lakes or down flights of stairs.

The chapter dedicated to deskilling is certain to raise the ire of skeptical readers because it begins with an account of the negative impact of GPS technology on Inuit hunters who live in the remote northern reaches of Canada. As GPS devices proliferated, hunters lost what a tribal elder describes as “the wisdom and knowledge of the Inuit”: premodern wayfinding methods that rely on natural phenomena like wind, stars, tides and snowdrifts to navigate. Inuit wayfinding skills are truly impressive. The anthropologist Claudio Aporta reports traveling with a hunter across twenty square kilometers of flat, featureless land as he located seven fox traps that he had never seen before, set by his uncle twenty-five years prior. These talents have been eroded as Inuit hunters have adopted GPS devices that seem to do the job equally well, but have the unexpected side effect of increasing injuries and deaths as hunters succumb to equipment malfunctions and the twin perils of automation complacency and bias.

    Laboring under the misconceptions of the one-dimensional fallacy, it would be natural to take this as a smoking gun of Carr’s alleged anti-technology perspective and privileging of the premodern, but the closing sentences of the chapter point us away from that conclusion:

    We ignore the ways that software programs and automated systems might be reconfigured so as not to weaken our grasp on the world but to strengthen it. For, as human factors researchers and other experts on automation have found, there are ways to break the glass cage without losing the many benefits computers grant us. (151)

These words segue into the following chapter, where Carr identifies the design philosophy behind automation technologies that inadvertently produce the problems he described earlier: technology-centered automation. This approach to design is distrustful of humans, perhaps even misanthropic. It views us as weak, inefficient, unreliable and error-prone, and seeks to minimize our involvement in the work to be done. It institutes a division of labor between human and machine that gives the bulk of the work over to the machine, only seeking human input in anomalous situations. This philosophy is behind modern autopilot systems that hand off control to human pilots for only a few minutes during a flight.

The fundamental argument of the book is that this design philosophy can lead to undesirable consequences. Carr seeks an alternative he calls human-centered automation, an approach that ensures the human operator remains engaged and alert. Autopilot systems designed with this philosophy might return manual control to the pilots at irregular intervals to ensure they remain vigilant and practice their flying skills. They could provide tactile feedback of their operations so that pilots are involved in a visceral way rather than passively monitoring screens. Decision support systems like those used in healthcare could take a secondary role of reviewing and critiquing a decision made by a doctor rather than the other way around.

    The Glass Cage calls for a fundamental shift in how we understand error. Under the current regime, an error is an inefficiency or an inconvenience, to be avoided at all costs. As defined by Carr, a human-centered approach to design treats error differently, viewing it as an opportunity for learning. He illustrates this with a personal experience of repeatedly failing a difficult mission in the video game Red Dead Redemption, and points to the satisfaction of finally winning a difficult game as an example of what is lost when technology is designed to be too easy. He offers video games as a model for the kinds of technologies he would like to see: tools that engage us in difficult challenges, that encourage us to develop expertise and to experience flow states.

But Carr has an idiosyncratic definition of human-centered design, which becomes apparent when he counterposes his position against the prominent design consultant Peter Merholz. Echoing premises almost universally adopted by designers, Merholz calls for simple, frictionless interfaces and devices that don’t require a great deal of skill, memorization or effort to operate. Carr objects that this eliminates learning, skill building and mental engagement—perhaps a valid criticism, but it’s strange to suggest that this reflects a misanthropic technology-centered approach.

    A frequently invoked maxim of human-centered design is that technology should adapt to people, rather than people adapting to technology. In practice, the primary consideration is helping the user achieve his or her goal as efficiently and effectively as possible, removing unnecessary obstacles and delays that stand in the way. Carr argues for the value of challenges, difficulties and demands placed on users to learn and hone skills, all of which fall under the prohibited category of people adapting to technology.

In his example of playing Red Dead Redemption, Carr prizes the repeated failure and frustration before finally succeeding at the game. Through the lens of human-centered design, that kind of experience is seen as a very serious problem that should be eliminated quickly, which is probably why this kind of design is rarely employed at game studios. In fact, it doesn’t really make sense to think of a game player as having a goal, at least not from the traditional human-centered standpoint. The driver of a car has a goal: to get from point A to point B; a Facebook user wants to share pictures with friends; the user of a word processor wants to write a document; and so on. As designers, we want to make these tasks easy, efficient and frictionless. The most obvious way of framing game play is to say that the player’s goal is to complete the game. We would then proceed to remove all obstacles, frustrations, challenges and opportunities for error that stand in the way so that they may accomplish this goal more efficiently, and then there would be nothing left for them to do. We would have ruined the game.

    This is not necessarily the result of a misanthropic preference for technology over humanity, though it may be. It is also the likely outcome of a perfectly sincere and humanistic belief that we shouldn’t inconvenience the user with difficulties that stand in the way of their goal. As human factors researcher David Woods puts it, “The road to technology-centered systems is paved with human-centered intentions,”[3] a phrasing which suggests that these two philosophies aren’t quite as distinct as Carr would have us believe.

Carr’s vision of human-centered design differs markedly from contemporary design practice, which stresses convenience, simplicity, efficiency for the user and ease of use. In calling for less simplicity and convenience, he is in effect critical of really existing human-centeredness, and that troubles any reading of The Glass Cage that views it as a book about restoring our humanity in a world driven mad by machines.

    It might be better described as a book about restoring one conception of humanity in a world driven mad by another. It is possible to argue that the difference between the two appears in psychoanalytic theory as the difference between drive and desire. The user engages with a technology in order to achieve a goal because they perceive themselves as lacking something. Through the use of this tool, they believe they can regain it and fill in this lack. It follows that designers ought to help the user achieve their goal—to reach their desire—as quickly and efficiently as possible because this will satisfy them and make them happy.

But the insight of psychoanalysis is that lack is ontological and irreducible: it cannot be filled in any permanent way, because any concrete lack we experience is in fact metonymic for a constitutive lack of being. As a result, as desiring subjects we are caught in an endless loop of seeking out that object of desire, feeling disappointed when we find it because it didn’t fulfill our fantasies, and then finding a new object to chase. The alternative is to shift from desire to drive, turning this failure into a triumph. Slavoj Žižek describes drive as follows: “the very failure to reach its goal, the repetition of this failure, the endless circulation around the object, generates a satisfaction of its own.”[4]

This satisfaction is perhaps what Carr aims at when he celebrates the frustrations and challenges of video games and of work in general. That video games can’t be made more efficient without ruining them indicates that what players really want is for their goal to be thwarted, evoking the psychoanalytic maxim that summarizes the difference between desire and drive: from the missing/lost object, to loss itself as an object. This point is by no means tangential. Early on, Carr introduces the concept of miswanting, defined as the tendency to desire what we don’t really want and won’t make us happy—in this case, leisure and ease over work and challenge. Psychoanalysis holds that all human wanting (within the register of desire) is miswanting. Through fantasy, we imagine an illusory fullness or completeness of which actual experience always falls short.[5]

Carr’s revaluation of challenge, effort and, ultimately, dissatisfaction cannot represent a correction of the error of miswanting, a rediscovery of the true source of pleasure and happiness in work. Instead, it radicalizes the error: we should learn to derive a kind of satisfaction from our failure to enjoy. Or, in the final chapter, as Carr says of the farmer in Robert Frost’s poem Mowing, who is hard at work and yet far from the demands of productivity: “He’s not seeking some greater truth beyond the work. The work is the truth.”

    ***

Nicholas Carr has a track record of provoking designers to rethink their assumptions. With The Shallows, he, along with other authors making related arguments, influenced software developers to create a new class of tools that cut off the internet, eliminate notifications or block social media web sites to help us concentrate. Starting with OS X Lion in 2011, Apple began offering a full screen mode that hides distracting interface elements and the background windows of inactive applications.

    What transformative effects could The Glass Cage have on the way software is designed? The book certainly offers compelling reasons to question whether ease of use should always be paramount. Advocates for simplicity are rarely challenged, but they may now find themselves facing unexpected objections. Software could become more challenging and difficult to use—not in the sense of a recalcitrant WiFi router that emits incomprehensible error codes, but more like a game. Designers might draw inspiration from video games, perhaps looking to classics like the first level of Super Mario Brothers, a masterpiece of level design that teaches the fundamental rules of the game without ever requiring the player to read the manual or step through a tutorial.

Everywhere that automation now reigns, new possibilities announce themselves. A spell checker might stop to teach spelling rules, or make a game out of letting the user take a shot at correcting mistakes it has detected. What if there were a GPS navigation device that enhanced our sense of spatial awareness rather than eroding it, one that kept our attention on the road rather than letting us tune out? Could we build an app that helps drivers maintain their skills by challenging them to adopt safer and more fuel-efficient driving techniques?
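What might such a design look like in practice? The sketch below is purely illustrative, not a description of any existing product or of anything Carr proposes in detail: a toy spell checker, written in Python, that flags a probable misspelling but asks the user to attempt the correction before revealing its own suggestion. The word list and the prompts are invented for the example.

```python
# A toy spell checker that treats an error as a learning opportunity:
# instead of silently auto-correcting, it asks the user to try first.
# The word list and matching logic are deliberately minimal.
import difflib

WORDS = {"i", "and", "it", "was", "your", "letter", "welcome",
         "receive", "definitely", "separate", "necessary"}

def closest(word):
    """Return the nearest known word, or None if nothing is close."""
    matches = difflib.get_close_matches(word, WORDS, n=1)
    return matches[0] if matches else None

def check(text):
    for raw in text.split():
        word = raw.strip(".,;:!?").lower()
        if not word or word in WORDS:
            continue
        guess = closest(word)
        if guess is None:
            continue  # nothing close enough; stay quiet rather than guess
        # Engage the user: ask for their correction before revealing one.
        attempt = input(f"'{raw}' looks misspelled. Your correction? ").strip().lower()
        if attempt == guess:
            print("Correct. Nice catch.")
        else:
            print(f"One possible spelling: {guess}")

if __name__ == "__main__":
    check("I recieve your letter and it was definately welcome")
```

Trivial as it is, the sketch captures the design reversal Carr is after: the error becomes an occasion for practice rather than something the software silently erases.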

    Carr points out that the preference for easy-to-use technologies that reduce users’ engagement is partly a consequence of economic interests and cost reduction policies that profit from the deskilling and reduction of the workforce, and these aren’t dislodged simply by pressing for new design philosophies. But to his credit, Carr has written two best-selling books aimed at the general interest reader on the fairly obscure topic of human-computer interaction. User experience designers working in the technology industry often face an uphill battle in trying to build human-centered products (however that is defined). When these matters attract public attention and debate, it makes their job a little easier.

    _____

    Mike Bulajewski (@mrteacup) is a user experience designer with a Master’s degree from University of Washington’s Human Centered Design and Engineering program. He writes about technology, psychoanalysis, philosophy, design, ideology & Slavoj Žižek at MrTeacup.org. He has previously written about the Spike Jonze film Her for The b2 Review Digital Studies section.


    _____

    [1] Differences between individual technologies are ignored and replaced by the monolithic master category of Technology. Jonah Lehrer’s review of Nicholas Carr’s 2010 book The Shallows in the New York Times exemplifies such thinking. Lehrer finds contradictory evidence against Carr’s argument that the internet is weakening our mental faculties in scientific studies that attribute cognitive improvements to playing video games, a non sequitur which gains meaning only by subsuming these two very different technologies under a single general category of Technology. Evgeny Morozov is one of the sharpest critics of this tendency. Here one is reminded of his retort in his article “Ghosts in the Machine” (2013): “That dentistry has been beneficial to civilization tells us very little about the wonders of data-mining.”

[2] There is a range of possible causes for this constrictive linear geometry: a tendency to see a progressive narrative of history; a consumerist notion of agency which only allows shoppers to either upgrade or stick with what they have; or the oft-cited binary logic of digital technology. One may speculate about the influence of the popular technology marketing book by Geoffrey A. Moore, Crossing the Chasm (2014), whose titular chasm is the gap between the elite group of innovators and early adopters—the avant-garde—and the recalcitrant masses bringing up the rear who must be persuaded to sign on to their vision.

[3] David D. Woods and David Tinapple, “W3: Watching Human Factors Watch People at Work,” Proceedings of the 43rd Annual Meeting of the Human Factors and Ergonomics Society (1999).

    [4] Slavoj Žižek, The Parallax View (2006), 63.

    [5] The cultural and political implications of this shift are explored at length in Todd McGowan’s two books The End of Dissatisfaction: Jacques Lacan and the Emerging Society of Enjoyment (2003) and Enjoying What We Don’t Have: The Political Project of Psychoanalysis (2013).

  • The Reticular Fallacy

    The Reticular Fallacy

    By Alexander R. Galloway
    ~

We live in an age of heterogeneous anarchism. Contingency is king. Fluidity and flux win over solidity and stasis. Becoming has replaced being. Rhizomes are better than trees. To be political today, one must laud horizontality. Anti-essentialism and anti-foundationalism are the order of the day. Call it “vulgar ’68-ism.” The principles of social upheaval, so associated with the new social movements in and around 1968, have succeeded in becoming the very bedrock of society at the new millennium.

    But there’s a flaw in this narrative, or at least a part of the story that strategically remains untold. The “reticular fallacy” can be broken down into two key assumptions. The first is an assumption about the nature of sovereignty and power. The second is an assumption about history and historical change. Consider them both in turn.

    (1) First, under the reticular fallacy, sovereignty and power are defined in terms of verticality, centralization, essence, foundation, or rigid creeds of whatever kind (viz. dogma, be it sacred or secular). Thus the sovereign is the one who is centralized, who stands at the top of a vertical order of command, who rests on an essentialist ideology in order to retain command, who asserts, dogmatically, unchangeable facts about his own essence and the essence of nature. This is the model of kings and queens, but also egos and individuals. It is what Barthes means by author in his influential essay “Death of the Author,” or Foucault in his “What is an Author?” This is the model of the Prince, so often invoked in political theory, or the Father invoked in psycho-analytic theory. In Derrida, the model appears as logos, that is, the special way or order of word, speech, and reason. Likewise, arkhe: a term that means both beginning and command. The arkhe is the thing that begins, and in so doing issues an order or command to guide whatever issues from such a beginning. Or as Rancière so succinctly put it in his Hatred of Democracy, the arkhe is both “commandment and commencement.” These are some of the many aspects of sovereignty and power as defined in terms of verticality, centralization, essence, and foundation.

(2) The second assumption of the reticular fallacy is that, given the elimination of such dogmatic verticality, there will follow an elimination of sovereignty as such. In other words, if the aforementioned sovereign power should crumble or fall, for whatever reason, the very nature of command and organization will also vanish. Under this second assumption, the structure of sovereignty and the structure of organization become coterminous, superimposed in such a way that the shape of organization assumes the identical shape of sovereignty. Sovereign power is vertical, hence organization is vertical; sovereign power is centralized, hence organization is centralized; sovereign power is essentialist, hence organization, and so on. Here we see the claims of, let’s call it, “naïve” anarchism (the non-arkhe, or non-foundation), which assumes that repressive force lies in the hands of the bosses, the rulers, or the hierarchy per se, and thus that after the elimination of such hierarchy, life will revert to a more direct form of social interaction. (I say this not to smear anarchism in general, and will often wish to defend a form of anarcho-syndicalism.) At the same time, consider the case of bourgeois liberalism, which asserts the rule of law and constitutional right as a way to mitigate the excesses of both royal fiat and popular caprice.

[image: reticular connective tissue (source: imgkid.com)]

    We name this the “reticular” fallacy because, during the late Twentieth Century and accelerating at the turn of the millennium with new media technologies, the chief agent driving the kind of historical change described in the above two assumptions was the network or rhizome, the structure of horizontal distribution described so well in Deleuze and Guattari. The change is evident in many different corners of society and culture. Consider mass media: the uni-directional broadcast media of the 1920s or ’30s gradually gave way to multi-directional distributed media of the 1990s. Or consider the mode of production, and the shift from a Fordist model rooted in massification, centralization, and standardization, to a post-Fordist model reliant more on horizontality, distribution, and heterogeneous customization. Consider even the changes in theories of the subject, shifting as they have from a more essentialist model of the integral ego, however fraught by the volatility of the unconscious, to an anti-essentialist model of the distributed subject, be it postmodernism’s “schizophrenic” subject or the kind of networked brain described by today’s most advanced medical researchers.

    Why is this a fallacy? What is wrong about the above scenario? The problem isn’t so much with the historical narrative. The problem lies in an unwillingness to derive an alternative form of sovereignty appropriate for the new rhizomatic societies. Opponents of the reticular fallacy claim, in other words, that horizontality, distributed networks, anti-essentialism, etc., have their own forms of organization and control, and indeed should be analyzed accordingly. In the past I’ve used the concept of “protocol” to describe such a scenario as it exists in digital media infrastructure. Others have used different concepts to describe it in different contexts. On the whole, though, opponents of the reticular fallacy have not effectively made their case, myself included. The notion that rhizomatic structures are corrosive of power and sovereignty is still the dominant narrative today, evident across both popular and academic discourses. From talk of the “Twitter revolution” during the Arab Spring, to the ideologies of “disruption” and “flexibility” common in corporate management speak, to the putative egalitarianism of blog-based journalism, to the growing popularity of the Deleuzian and Latourian schools in philosophy and theory: all of these reveal the contemporary assumption that networks are somehow different from sovereignty, organization, and control.

    To summarize, the reticular fallacy refers to the following argument: since power and organization are defined in terms of verticality, centralization, essence, and foundation, the elimination of such things will prompt a general mollification if not elimination of power and organization as such. Such an argument is false because it doesn’t take into account the fact that power and organization may inhabit any number of structural forms. Centralized verticality is only one form of organization. The distributed network is simply a different form of organization, one with its own special brand of management and control.

Consider the kind of methods and concepts still popular in critical theory today: contingency, heterogeneity, anti-essentialism, anti-foundationalism, anarchism, chaos, plasticity, flux, fluidity, horizontality, flexibility. Such concepts are often praised and deployed in theories of the subject, analyses of society and culture, even descriptions of ontology and metaphysics. The reticular fallacy does not invalidate such concepts. But it does put them in question. We cannot assume that such concepts are merely descriptive or neutrally empirical. Given the way in which horizontality, flexibility, and contingency are sewn into the mode of production, such “descriptive” claims are at best mirrors of the economic infrastructure and at worst ideologically suspect. At the same time, we cannot simply assume that such concepts are, by nature, politically or ethically desirable in themselves. Rather, we ought to reverse the line of inquiry. The many qualities of rhizomatic systems should be understood not as the pure and innocent laws of a newer and more just society, but as the basic tendencies and conventional rules of protocological control.


    _____

Alexander R. Galloway is a writer and computer programmer working on issues in philosophy, technology, and theories of mediation. Professor of Media, Culture, and Communication at New York University, he is author of several books and dozens of articles on digital media and critical theory, including Protocol: How Control Exists after Decentralization (MIT, 2006), Gaming: Essays in Algorithmic Culture (University of Minnesota, 2006), The Interface Effect (Polity, 2012), and most recently Laruelle: Against the Digital (University of Minnesota, 2014), reviewed here earlier in 2014. Galloway has recently been writing brief notes on media and digital culture and theory at his blog, on which this post first appeared.


  • Teacher Wars and Teaching Machines

    Teacher Wars and Teaching Machines

a review of Dana Goldstein, The Teacher Wars: A History of America’s Most Embattled Profession (Doubleday, 2014)
    by Audrey Watters
    ~

Teaching is, according to the subtitle of education journalist Dana Goldstein’s new book, “America’s Most Embattled Profession.” “No other profession,” she argues, “operates under this level of political scrutiny, not even those, like policing or social work, that are also tasked with public welfare and are paid for with public funds.”

That political scrutiny is not new. Goldstein’s book The Teacher Wars chronicles the history of teaching at (what has become) the K–12 level, from the early nineteenth century and “common schools” — that is, before compulsory education and public school as we know it today — through the latest Obama Administration education policies. It’s an incredibly well-researched book that moves from the feminization of the teaching profession to the recent push for more data-driven teacher evaluation, observing how all along the way, teachers have been deemed ineffectual in some way or another — failing to fulfill whatever (political) goals the public education system has demanded be met, be those goals economic, civic, or academic.

    As Goldstein describes it, public education is a labor issue; and it has been, it’s important to note, since well before the advent of teacher unions.

    The Teacher Wars and Teaching Machines

    To frame education this way — around teachers and by extension, around labor — has important implications for ed-tech. What happens if we examine the history of teaching alongside the history of teaching machines? As I’ve argued before, the history of public education in the US, particularly in the 20th century, is deeply intertwined with various education technologies – film, TV, radio, computers, the Internet – devices that are often promoted as improving access or as making an outmoded system more “modern.” But ed-tech is frequently touted too as “labor-saving” and as a corrective to teachers’ inadequacies and inefficiencies.

It’s hardly surprising, in this light, that teachers have long looked with suspicion at new education technologies. With their profession constantly under attack, many teachers are no doubt worried that new tools are poised to replace them. Much is said to quiet these fears, with education reformers and technologists insisting again and again that replacing teachers with tech is not the intention.

    And yet the sentiment of science fiction writer Arthur C. Clarke probably does resonate with a lot of people, as a line from his 1980 Omni Magazine article on computer-assisted instruction is echoed by all sorts of pundits and politicians: “Any teacher who can be replaced by a machine should be.”

    Of course, you do find people like former Washington DC mayor Adrian Fenty – best known arguably via his school chancellor Michelle Rhee – who’ll come right out and say to a crowd of entrepreneurs and investors, “If we fire more teachers, we can use that money for more technology.”

So it’s hard to ignore the role that technology increasingly plays in contemporary education (labor) policies – as Goldstein describes them, the weakening of teachers’ tenure protections alongside an expansion of standardized testing to measure “student learning,” all in the service of finding and firing “bad teachers.” The growing data collection and analysis enabled by schools’ adoption of ed-tech feeds into the politics and practices of employee surveillance.

    Just as Goldstein discovered in the course of writing her book that the current “teacher wars” have a lengthy history, so too does ed-tech’s role in the fight.

    As Sidney Pressey, the man often credited with developing the first teaching machine, wrote in 1933 (from a period Goldstein links to “patriotic moral panics” and concerns about teachers’ political leanings),

There must be an “industrial revolution” in education, in which educational science and the ingenuity of educational technology combine to modernize the grossly inefficient and clumsy procedures of conventional education. Work in the schools of the future will be marvelously though simply organized, so as to adjust almost automatically to individual differences and the characteristics of the learning process. There will be many labor-saving schemes and devices, and even machines — not at all for the mechanizing of education but for the freeing of teacher and pupil from the educational drudgery and incompetence.

Or as B. F. Skinner, the man most associated with the development of teaching machines, wrote in 1953 (one year before the landmark Brown v. Board of Education),

    Will machines replace teachers? On the contrary, they are capital equipment to be used by teachers to save time and labor. In assigning certain mechanizable functions to machines, the teacher emerges in his proper role as an indispensable human being. He may teach more students than heretofore — this is probably inevitable if the world-wide demand for education is to be satisfied — but he will do so in fewer hours and with fewer burdensome chores.

    These quotations highlight the longstanding hopes and fears about teaching labor and teaching machines; they hint too at some of the ways in which the work of Pressey and Skinner and others coincides with what Goldstein’s book describes: the ongoing concerns about teachers’ politics and competencies.

    The Drudgery of School

    One of the things that’s striking about Skinner and Pressey’s remarks on teaching machines, I think, is that they recognize the “drudgery” of much of teachers’ work. But rather than fundamentally change school – rather than ask why so much of the job of teaching entails “burdensome chores” – education technology seems more likely to offload that drudgery to machines. (One of the best contemporary examples of this perhaps: automated essay grading.)

    This has powerful implications for students, who – let’s be honest – suffer through this drudgery as well.

    Goldstein’s book doesn’t really address students’ experiences. Her history of public education is focused on teacher labor more than on student learning. As a result, student labor is missing from her analysis. This isn’t a criticism of the book; and it’s not just Goldstein that does this. Student labor in the history of public education remains largely under-theorized and certainly underrepresented. Cue AFT president Al Shanker’s famous statement: “Listen, I don’t represent children. I represent the teachers.”

    But this question of student labor seems to be incredibly important to consider, particularly with the growing adoption of education technologies. Students’ labor – students’ test results, students’ content, students’ data – feeds the measurements used to reward or punish teachers. Students’ labor feeds the algorithms – algorithms that further this larger narrative about teacher inadequacies, sure, and that serve to financially benefit technology, testing, and textbook companies, the makers of today’s “teaching machines.”

    Teaching Machines and the Future of Collective Action

    The promise of teaching machines has long been to allow students to move “at their own pace” through the curriculum. “Personalized learning,” it’s often called today (although the phrase often refers only to “personalization” in terms of the pace, not in terms of the topics of inquiry). This means, supposedly, that instead of whole class instruction, the “work” of teaching changes: in the words of one education reformer, “with the software taking up chores like grading math quizzes and flagging bad grammar, teachers are freed to do what they do best: guide, engage, and inspire.”

    Again, it’s not clear how this changes the work of students.

So what are the implications – not just pedagogically but politically – of students, their headphones on, staring at their individual computer screens, working alone through various exercises? Because let’s remember: teaching machines and all education technologies are ideological. What are the implications – not just pedagogically but politically – of these technologies’ emphasis on individualism, self-management, personal responsibility, and autonomy?

    What happens to discussion and debate, for example, in a classroom of teaching machines and “personalized learning”? What happens, in a world of schools catered to individual student achievement, to the community development that schools (at their best, at least) are also tasked to support?

What happens to organizing? What happens to collective action? And by collectivity here, let’s be clear, I don’t mean simply “what happens to teachers’ unions”? If we think about The Teacher Wars and teaching machines side-by-side, we should recognize that our analysis of (and our actions surrounding) the labor issues of school needs to go much deeper and farther than that.

    _____

Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this review first appeared.


  • The Man Who Loved His Laptop

    The Man Who Loved His Laptop

a review of Spike Jonze (dir.), Her (2013)
    by Mike Bulajewski
    ~
I’m told by my sister, who is married to a French man, that the French don’t say “I love you”—or at least they don’t say it often. Perhaps they think the words are superfluous and that the behavior of the person you are in a relationship with tells you everything. Americans, on the other hand, say it to everyone—lovers, spouses, friends, parents, grandparents, children, pets—and as often as possible, as if quantity matters most. The declaration is also an event. For two people beginning a relationship, it marks a turning point and a new stage in the relationship.

    If you aren’t American, you may not have realized that relationships have stages. In America, they do. It’s complicated. First there are the three main thresholds of commitment: Dating, Exclusive Dating, then of course Marriage. There are three lesser pre-Dating stages: Just Talking, Hooking Up and Friends with Benefits; and one minor stage between Dating and Exclusive called Pretty Much Exclusive. Within Dating, there are several minor substages: number of dates (often counted up to the third date) and increments of physical intimacy denoted according to the well-known baseball metaphor of first, second, third and home base.

There are also a number of rituals that indicate progress: updating of Facebook relationship statuses; leaving a toothbrush at each other’s houses; the aforementioned exchange of I-love-you’s; taking a vacation together; meeting the parents; exchange of house keys; and so on. When people, especially unmarried people, talk about relationships, often the first questions are about these stages and rituals. In France the system is apparently much less codified. One convention not present in the United States is that romantic interest is signaled when a man invites a woman to go for a walk with him.

The point is two-fold: first, although Americans admire French culture and often think of it as holding up a standard for what romance ought to be, Americans act nothing like the French in relationships and in fact know very little about how relationships work in France. Second and more importantly, in American culture love is widely understood as spontaneous and unpredictable, and yet there is also an opposite and often unacknowledged expectation that relationships follow well-defined rules and rituals.

    This contradiction might explain the great public clamor over romance apps like Romantimatic and BroApp that automatically send your significant other romantic messages, either predefined or your own creation, at regular intervals—what philosopher of technology Evan Selinger calls (and not without justification) apps that outsource our humanity.

    Reviewers of these apps were unanimous in their disapproval, disagreeing only on where to locate them on a spectrum between pretty bad and sociopathic. Among all the labor-saving apps and devices, why should this one in particular be singled out for opprobrium?

    Perhaps one reason for the outcry is that they expose an uncomfortable truth about how easily romance can be automated. Something we believe is so intimate is revealed as routine and predictable. What does it say about our relationship needs that the right time to send a loving message to your significant other can be reduced to an algorithm?
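To see why the reduction feels uncomfortable, it helps to notice how little machinery it takes. The sketch below is not how Romantimatic or BroApp actually work; it is a deliberately crude illustration, with a made-up send_text stub standing in for whatever messaging service a real app would call.

```python
# A crude illustration of "automated affection": stock phrases sent on a
# fixed schedule. send_text() is a placeholder, not a real messaging API.
import random
import time

PHRASES = [
    "Thinking of you.",
    "I love you.",
    "Can't wait to see you tonight.",
]

def send_text(recipient, message):
    # Stand-in for an SMS or chat API call.
    print(f"to {recipient}: {message}")

def run(recipient, interval_hours=6):
    """Send a prewritten romantic message at regular intervals."""
    while True:
        send_text(recipient, random.choice(PHRASES))
        time.sleep(interval_hours * 3600)

if __name__ == "__main__":
    run("significant other")
```

That the entire gesture fits in a dozen lines is the point: the discomfort these apps provoke comes less from the code than from how well a script can stand in for the ritual.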

    The routinization of American relationships first struck me in the context of this little-known fact about how seldom French people say “I love you.” If you had to launch one of these romance apps in France, it wouldn’t be enough to just translate the prewritten phrases into French. You’d have to research French romantic relationships and discover what are the most common phrases—if there are any—and how frequently text messages are used for this purpose. It’s possible that French people are too unpredictable, or never use text messages for romantic purposes, so the app is just not feasible in France.

Romance is culturally determined. That American romance can be so easily automated reveals how standardized and even scheduled relationships already are. Selinger’s argument that automated romance undermines our humanity has some merit, but why stop with apps? Why not address the problem at a more fundamental level and critique the standardized courtship system that regulates romance? Doesn’t this also outsource our humanity?

The best-selling relationship advice book The 5 Love Languages claims that everyone understands one of five love “languages” and that the key to a happy relationship is for each partner to learn to express love in the correct language. Should we be surprised if the more technically minded among us conclude that the problem of love can be solved with technology? Why not try to determine the precise syntax and semantics of these love languages, and attempt to express them rigorously and unambiguously in the same way that computer languages and communications protocols are? Can love be reduced to grammar?

    Spike Jonze’s Her (2013) tells the story of Theodore Twombly, a soon-to-be divorced writer who falls in love with Samantha, an AI operating system who far exceeds the abilities of today’s natural language assistants like Apple’s Siri or Microsoft’s Cortana. Samantha is not only hyper-intelligent, she’s also capable of laughter, telling jokes, picking up on subtle unspoken interpersonal cues, feeling and communicating her own emotions, and so on. Theodore falls in love with her, but there is no sense that their relationship is deficient because she’s not human. She is as emotionally expressive as any human partner, at least on film.

Theodore works for a company called BeautifulHandwrittenLetters.com as a professional Cyrano de Bergerac (or perhaps a human Romantimatic), ghostwriting heartfelt “handwritten” letters on behalf of his clients. It’s an ironic twist: Samantha is his simulated girlfriend, a role which he himself adopts at work by simulating the feelings of his clients. The film opens with Theodore at his desk at work, narrating a letter from a wife to her husband on the occasion of their 50th wedding anniversary. He is a master of the conventions of the love letter. Later in the film, his work is discovered by a literary agent, and he gets an offer to have a book of his best work published.

[video: https://www.youtube.com/watch?v=CxahbnUCZxY]

But for all his (alleged) expertise as a romantic writer, Theodore is lonely, emotionally stunted, ambivalent towards the women in his life, and—at least before meeting Samantha—apparently incapable of maintaining relationships since he separated from his ex-wife Catherine. Highly sensitive, he is disturbed by encounters with women that go off the script: a phone sex encounter goes awry when the woman demands that he enact her bizarre fantasy of being choked with a dead cat; and on a date one night, a woman exposes a little too much vulnerability and drunkenly expresses her fear that he won’t call her. He abruptly and awkwardly ends the date.

    Theodore wanders aimlessly through the high tech city as if it is empty. With headphones always on, he’s withdrawn, cocooned in a private sonic bubble. He interacts with his device through voice, asking it to play melancholy songs and skipping angry messages from his attorney demanding that he sign the divorce papers already. At times, he daydreams of happier times when he and his ex-wife were together and tells Samantha how much he liked being married. At first it seems that Catherine left him. We wonder if he withdrew from the pain of his heartbreak. But soon a different picture emerges. When they finally meet to sign the divorce papers over lunch, Catherine accuses him of not being able to handle her emotions and reveals that he tried to get her on Prozac. She says to him “I always felt like you wished I could just be a happy, light, everything’s great, bouncy L.A. wife. But that’s not me.”

So Theodore’s avoidance of real challenges and emotions in relationships turns out to be an ongoing problem—the cause, not the consequence, of his divorce. Starting a relationship with his operating system Samantha is his latest retreat from reality—not from physical reality, but from the virtual reality of authentic intersubjective contact.

Unlike his other relationships, Samantha is perfectly customized to his needs. She speaks his “love language.” Today we personalize our operating systems and fill out online dating profiles specifying exactly what kind of person we’re looking for. When Theodore installs Samantha on his computer for the first time, the two operations are combined in a single question. The system asks him how he would describe his relationship with his mother. He begins to reply with psychological banalities about how she is insufficiently attuned to his needs, and it quickly stops him, already knowing what he’s about. And so do we.

That Theodore is selfish doesn’t mean that he is unfeeling, unkind, insensitive, conceited or uninterested in his new partner’s thoughts, feelings and goals. His selfishness is the kind that’s approved and even encouraged today, the ethically consistent selfishness that respects the right of others to be equally selfish. What he wants most of all is to be comfortable, to feel good, and that requires a partner who speaks his love language and nothing else, someone who says nothing that would veer off-script and reveal too many disturbing details. More precisely, Theodore wants someone who speaks what Lacan called empty speech: speech that obstructs the revelation of the subject’s traumatic desire.

    Objectification is a traditional problem between men and women. Men reduce women to mere bodies or body parts that exist only for sexual gratification, treating them as sex objects rather than people. The dichotomy is between the physical as the domain of materiality, animality and sex on one hand, and the spiritual realm of subjectivity, personality, agency and the soul on the other. If objectification eliminates the soul, then Theodore engages in something like the opposite, a subjectification which eradicates the body. Samantha is just a personality.

Technology writer Nicholas Carr’s new book The Glass Cage: Automation and Us (Norton, 2014) investigates the ways that automation and artificial intelligence dull our cognitive capacities. Her can be read as a speculative treatment of the same idea as it relates to emotion. What if the difficulty of relationships could be automated away? The film’s brilliant provocation is that it shows us a lonely, hollow world mediated through technology but nonetheless awash in sentimentality. It thwarts our expectations that algorithmically-generated emotion would be as stilted and artificial as today’s speech synthesizers. Samantha’s voice is warm, soulful, relatable and expressive. She’s real, and the feelings she triggers in Theodore are real.

    But real feelings with real sensations can also be shallow. As Maria Bustillo notes, Theodore is an awful writer, at least by today’s standards. Here’s the kind of prose that wins him accolades from everyone around him:

    I remember when I first started to fall in love with you like it was last night. Lying naked beside you in that tiny apartment, it suddenly hit me that I was part of this whole larger thing, just like our parents, and our parents’ parents. Before that I was just living my life like I knew everything, and suddenly this bright light hit me and woke me up. That light was you.

    In spite of this, we’re led to believe that Theodore is some kind of literary genius. Various people in his life compliment him on his skill and the editor of the publishing company who wants to publish his work emails to tell him how moved he and his wife were when they read them. What kind of society would treat such pedestrian writing as unusual, profound or impressive? And what is the average person’s writing like if Theodore’s services are worth paying for?

    Recall the cult favorite Idiocracy (2006) directed by Mike Judge, a science fiction satire set in a futuristic dystopia where anti-intellectualism is rampant and society has descended into stupidity. We can’t help but conclude that Her offers a glimpse into a society that has undergone a similar devolution into both emotional and literary idiocy.

    _____

    Mike Bulajewski (@mrteacup) is a user experience designer with a Master’s degree from University of Washington’s Human Centered Design and Engineering program. He writes about technology, psychoanalysis, philosophy, design, ideology & Slavoj Žižek at MrTeacup.org, where an earlier version of this review first appeared.


  • Program and Be Programmed

    Program and Be Programmed

a review of Wendy Chun, Programmed Visions: Software and Memory (MIT Press, 2013)
    by Zachary Loeb
    ~

    Type a letter on a keyboard and the letter appears on the screen, double-click on a program’s icon and it opens, use the mouse in an art program to draw a line and it appears. Yet knowing how to make a program work is not the same as knowing how or why it works. Even a level of skill approaching mastery of a complicated program does not necessarily mean that the user understands how the software works at a programmatic level. This is captured in the canonical distinctions between users and “power users,” on the one hand, and between users and programmers on the other. Whether being a power user or being a programmer gives one meaningful power over machines themselves should be a more open question than injunctions like Douglas Rushkoff’s “program or be programmed” or the general opinion that every child must learn to code appear to allow.

    Sophisticated computer programs give users a fantastical set of abilities and possibilities. But to what extent does this sense of empowerment depend on faith in the unseen and even unknown codes at work in a given program? We press a key on a keyboard and a letter appears on the screen—but do we really know why? These are some of the questions that Wendy Hui Kyong Chun poses in Programmed Visions: Software and Memory, which provides a useful history of early computing alongside a careful analysis of the ways in which computers are used—and use their users—today. Central to Chun’s analysis is her insistence “that a rigorous engagement with software makes new media studies more, rather than less, vapory” (21), and her book succeeds admirably in this regard.

    The central point of Chun’s argument is that computers (and media in general) rely upon a notion of programmability that has become part of the underlying societal logic of neoliberal capitalism. In a society where computers are tied ever more closely to power, Chun argues that canny manipulation of software restores a sense of control or sovereignty to individual users, even as their very reliance upon this software constitutes a type of disempowerment. Computers are the driving force and grounding metaphor behind an ideology that seeks to determine the future—a future that “can be bought and sold” and which “depends on programmable visions that extrapolate the future—or more precisely, a future—based on the past” (9).

Yet one of the pleasures of contemporary computer usage is that one need not fully understand much of what is going on to be able to enjoy the benefits of the computer. Though we may use computer technology to answer critical questions, this does not necessarily mean we are asking critical questions about computer technology. As Chun explains, echoing Michel Foucault, “software, free or not, is embodied and participates in structures of knowledge-power” (21); users become tangled in these structures once they start using a given device or program. Much of this “knowledge-power” is bound up in the layers of code which make software function: the code is what gives the machine its directions, what ensures that the tapping of the letter “r” on the keyboard leads to that letter appearing on the screen. Nevertheless, this code typically goes unseen, especially as it becomes source code, and winds up being buried ever deeper, even though this source code is what “embodies the power of the executive, the power of enforcement” (27). Importantly, the ability to write code, the programmer’s skill, does not in and of itself provide systematic power: computers follow “a set of rules that programmers must follow” (28). A sense of power over certain aspects of a computer is still incumbent upon submitting to the control of other elements of the computer.

Contemporary computers, and our many computer-esque devices (such as smart phones and tablets), are the primary sites in which most of us encounter the codes and programming about which Chun writes, but she goes to some lengths to introduce the reader to the history of programming. For it is against the historical backdrop of military research, during the Second World War, that one can clearly see the ways in which notions of control, the unquestioning following of orders, and hierarchies have long been at work within computation and programming. Beyond providing an enlightening aside into the vital role that women played in programming history, analyzing the early history of computing demonstrates how structured programming emerged as a means of cutting down on repetitive work, an approach that “limits the logical procedures coders can use, and insists that the program consist of small modular units, which can be called from the main program” (36). Gradually this emphasis on structured programming allows for more and more processes to be left to the machine, and thus processes and codes become hidden from view even as future programmers are taught to conform to the demands that will allow new programs to successfully make use of these early programs. Therefore the processes that were once a result of expertise come to be assumed aspects of the software—they become automated—and it is this very automation (“automatic programming”) that “allows the production of computer-enabled human-readable code” (41).
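A deliberately trivial sketch (mine, not Chun’s) can make the shape of this concrete: in the structured style she describes, the main routine reads as a sequence of small named units, and everything those names actually do, let alone what the interpreter and operating system do beneath them, stays out of view. The filename is hypothetical.

```python
# A toy illustration of structured programming as Chun describes it:
# small modular units called from a main program. The legible "plan"
# sits on top; the machinery that executes it recedes from view.

def read_words(path):
    with open(path) as f:
        return f.read().split()

def normalize(words):
    return [w.strip(".,;:!?").lower() for w in words]

def tally(words):
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    return counts

def report(counts, top=5):
    for word, n in sorted(counts.items(), key=lambda kv: -kv[1])[:top]:
        print(f"{word}: {n}")

def main():
    # The program is readable as a plan of named steps.
    words = normalize(read_words("essay.txt"))  # hypothetical input file
    report(tally(words))

if __name__ == "__main__":
    main()
```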

    As the codes and programs become hidden by ever more layers of abstraction, the computer simultaneously and paradoxically appears to make more of itself visible (through graphic user interfaces, for example), while the code itself recedes ever further into the background. This transition is central to the computer’s rapid expansion into ever more societal spheres, and it is an expansion that Chun links to the influence of neoliberal ideology. The computer with its easy-to-use interfaces creates users who feel as though they are free and empowered to manipulate the machine even as they rely on the codes and programs that they do not see. Freedom to act becomes couched in code that predetermines the range and type of actions that the users are actually free to take. What transpires, as Chun writes, is that “interfaces and operating systems produce ‘users’—one and all” (67).

    Without fully comprehending the codes that lead from a given action (a user presses a button) to a given result, the user is positioned to believe ever more in the power of the software/hardware hybrid, especially as increased storage capabilities allow for computers to access vast informational troves. In so doing, the technologically-empowered user has been conditioned to expect a programmable world akin to the programmed devices they use to navigate that world—it has “fostered our belief in the world as neoliberal: as an economic game that follows certain rules” (92). And this takes place whether or not we understand who wrote those rules, or how they can be altered.

    This logic of programmability may be most readily linked to inorganic machines, but Chun also demonstrates the ways in which it has been applied to the organic world. In truth, the idea that the organic can be programmed predates the computer; as Chun explains, “breeding encapsulates an early logic of programmability… Eugenics, in other words, was not simply a factor driving the development of high-speed mass calculation at the level of content… but also at the level of operationality” (124). In considering the idea that the organic can be programmed, what emerges is a sense of the way that programming has long been associated with a certain will to exert control over things, be they organic or inorganic. Far from being a digression, Chun’s discussion of eugenics provides a fascinating historical comparison, given the way in which its decline in acceptance seems to dovetail with the steady ascendance of the programmable machine.

    The intersection of software and memory (or “software as memory”) is an essential matter to consider given the informational explosion that has occurred with the spread of computers. Yet, as Chun writes eloquently, “information is ‘undead’; neither alive nor dead, neither quite present nor absent” (134), since computers simultaneously promise to make ever more information available while making the future of much of this information precarious (insofar as access may rely upon software and hardware that no longer function). Chun elucidates the ways in which the shift from analog to digital has permitted a greater number of users to enjoy the benefits of computers, even as this shift has made much that goes on inside a computer (software and hardware) less transparent. While the machine’s memory may seem ephemeral and (to humans) illegible, accessing information in “storage” involves codes that read by re-writing elsewhere. This “battle of diligence between the passing and the repetitive” characterizing machine memory, Chun argues, “also characterizes content today” (170). Users rely upon a belief that the information they seek will be available and that they will be able to call upon it with a few simple actions, even though they do not see (and usually cannot see) the processes that make this information present or withhold it from presentation.

    When people make use of computers today they find themselves looking—quite literally—at what the software presents to them, yet in enabling this act of seeing, the programming has also determined much of what the user does not see. Programmed Visions is an argument for recognizing that sometimes the power structures that most shape our lives go unseen—even if we are staring right at them.

    * * *

    With Programmed Visions, Chun has crafted a nuanced, insightful, and dense, if highly readable, contribution to discussions about technology, media, and the digital humanities. It is a book that demonstrates Chun’s impressive command of a variety of topics and the way in which she can engagingly shift from history to philosophy to explanations of a more technical sort. Throughout the book Chun deftly draws upon a range of classic and contemporary thinkers, whilst raising and framing new questions and lines of inquiry even as she seeks to provide answers on many other topics.

    Though peppered with many wonderful turns of phrase, Programmed Visions remains a challenging book. All readers will come to it with their own background and knowledge of coding, programming, software, and so forth, but the simple truth is that Chun’s point (that many people do not understand software sufficiently) may leave many a reader feeling somewhat taken aback. For most computer users—even many programmers and many whose research involves the study of technology and media—are quite complicit in the situation that Chun describes. It is the sort of discomforting confrontation that is valuable precisely because of the anxiety it provokes. Most users take for granted that the software will work the way they expect it to—hence the frustration bordering on fury that many people experience when the machine suddenly does something other than what is expected, provoking a maddened outburst of “why aren’t you working!” What Chun helps demonstrate is that it is not so much that the machines betray us, but that we were mistaken in thinking that machines ever really obeyed us.

    It will be easy for many readers to see themselves as the user that Chun describes—as someone positioned to feel empowered by the devices they use, even as that power depends upon faith in forces the user cannot see, understand, or control. Even power users and programmers, on careful self-reflection, may identify with Chun’s relocation of the programmer from a position of authority to a role wherein they too must comply with the strictures of the code; this relocation presents an important argument for considerations of such labor. Furthermore, the way in which Chun links the power of the machine to the overarching ideology of neoliberalism makes her argument useful for discussions broader than those in media studies and the digital humanities. What makes these arguments particularly interesting is the way in which Chun locates them within thinking about software. As she writes towards the end of the second chapter, “this chapter is not a call to return to an age when one could see and comprehend the actions of our computers. Those days are long gone… Neither is this chapter an indictment of software or programming… It is, however, an argument against common-sense notions of software precisely because of their status as common sense” (92). Such a statement refuses to provide the anxious reader (who has come to see themselves as an uninformed user) with a clear answer, for it suggests that the “common-sense” clear answer is part of what has disempowered them.

    The weaving together of historical details regarding computing during World War II and eugenics provides an excellent and challenging backdrop against which Chun’s arguments regarding programmability can grow. Chun lucidly describes how the embodiment, materiality, and obsolescence of information serve as major challenges confronting those who seek to manage and understand the massive informational flux that computer technology has enabled. The idea of information as “undead” is both amusing and evocative, as it provides a rich way of describing the “there but not there” of information while simultaneously playing upon the slight horror and uneasiness that seem to lurk below the surface of the confrontation with information.

    As Chun sets herself the difficult task of exploring many areas, there are some topics where the reader may be left wanting more. The section on eugenics presents a troubling and fascinating argument—one which could likely have been a book in and of itself—especially when considered in the context of arguments about cyborg selves and post-humanity, and it is a section that almost seems to have been cut short. Likewise the discussion of race (“a thread that has been largely invisible yet central,” 179), which is brought to the fore in the epilogue, confronts the reader with something that seems like it could in fact be the introduction to another book. It leaves the reader with much to contemplate—though it is the fact that this thread was not truly “largely invisible” that makes the reader, upon reaching the epilogue, wish that the book could have dealt with the matter at greater length. Yet these are fairly minor concerns—that Programmed Visions leaves its readers re-reading sections to process them in light of later points is a credit to the text.

    Programmed Visions: Software and Memory is an alternately troubling, enlightening, and fascinating book. It allows its reader to look at software and hardware in a new way, with fresh insight about this act of sight. It is a book that plants a question (or perhaps subtly programs one into the reader’s mind): what are you not seeing, what power relations remain invisible, between the moment during which the “?” is hit on the keyboard and the moment it appears on the screen?


    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, alternative forms of technology, and libraries as models of resistance. Using the moniker “The Luddbrarian” Loeb writes at the blog librarianshipwreck. He has previously reviewed The People’s Platform by Astra Taylor and Social Media: A Critical Introduction by Christian Fuchs for boundary2.org.


  • Who Big Data Thinks We Are (When It Thinks We're Not Looking)

    Who Big Data Thinks We Are (When It Thinks We're Not Looking)

    a review of Christian Rudder, Dataclysm: Who We Are (When We Think No One’s Looking) (Crown, 2014)
    by Cathy O’Neil
    ~
    Here’s what I’ve spent the last couple of days doing: alternately reading Christian Rudder’s new book Dataclysm and proofreading a report by AAPOR which discusses the benefits, dangers, and ethics of using big data, which is mostly “found” data originally meant for some other purpose, as a replacement for public surveys, with their carefully constructed data collection processes and informed consent. The AAPOR folk have asked me to provide tangible examples of the dangers of using big data to infer things about public opinion, and I am tempted to simply ask them all to read Dataclysm as exhibit A.

    Rudder is a co-founder of OKCupid, an online dating site. His book mainly pertains to how people search for love and sex online, and how they represent themselves in their profiles.

    Here’s something that I will mention for context on his data explorations: Rudder likes to crudely provoke, as he displayed when he wrote this recent post explaining how OKCupid experiments on users. He enjoys playing the part of the somewhat creepy detective, peering into what OKCupid users thought was a somewhat private place to prepare themselves for the dating world. It’s the online equivalent of a video camera in a changing booth at a department store, which he defended not-so-subtly on a recent NPR show called On The Media, and which was written up here.

    I won’t dwell on that aspect of the story because I think it’s a good and timely conversation, and I’m glad the public is finally waking up to what I’ve known for years is going on. I’m actually happy Rudder is so nonchalant about it because there’s no pretense.

    Even so, I’m less happy with his actual data work. Let me tell you why I say that with a few examples.

    Who Are OKCupid Users?

    I spent a lot of time with my students this summer saying that a standalone number wouldn’t be interesting, that you have to compare that number to some baseline that people can understand. So if I told you how many black kids have been stopped and frisked this year in NYC, I’d also need to tell you how many black kids live in NYC for you to get an idea of the scope of the issue. It’s a basic fact about data analysis and reporting.

    When you’re dealing with populations on dating sites and you want to conclude things about the larger culture, the relevant “baseline comparison” is how well the members of the dating site represent the population as a whole. Rudder doesn’t do this. Instead, for the first few chapters he just says there are lots of OKCupid users, and then later on, after he’s made a few spectacularly broad statements, on page 104 he compares the users of OKCupid to the wider population of internet users, but not to the general population.

    It’s an inappropriate baseline, made too late. Because I’m not sure about you but I don’t have a keen sense of the population of internet users. I’m pretty sure very young kids and old people are not well represented, but that’s about it. My students would have known to compare a population to the census. It needs to happen.

    How Do You Collect Your Data?

    Let me back up to the very beginning of the book, where Rudder startles us by showing us that the men that women rate “most attractive” are about their age whereas the women that men rate “most attractive” are consistently 20 years old, no matter how old the men are.

    Actually, I am projecting. Rudder never specifically tells us what the rating is, how exactly it’s worded, and how the profiles are presented to the different groups. And that’s a problem, which he ignores completely until much later in the book, when he mentions that how survey questions are worded can have a profound effect on how people respond, but his target is someone else’s survey, not his OKCupid environment.

    Words matter, and they matter differently for men and women. So for example, if there were a button for “eye candy,” we might expect women to choose more young men. If my guess is correct, and the term in use is “most attractive”, then for men it might well trigger a sexual concept whereas for women it might trigger a different social construct; indeed I would assume it does.

    Since this isn’t a porn site but a dating site, we are not filtering for purely visual appeal; we are looking for relationships. We are thinking beyond what turns us on physically and asking ourselves, who would we want to spend time with? Who would our family like us to be with? Who would make us attractive to ourselves? Those are different questions and provoke different answers. And they are culturally interesting questions, which Rudder never explores. A lost opportunity.

    Next, how does the recommendation engine work? I can well imagine that, once you’ve rated Profile A high, there is an algorithm that finds Profile B such that “people who liked Profile A also liked Profile B”. If so, then there’s yet another reason to worry that the results Rudder describes are produced in part by the feedback loop engendered by the recommendation engine. But he doesn’t explain how his data is collected, how it is prompted, or the exact words that are used.
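
    A minimal sketch of that kind of co-like recommender follows (purely hypothetical; Rudder never describes OKCupid’s actual algorithm). If suggestions are driven by which profiles are liked together, early ratings feed back into which profiles get shown, and therefore rated, next.

        # Hypothetical "people who liked A also liked B" recommender.
        # Illustrative only -- not OKCupid's algorithm, which the book never specifies.
        from collections import defaultdict

        likes = {            # made-up data: user -> set of profiles they rated highly
            "u1": {"A", "B"},
            "u2": {"A", "B", "C"},
            "u3": {"A", "C"},
        }

        def co_liked_with(profile):
            """Rank other profiles by how often they are liked alongside `profile`."""
            counts = defaultdict(int)
            for liked in likes.values():
                if profile in liked:
                    for other in liked - {profile}:
                        counts[other] += 1
            return sorted(counts.items(), key=lambda kv: -kv[1])

        # Profiles that co-occur with "A" get recommended (and thus rated) more often,
        # which is the feedback loop worried about above.
        print(co_liked_with("A"))   # [('B', 2), ('C', 2)]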

    Here’s a clue that Rudder is confused by his own facile interpretations: men and women both state that they are looking for relationships with people around their own age or slightly younger, and they end up messaging people slightly younger than they are, but not many, many years younger. So forty-year-old men do not message twenty-year-old women.

    Is this sad sexual frustration? Is this, in Rudder’s words, the difference between what they claim they want and what they really want behind closed doors? Not at all. This is more likely the difference between how we live our fantasies and how we actually realistically see our future.

    Need to Control for Population

    Here’s another frustrating bit from the book: Rudder talks about how hard it is for older people to get a date, but he doesn’t correct for population. And since he never tells us how many OKCupid users are older, nor does he compare his users to the census, I cannot make that correction myself.

    Here’s a graph from Rudder’s book showing the age of men who respond to women’s profiles of various ages:

    dataclysm chart 1

    We’re meant to be impressed with Rudder’s line, “for every 100 men interested in that twenty year old, there are only 9 looking for someone thirty years older.” But here’s the thing, maybe there are 20 times as many 20-year-olds as there are 50-year-olds on the site? In which case, yay for the 50-year-old chicks? After all, those histograms look pretty healthy in shape, and they might be differently sized because the population size itself is drastically different for different ages.
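
    To make the objection concrete, here is the arithmetic with invented numbers; the book never reports how many women of each age are on the site, so these denominators are purely illustrative.

        # Rudder reports raw counts of interested men; he never gives the number of
        # women of each age on the site. These population figures are made up.
        interested_in_20yo = 100   # "100 men interested in that twenty year old"
        interested_in_50yo = 9     # "only 9 looking for someone thirty years older"

        women_20, women_50 = 2000, 100   # hypothetical: 20x as many 20-year-olds

        rate_20 = interested_in_20yo / women_20   # 0.05 interested men per woman
        rate_50 = interested_in_50yo / women_50   # 0.09 interested men per woman

        # With these invented denominators, per-capita interest is actually higher
        # for the 50-year-olds; raw counts alone cannot settle the question.
        print(rate_20, rate_50)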

    Confounding

    One of the worst examples of statistical mistakes is his analysis of the experiment in turning off pictures. Rudder ignores the concept of confounders altogether, a concept of which, miraculously, he is again aware in the next chapter on race.

    To be more precise, Rudder talks about the experiment when OKCupid turned off pictures. Most people went away when this happened but certain people did not:

    dataclysm chart 2

    Some of the people who stayed on went on a “blind date.” Those people, whom Rudder calls the “intrepid few,” had a good time with people no matter how unattractive those people were deemed to be by OKCupid’s system of rating attractiveness. His conclusion: people are preselecting for attractiveness, which is actually unimportant to them.

    But here’s the thing: that’s only true for people who were willing to go on blind dates. What he’s done is select for people who are not superficial about looks, and then collect data that suggests they are not superficial about looks. That doesn’t mean that OKCupid users as a whole are not superficial about looks. The ones who are superficial simply got the hell out when the pictures went dark.
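
    A toy simulation makes the selection effect plain. The parameters below are invented for illustration; nothing here comes from OKCupid’s data.

        # Toy simulation of the selection effect described above.
        # All parameters are invented; this is not OKCupid data.
        import random
        random.seed(0)

        users = [{"superficial": random.random() < 0.7} for _ in range(10_000)]

        # Suppose that when pictures go dark, superficial users mostly leave
        # while non-superficial users mostly stay.
        stayed = [
            u for u in users
            if random.random() < (0.05 if u["superficial"] else 0.6)
        ]

        frac_superficial_all = sum(u["superficial"] for u in users) / len(users)
        frac_superficial_stayed = sum(u["superficial"] for u in stayed) / len(stayed)

        # The "intrepid few" who stayed are far less superficial than the site
        # as a whole, so their blind-date behavior says little about typical users.
        print(round(frac_superficial_all, 2), round(frac_superficial_stayed, 2))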

    Race

    This brings me to the most interesting part of the book, where Rudder explores race. Again, it ends up being too blunt by far.

    Here’s the thing. Race is a big deal in this country, and racism is a heavy criticism to be firing at people, so you need to be careful, and that’s a good thing, because it’s important. The way Rudder throws it around is careless, and he risks rendering the term meaningless by not having a careful discussion. The frustrating part is that I think he actually has the data to have a very good discussion, but he just doesn’t make the case the way it’s written.

    Rudder pulls together stats on how men of all races rate women of all races on an attractiveness scale of 1-5. It shows that non-black men find women of their own race attractive and find black women, in general, less attractive. Interesting, especially when you immediately follow that up with similar stats from other U.S. dating sites and – most importantly – with the fact that outside the U.S., we do not see this pattern. Unfortunately that crucial fact is buried at the end of the chapter, and instead we get this embarrassing quote right after the opening stats:

    And an unintentionally hilarious 84 percent of users answered this match question:

    Would you consider dating someone who has vocalized a strong negative bias toward a certain race of people?

    in the absolute negative (choosing “No” over “Yes” and “It depends”). In light of the previous data, that means 84 percent of people on OKCupid would not consider dating someone on OKCupid.

    Here Rudder just completely loses me. Am I “vocalizing” a strong negative bias towards black women if I am a white man who finds white women and Asian women hot?

    Especially if you consider that, as consumers of social platforms and sites like OKCupid, we are trained to rank all the products we come across in order to ultimately get better offerings, it is a step too far for the detective on the other side of the camera to turn around and point fingers at us for doing what we’re told. Indeed, this sentence plunges Rudder’s narrative deep into creepy and provocative territory, and he never fully returns, nor does he seem to want to. Rudder seems to confuse provocation for thoughtfulness.

    This is, again, a shame. The issues of what we are attracted to, what we can imagine doing, how we might imagine that will look to our wider audience, and how our culture informs those imaginings are all in play here, and a careful conversation could have drawn them out in a non-accusatory and much more useful way.


    _____

    Cathy O’Neil is a data scientist and mathematician with experience in academia and the online ad and finance industries. She is one of the most prominent and outspoken women working in data science today, and was one of the guiding voices behind Occupy Finance, a book produced by the Occupy Wall Street Alt Banking group. She is the author of “On Being a Data Skeptic” (Amazon Kindle, 2013), and co-author with Rachel Schutt of Doing Data Science: Straight Talk from the Frontline (O’Reilly, 2013). Her Weapons of Math Destruction is forthcoming from Random House. She appears on the weekly Slate Money podcast hosted by Felix Salmon. She maintains the widely-read mathbabe blog, on which this review first appeared.


  • The Eversion of the Digital Humanities

    The Eversion of the Digital Humanities

    by Brian Lennon

    on The Emergence of the Digital Humanities by Steven E. Jones

    1

    Steven E. Jones begins his Introduction to The Emergence of the Digital Humanities (Routledge, 2014) with an anecdote concerning a speaking engagement at the Illinois Institute of Technology in Chicago. “[M]y hosts from the Humanities department,” Jones tells us,

    had also arranged for me to drop in to see the fabrication and rapid-prototyping lab, the Idea Shop at the University Technology Park. In one empty room we looked into, with schematic drawings on the walls, a large tabletop machine jumped to life and began whirring, as an arm with a router moved into position. A minute later, a student emerged from an adjacent room and adjusted something on the keyboard and monitor attached by an extension arm to the frame for the router, then examined an intricately milled block of wood on the table. Next door, someone was demonstrating finely machined parts in various materials, but mostly plastic, wheels within bearings, for example, hot off the 3D printer….

    What exactly, again, was my interest as a humanist in taking this tour, one of my hosts politely asked?1

    It is left almost entirely to more or less clear implication, here, that Jones’s humanities department hosts had arranged the expedition at his request, and mainly or even only to oblige a visitor’s unusual curiosity, which we are encouraged to believe his hosts (if “politely”) found mystifying. Any reader of this book must ask herself, first, if she believes this can really have occurred as reported: and if the answer to that question is yes, if such a genuinely unlikely and unusual scenario — the presumably full-time, salaried employees of an Institute of Technology left baffled by a visitor’s remarkable curiosity about their employer’s very raison d’être — warrants any generalization at all. For that is how Jones proceeds: by generalization, first of all from a strained and improbably dramatic attempt at defamiliarization, in the apparent confidence that this anecdote illuminating the spirit of the digital humanities will charm — whom, exactly?

    It must be said that Jones’s history of “digital humanities” is refreshingly direct and initially, at least, free of obfuscation, linking the emergence of what it denotes to events in roughly the decade preceding the book’s publication, though his reading of those events is tendentious. It was the “chastened” retrenchment after the dot-com bubble in 2000, Jones suggests (rather, just for example, than the bubble’s continued inflation by other means) that produced the modesty of companies like our beloved Facebook and Twitter, along with their modest social networking platform-products, as well as the profound modesty of Google Inc. initiatives like Google Books (“a development of particular interest to humanists,” we are told2) and Google Maps. Jones is clearer-headed when it comes to the disciplinary history of “digital humanities” as a rebaptism of humanities computing and thus — though he doesn’t put it this way — a catachrestic asseveration of traditional (imperial-nationalist) philology like its predecessor:

    It’s my premise that what sets DH apart from other forms of media studies, say, or other approaches to the cultural theory of computing, ultimately comes through its roots in (often text-based) humanities computing, which always had a kind of mixed-reality focus on physical artifacts and archives.3

    Jones is also clear-headed on the usage history of “digital humanities” as a phrase in the English language, linking it to moments of consolidation marked by Blackwell’s Companion to Digital Humanities, the establishment of the National Endowment for the Humanities Office for the Digital Humanities, and higher-education journalism covering the annual Modern Language Association of America conventions. It is perhaps this sensitivity to “digital humanities” as a phrase whose roots lie not in original scholarship or cultural criticism itself (as was still the case with “deconstruction” or “postmodernism,” even at their most shopworn) but in the dependent, even parasitic domains of reference publishing, grant-making, and journalism that leads Jones to declare “digital humanities” a “fork” of humanities computing, rather than a Kuhnian paradigm shift marking otherwise insoluble structural conflict in an intellectual discipline.

    At least at first. Having suggested it, Jones then discards the metaphor drawn from the tree structures of software version control, turning to “another set of metaphors” describing the digital humanities as having emerged not “out of the primordial soup” but “into the spotlight” (Jones, 5). We are left to guess at the provenance of this second metaphor, but its purpose is clear: to construe the digital humanities, both phenomenally and phenomenologically, as the product of a “shift in focus, driven […] by a new set of contexts, generating attention to a range of new activities” (5).

    Change; shift; new, new, new. Not a branch or a fork, not even a trunk: we’re now in the ecoverse of history and historical time, in its collision with the present. The appearance and circulation of the English-language phrase “digital humanities” can be documented — that is one of the things that professors of English like Jones do especially well, when they care to. But “changes in the culture,” much more broadly, within only the last ten years or so? No scholar in any discipline is particularly well trained, well positioned, or even well suited to diagnosing those; and scholars in English studies won’t be at the top of anyone’s list. Indeed, Jones very quickly appeals to “author William Gibson” for help, settling on the emergence of the digital humanities as a response to what Gibson called “the eversion of cyberspace,” in its ostensibly post-panopticist colonization of the physical world.4 It makes for a rather inarticulate and self-deflating statement of argument, in which on its first appearance eversion, ambiguously, appears to denote the response as much as its condition or object:

    My thesis is simple: I think that the cultural response to changes in technology, the eversion, provides an essential context for understanding the emergence of DH as a new field of study in the new millennium.5

    Jones offers weak support for the grandiose claim that “we can roughly date the watershed moment when the preponderant collective perception changed to 2004–2008” (21). Second Life “peaked,” we are told, while World of Warcraft “was taking off”; Nintendo introduced the Wii; then Facebook “came into its own,” and was joined by Twitter and Foursquare, then Apple’s iPhone. Even then (and setting aside the question of whether such benchmarking is acceptable evidence), for the most part Jones’s argument, such as it is, is that something is happening because we are talking about something happening.

    But who are we? Jones’s is the typical deference of the scholar to the creative artist, unwilling to challenge the latter’s utter dependence on meme engineering, at least where someone like Gibson is concerned; and Jones’s subsequent turn to the work of a scholar like N. Katherine Hayles on the history of cybernetics comes too late to amend the impression that the order of things here is marked first by gadgets, memes, and conversations about gadgets and memes, and only subsequently by ideas and arguments about ideas. The generally unflattering company among whom Hayles is placed (Clay Shirky, Nathan Jurgenson) does little to move us out of the shallows, and Jones’s profoundly limited range of literary reference, even within a profoundly narrowed frame — it’s Gibson, Gibson, Gibson all the time, with the usual cameos by Bruce Sterling and Neal Stephenson — doesn’t help either.

    Jones does have one problem with the digital humanities: it ignores games. “My own interest in games met with resistance from some anonymous peer reviewers for the program for the DH 2013 conference, for example,” he tells us (33). “[T]he digital humanities, at least in some quarters, has been somewhat slow to embrace the study of games” (59). “The digital humanities could do worse than look to games” (36). And so on: there is genuine resentment here.

    But nobody wants to give a hater a slice of the pie, and a Roman peace mandates that such resentment be sublated if it is to be, as we say, taken seriously. And so in a magical resolution of that tension, the digital humanities turns out to be constituted by what it accidentally ignores or actively rejects, in this case — a solution that sweeps antagonism under the rug as we do in any other proper family. “[C]omputer-based video games embody procedures and structures that speak to the fundamental concerns of the digital humanities” (33). “Contemporary video games offer vital examples of digital humanities in practice” (59). If gaming “sounds like what I’ve been describing as the agenda of the digital humanities, it’s no accident” (144).

    Some will applaud Jones’s niceness on this count. It may strike others as desperately friendly, a lingering under a big tent as provisional as any other tent, someday to be replaced by a building, if not by nothing. Few of us will deny recognition to Second Life, World of Warcraft, Wii, Facebook, Twitter, etc. as cultural presences, at least for now. But Jones’s book is also marked by slighter and less sensibly chosen benchmarks, less sensibly chosen because Jones’s treatment of them, in a book whose ambition is to preach to the choir, simply imputes their cultural presence. Such brute force argument drives the pathos that Jones surely feels, as a scholar — in the recognition that among modern institutions, it is only scholarship and the law that preserve any memory at all — into a kind of melancholic unconscious, from whence his objects return to embarrass him. “[A]s I write this,” we read, “QR codes show no signs yet of fading away” (41). Quod erat demonstrandum.

    And it is just there, in such a melancholic unconscious, that the triumphalism of the book’s title, and the “emergence of the digital humanities” that it purports to mark, claim, or force into recognition, straightforwardly gives itself away. For the digital humanities will pass away, and rather than being absorbed into the current order of things, as digital humanities enthusiasts like to believe happened to “high theory” (it didn’t happen), the digital humanities seems more likely, at this point, to end as a blank anachronism, overwritten by the next conjuncture in line with its own critical mass of prognostications.

    2

    To be sure, who could deny the fact of significant “changes in the culture” since 2000, in the United States at least, and at regular intervals: 2001, 2008, 2013…? Warfare — military in character, but when that won’t do, economic; of any interval, but especially when prolonged and deliberately open-ended; of any intensity, but especially when flagrantly extrajudicial and opportunistically, indeed sadistically asymmetrical — will do that to you. No one who sets out to historicize the historical present can afford to ignore the facts of present history, at the very least — but the fact is that Jones finds such facts unworthy of comment, and in that sense, for all its pretense to worldliness, The Emergence of the Digital Humanities is an entirely typical product of the so-called ivory tower, wherein arcane and plain speech alike are crafted to euphemize and thus redirect and defuse the conflicts of the university with other social institutions, especially those other institutions who command the university to do this or do that. To take the ambiguity of Jones’s thesis statement (as quoted above) at its word: what if the cultural response that Jones asks us to imagine, here, is indeed and itself the “eversion” of the digital humanities, in one of the metaphorical senses he doesn’t quite consider: an autotomy or self-amputation that, as McLuhan so enjoyed suggesting in so many different ways, serves to deflect the fact of the world as a whole?

    There are few moments of outright ignorance in The Emergence of the Digital Humanities — how could there be, in the security of such a narrow channel?6 Still, pace Jones’s basic assumption here (it is not quite an argument), we might understand the emergence of the digital humanities as the emergence of a conversation that is not about something — cultural change, etc. — as much as it is an attempt to avoid conversing about something: to avoid discussing such cultural change in its most salient and obvious flesh-and-concrete manifestations. “DH is, of course, a socially constructed phenomenon,” Jones tells us (7) — yet “the social,” here, is limited to what Jones himself selects, and selectively indeed. “This is not a question of technological determinism,” he insists. “It’s a matter of recognizing that DH emerged, not in isolation, but as part of larger changes in the culture at large and that culture’s technological infrastructure” (8). Yet the largeness of those larger changes is smaller than any truly reasonable reader, reading any history of the past decade, might have reason to expect. How pleasant that such historical change was “intertwined with culture, creativity, and commerce” (8) — not brutality, bootlicking, and bank fraud. Not even the modest and rather opportunistic gloom of Gibson’s 2010 New York Times op-ed entitled “Google’s Earth” finds its way into Jones’s discourse, despite the extended treatment that Gibson’s “eversion” gets here.

    From our most ostensibly traditional scholarly colleagues, toiling away in their genuine and genuinely book-dusty modesty, we don’t expect much respect for the present moment (which is why they often surprise us). But The Emergence of the Digital Humanities is, at least in ambition, a book about cultural change over the last decade. And such historiographic elision is substantive — enough so to warrant impatient response. While one might not want to say that nothing good can have emerged from the cultural change of the period in question, it would be infantile to deny that conditions have been unpropitious in the extreme, possibly as unpropitious as they have ever been, in U.S. postwar history — and that claims for the value of what emerges into institutionality and institutionalization, under such conditions, deserve extra care and, indeed defense in advance, if one wants not to invite a reasonably caustic skepticism.

    When Jones does engage in such defense, it is weakly argued. To construe the emergence of the digital humanities as non-meaninglessly concurrent with the emergence of yet another wave of mass educational automation (in the MOOC hype that crested in 2013), for example, is wrong not because Jones can demonstrate that their concurrence is the concurrence of two entirely segregated genealogies — one rooted in Silicon Valley ideology and product marketing, say, and one utterly and completely uncaused and untouched by it — but because to observe their concurrence is “particularly galling” to many self-identified DH practitioners (11). Well, excuse me for galling you! “DH practitioners I know,” Jones informs us, “are well aware of [the] complications and complicities” of emergence in an age of precarious labor, “and they’re often busy answering, complicating, and resisting such opportunistic and simplistic views” (10). Argumentative non sequitur aside, that sounds like a lot of work undertaken in self-defense — more than anyone really ought to have to do, if they’re near to the right side of history. Finally, “those outside DH,” Jones opines in an attempt at counter-critique, “often underestimate the theoretical sophistication of many in computing,” who “know better than many of their humanist critics that their science is provisional and contingent” (10): a statement that will only earn Jones super-demerits from those of such humanist critics — they are more numerous than the likes of Jones ever seem to suspect — who came to the humanities with scientific and/or technical aptitudes, sometimes with extensive educational and/or professional training and experience, and whose “sometimes world-weary and condescending skepticism” (10) is sometimes very well-informed and well-justified indeed, and certain to outlive Jones’s winded jabs at it.

    Jones is especially clumsy in confronting the charge that the digital humanities is marked by a forgetting or evasion of the commitment to cultural criticism foregrounded by other, older and now explicitly competing formations, like so-called new media studies. Citing the suggestion by “media scholar Nick Montfort” that “work in the digital humanities is usually considered to be the digitization and analysis of pre-digital cultural artifacts, not the investigation of contemporary computational media,” Jones remarks that “Montfort’s own work […] seems to me to belie the distinction,”7 as if Montfort — or anyone making such a statement — were simply deluded about his own work, or about his experience of a social economy of intellectual attention under identifiably specific social and historical conditions, or else merely expressing pain at being excluded from a social space to which he desired admission, rather than objecting on principle to a secessionist act of imagination.8

    3

    Jones tells us that he doesn’t “mean to gloss over the uneven distribution of [network] technologies around the world, or the serious social and political problems associated with manufacturing and discarding the devices and maintaining the server farms and cell towers on which the network depends” — but he goes ahead and does it anyway, and without apology or evident regret. “[I]t’s not my topic in this book,” we are told, “and I’ve deliberately restricted my focus to the already-networked world” (3). The message is clear: this is a book for readers who will accept such circumscription, in what they read and contemplate. Perhaps this is what marks the emergence of the digital humanities, in the re-emergence of license for restrictive intellectual ambition and a generally restrictive purview: a bracketing of the world that was increasingly discredited, and discredited with increasing ferocity, just by the way, in the academic humanities in the course of the three decades preceding the first Silicon Valley bubble. Jones suggests that “it can be too easy to assume a qualitative hierarchical difference in the impact of networked technology, too easy to extend the deeper biases of privilege into binary theories of the global ‘digital divide’” (4), and one wonders what authority to grant to such a pronouncement when articulated by someone who admits he is not interested, at least in this book, in thinking about how an — how any — other half lives. It’s the latter, not the former, that is the easy choice here. (Against a single, entirely inconsequential squib in Computer Business Review entitled “Report: Global Digital Divide Getting Worse,” an almost obnoxiously perfunctory footnote pits “a United Nations Telecoms Agency report” from 2012. This is not scholarship.)

    Thus it is that, read closely, the demand for finitude in the one capacity in which we are non-mortal — in thought and intellectual ambition — and the more or less cheerful imagination of an implied reader satisfied by such finitude, become passive microaggressions aimed at another mode of the production of knowledge, whose expansive focus on a theoretical totality of social antagonism (what Jones calls “hierarchical difference”) and justice (what he calls “binary theories”) makes the author of The Emergence of the Digital Humanities uncomfortable, at least on its pages.

    That’s fine, of course. No: no, it’s not. What I mean to say is that it’s unfair to write as if the author of The Emergence of the Digital Humanities alone bears responsibility for this particular, certainly overdetermined state of affairs. He doesn’t — how could he? But he’s getting no help, either, from most of those who will be more or less pleased by the title of his book, and by its argument, such as it is: because they want to believe they have “emerged” along with it, and with that tension resolved, its discomforts relieved. Jones’s book doesn’t seriously challenge that desire, its (few) hedges and provisos notwithstanding. If that desire is more anxious now than ever, as digital humanities enthusiasts find themselves scrutinized from all sides, it is with good reason.
    _____

    Brian Lennon is Associate Professor of English and Comparative Literature at Pennsylvania State University and the author of In Babel’s Shadow: Multilingual Literatures, Monolingual States (University of Minnesota Press, 2010).
    _____

    notes:
    1. Jones, 1.

    2. Jones, 4. “Interest” is presumed to be affirmative, here, marking one elision of the range of humanistic critical and scholarly attitudes toward Google generally and the Google Books project in particular. And of the unequivocally less affirmative “interest” of creative writers as represented by the Authors Guild, just for example, Jones has nothing to say: another elision.

    3. Jones, 13.

    4. See Gibson.

    5. Jones, 5.

    6. As eager as any other digital humanities enthusiast to accept Franco Moretti’s legitimation of DH, but apparently incurious about the intellectual formation, career and body of work that led such a big fish to such a small pond, Jones opines that Moretti’s “call for a distant reading” stands “opposed to the close reading that has been central to literary studies since the late nineteenth century” (Jones, 62). “Late nineteenth century” when exactly, and where (and how, and why)? one wonders. But to judge by what Jones sees fit to say by way of explanation — that is, nothing at all — this is mere hearsay.

    7. Jones, 5. See also Montfort.

    8. As further evidence that Montfort’s statement is a mischaracterization or expresses a misunderstanding, Jones suggests the fact that “[t]he Electronic Literature Organization itself, an important center of gravity for the study of computational media in which Montfort has been instrumental, was for a time housed at the Maryland Institute for Technology in the Humanities (MITH), a preeminent DH center where Matthew Kirschenbaum served as faculty advisor” (Jones, 5–6). The non sequiturs continue: “digital humanities” includes the study of computing and media because “self-identified practitioners doing DH” study computing and media (Jones, 6); the study of computing and media is also “digital humanities” because the study of computing and digital media might be performed at institutions like MITH or George Mason University’s Roy Rosenzweig Center for History and New Media, which are “digital humanities centers” (although the phrase “digital humanities” appears nowhere in their names); “digital humanities” also adequately describes work in “media archaeology” or “media history,” because such work has “continued to influence DH” (Jones, 6); new media studies is a component of the digital humanities because some scholars suggest it is so, and others cannot be heard to object, at least after one has placed one’s fingers in one’s ears; and so on.

    (feature image: “Bandeau – Manifeste des Digital Humanities,” uncredited; originally posted on flickr.)

  • The People’s Platform by Astra Taylor

    The People’s Platform by Astra Taylor


    Or is it?: Astra Taylor’s The People’s Platform

    Review by Zachary Loeb

    ~

    Imagine not using the Internet for twenty-four hours.

    Really: no Internet from dawn to dawn.

    Take a moment to think through the wide range of devices you would have to turn off and services you would have to avoid to succeed in such a challenge. While a single day without going online may not represent too outlandish an ordeal, such an endeavor would still require some social and economic gymnastics. From the way we communicate with friends to the way we order food to the way we turn in assignments for school or complete tasks in our jobs – our lives have become thoroughly entangled with the Internet. Whether its power and control are overt or subtle, the Internet has come to wield an impressive amount of influence over our lives.

    All of which should serve to raise a discomforting question – so, who is in control of the Internet? Is the Internet a fantastically democratic space that puts the power back in the hands of people? Is the Internet a sly mechanism for vesting more power in the hands of the already powerful, whilst distracting people with a steady stream of kitschy content and discounted consumerism? Or, is the Internet a space relying on levels of oft-unseen material infrastructures with a range of positive and negative potentialities? These are the questions that Astra Taylor attempts to untangle in her book The People’s Platform: Taking Back Power and Culture in the Digital Age (Metropolitan Books, 2014). It is the rare example of a book where the title itself forms a thesis statement of sorts: the Internet was and can be a platform for the people but this potential has been perverted, and thus there needs to be a “taking back” of power (and culture).

    At the outset Taylor locates her critique in the space between the fawning of the “techno-optimists” and the grousing of the “techno-skeptics.” Far from trying to assume a “neutral” stance, Taylor couches her discussion of the “techno” by stepping back to consider the social, political, and economic forces that shape the “techno” reality that inspires optimism and skepticism. Taylor, therefore, does not build her argument upon a discussion of the Internet as such, but around a discussion of the Internet as it is and as it could be. Unfortunately, the “as it currently is” of this “new media” shows that “Corporate power and the quest for profit are as fundamental to new media as old” (8).

    Thus Taylor sets up the conundrum of the Internet – it is at once a media platform with a great deal of democratic potential, and yet this potential has been continually appropriated for bureaucratic, technocratic, and indeed plutocratic purposes.

    Over the course of The People’s Platform Taylor moves from one aspect of the Internet (and its related material infrastructures) to another – touching upon a range of issues, from the Internet’s history, to copyright and the way it has undermined the ability of “cultural creators” to earn a living, to the ways the Internet persuades and controls, to journalism and e-waste, to the ways in which the Internet can replicate the misogyny and racism of the offline world.

    With her background as a documentary filmmaker (she directed the film The Examined Life [which is excellent]) Taylor is skilled in cutting deftly from one topic to the next, though this particular experience also gives her cause to dwell at length upon the matter of how culture is created and supported in the digital age. Indeed as a maker of independent films Taylor is particularly attuned to the challenges of making culturally valuable content in a time when free copies spread rapidly on-line. Here too Taylor demonstrates the link to larger economic forces – there are still highly successful “stars” and occasional stories of “from nowhere” success, but the result is largely that those attempting to eke out a nominal subsistence find it increasingly challenging to do so.

    As the Internet becomes the principal means of disseminating material, “cultural creators” find themselves bound to a system wherein the ultimate remuneration rarely accrues back to them. Likewise, the rash of profit-driven mergers and shifting revenue streams has resulted in a steady erosion of the journalistic field. It is not – as Taylor argues – that there is a lack of committed “cultural creators” and journalists working today; it is that they are finding it increasingly difficult to sustain their efforts. The Internet, as Taylor describes it, is certainly making many people enormously wealthy, but those made wealthy are more likely to be platform owners (think Google or Facebook) than those who fill those platforms with the informational content that makes them valuable.

    Though the Internet may have its roots in massive public investment, and though the value of the Internet is a result of the labor of Internet users (example: Facebook makes money by selling advertisements based on the work you put in on your profile), the Internet as it is now is often less an alternative to society than a replication of it. The biases of the offline world are replicated in the digital realm, as Taylor puts it:

    “While the Internet offers marginalized groups powerful and potentially world-changing opportunities to meet and act together, new technologies also magnify inequality, reinforcing elements of the old order. Networks do not eradicate power: they distribute it in different ways, shuffling hierarchies and producing new mechanisms of exclusion.” (108)

    Thus, the Internet – often under the guise of promoting anonymity – can be a site for an explosion of misogyny, racism, classism, and an elitism blossoming from a “more-technologically-skilled-than-thou” position. There are certainly many “marginalized groups” and individuals trying to use the Internet to battle their historical silencing, but for every social justice minded video there is a comment section seething with the grunts of trolls. Meanwhile behind this all stand the same wealthy corporate interests that enjoyed privileged positions before the rise of the Internet. These corporate forces can wield the power they gain from the Internet to steer and persuade Internet users in such a way that the “curated experience” of the Internet is increasingly another way of saying, “what a major corporation thinks you (should) want.”


    Breaking through the ethereal airs of the Internet, Taylor also grounds her argument in the material realities of the digital realm. While it is true that more and more people are online, Taylor emphasizes that there are still many without access and that the high-speed access enjoyed by some is not had by one and all. Furthermore, all of this access, all of these fanciful devices, all of these democratic dreams are reliant upon a physical infrastructure shot through with dangerous mining conditions, wretched laboring facilities, and toxic dumps where discarded devices eventually go to decay. Those who are able to enjoy the Internet as a positive feature in their day-to-day lives are rarely the same people who worked in the mines, the assembly plants, or who will have to live on the land that has been blighted by e-waste.

    While Taylor refuses to ignore the many downsides associated with the Internet age, she remains fixed on its positive potential. The book concludes without offering a simplistic list of solutions but nevertheless ends with a sense that those who care about the Internet’s non-corporate potential need to work to build a “sustainable digital future” (183). Though there are certainly powerful interests profiting from the current state of the Internet, the fact remains that (in a historical sense) the Internet is rather young, and there is still time to challenge the shape it is taking. Considering what needs to be done, Taylor notes: “The solutions we need require collective, political action.” (218)

    It is a suggestion that carries the sentiment that people can band together to reassert control over the online commons that are steadily being enclosed by corporate interests. By considering the Internet as a public utility (a point being discussed at the moment in regard to Net Neutrality) and by focusing on democratic values instead of financial values, it may be possible for people to reverse (or at least slow) the corporate wave that is washing over the Internet.

    After all, if the Internet is the result of massive public investment, why has it been delivered into corporate hands? Ultimately, Taylor concludes (in a chapter titled “In Defense of the Commons: A Manifesto for Sustainable Culture”) that if people want the Internet to be a “people’s platform” they will have to organize and fight for it (“collective, political”). In a time when the Internet is an important feature of society, it makes a difference whether the Internet is an open “people’s platform” or a highly (if subtly) controlled corporate theme park. “The People’s Platform” requires people who care to raise their voices…such as the people who have read Astra Taylor’s book, perhaps.

    * * * * *

    With The People’s Platform Astra Taylor has made an effective and interesting contribution to the discussion around the nature of the Internet and its future. By emphasizing a political and economic critique she is able to pull the Internet away from a utopian fantasy in order to analyze it in terms of the competing forces that have shaped (and continue to shape) it. The perspective that Taylor brings, as a documentary filmmaker, allows her to drop the journalistic façade of objectivity in order to genuinely and forcefully engage with issues pertaining to the compensation of cultural creators in the age of digital dissemination. The sections on the level of misogyny one encounters online and on e-waste make the book particularly noteworthy. Though each chapter of The People’s Platform could likely be extended into an entire book, it is in their interconnections that Taylor is able to demonstrate the layers of intertwined issues that are making such a mess of the Internet today. For the problem facing the online realm is not just corporate control – it is a slew of issues that need to be recognized in total (and in their interconnected nature) if any type of response is to be mounted.

    Though The People’s Platform is ostensibly about a conflict regarding the future of the Internet, the book is itself a site of conflicting sentiments. Though Taylor – at the outset – aims to avoid aligning herself with the “cheerleaders of progress” or “the prophets of doom” (4), the book that emerges is one that sits in the stands of the “cheerleaders of progress” (even if with slight misgivings about being in those stands). The book’s title suggests that even with all of the problems associated with the Internet it still represents something promising, something worth fighting to “take back.” It is a point that is particularly troublesome to consider after Taylor’s description of labor conditions and e-waste. For one of the main questions that emerges towards the end of Taylor’s book – though it is not one she directly poses – renders the book’s title problematic: which “people” are being described in “the people’s platform”?


    It may be tempting to answer such a question with a simplistic “well, all of the people,” yet such a response is inadequate in light of the way that Taylor’s book clearly discusses the layers of control and dominance one finds surrounding the Internet. Can the Internet be “the people’s platform” for writers, journalists, documentary filmmakers, and activists with access to digital tools? Sure. But what of those described in the e-waste chapter – people living in oppressive conditions and toiling in factories where building digital devices puts them at risk of cancer, or where disassembling such devices poisons them and their families? Those people count as well, but those upon whom “the people’s platform” is built seem to be crushed beneath it, not able to get on top of it – to stand on “the people’s platform” is to stand on the hunched shoulders of others. It is true that Taylor takes this into account in emphasizing that something needs to be done to recognize and rectify this matter – but insofar as the material tools “the people” use to reach the Internet are built upon the repression and oppression of other people, it sours the very notion of the Internet as “the people’s platform.”

    This in turn raises another question: what would a genuine “people’s platform” look like? In the conclusion to the book Taylor attempts to answer this question by arguing for political action and increased democratic control over the Internet; however, one can easily imagine classifying the Internet as a “public utility” without doing anything to change the laboring conditions of those who build devices. Indeed, the darkly amusing element of The People’s Platform is that Taylor answers this question brilliantly on the second page of her book and then spends the following two hundred and thirty pages ignoring this answer.

    Taylor begins The People’s Platform with an anecdote about her youth in the pre-Internet (or pre-high-speed Internet) era, wherein she recalls working on a small personally assembled magazine (a “zine”) which she would have printed and then distribute to friends and a variety of local shops. Looking back upon her time making zines, Taylor writes:

    “Today any kid with a smartphone and a message has the potential to reach more people with the push of a button than I did during two years of self-publishing.” (2)

    These lines from Taylor come only a sentence after she considers how her access to easy photocopying (for her zine) made it easier for her than it had been for earlier would-be publishers. Indeed, Taylor recalls:

    “a veteran political organizer told me how he and his friends had to sell blood in order to raise the funds to buy a mimeograph machine so they could make a newsletter in the early sixties.” (2)

    There are a few subtle moments in the above lines (from the second page of Taylor’s book) that say far more about a “people’s platform” than they let on. It is true that a smartphone gives a person “the potential to reach more people,” but as the rest of Taylor’s book makes clear, it is not necessarily the case that people really do “reach more people” online. There are certainly wild success stories, but for “any kid” their reach with their smartphone may not be much greater than the number of people reachable with a photocopied zine. Furthermore, the zine audience might have been more engaged and receptive than the idle scanner of Tweets or Facebook updates – the smartphone may deliver more potential but actually achieve less.

    Nevertheless, the key aspect is Taylor’s comment about the “veteran political organizer” – this organizer (“and his friends”) were able to “buy a mimeograph machine so they could make a newsletter.” Is this different from buying a laptop computer, Internet access, and a domain name? Actually? Yes. Yes, it is. For once those newsletter makers bought the mimeograph machine, they were in control of it – they did not need to worry about its Terms of Service changing, about pop-up advertisements, about their movements being tracked through the device, about the NSA having installed a convenient backdoor – and frankly there’s a good chance that the mimeograph machine they purchased had a much longer life than any laptop they would purchase today. Again – they bought and were able to control the means for disseminating their message, whereas one cannot truly buy all of the means necessary for disseminating an online message (once one includes cables, ISPs, and so on).

    The case of the mimeograph machine and the Internet raises the question of which types of technologies represent genuine people’s platforms and which offer only potential “people’s platforms” (note the quotation marks). This is not to say that mimeograph machines are perfect (after all, somebody did build that machine), but when considering technology in a democratic sense it is important to puzzle over whether or not (to borrow Lewis Mumford’s terminology) the tool itself is “authoritarian” or “democratic.” The way the Internet appears in Taylor’s book – with its massive infrastructure, propensity for centralized control, and material reality built upon toxic materials – should at the very least make one question to what extent the Internet is genuinely a democratic “people’s” tool, or whether it is simply such a tool for those who are able to enjoy the bulk of the benefits and a minimum of the downsides. Taylor clearly does not want to be accused of being a “prophet of doom” – or of being a prophet for profit – but the sad result is that she jumps over the genuine people’s platform she describes on the second page in favor of building an argument for a platform that, by book’s end, hardly seems to be one for “the people” in any but a narrow sense of “the people.”

    The People’s Platform: Taking Back Power and Culture in the Digital Age is a well-written, solidly researched, and effectively argued book that raises many valuable questions. The book offers no simplistic panaceas but instead forces the reader to think through the issues – oftentimes by forcing them to confront uncomfortable facts about digital technologies (such as e-waste). As Taylor uncovers and discusses issue after bias after challenge regarding the Internet, the question that haunts her text is whether the platform she is describing – the Internet – is really worthy of being called “The People’s Platform.” And if so, to which “people” does this apply?

    The People’s Platform is well worth reading – but it is not the end of the conversation. It is the beginning of the conversation.

    And it is a conversation that is desperately needed.

    __

    The People’s Platform: Taking Back Power and Culture in the Digital Age
    by Astra Taylor
    Metropolitan Books, 2014

    __

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, alternative forms of technology, and libraries as models of resistance. Using the moniker “The Luddbrarian” Loeb writes at the blog librarianshipwreck, which is where this review originally appeared.

  • Cinema & Painting

    Cinema & Painting



    Michelle Menzies and b2er Daniel Morgan, with The Adam Art Gallery Te Pātaka Toi, present Cinema & Painting:

    Cinema & Painting examines the intersection of two screen-based arts within the wider frame of digital culture. Organised around artists, both contemporary and historical, who make volumetric cinemas and paintings that spill off the wall, the exhibition maps the genealogy and continuing life of a modernist tradition of depth. By addressing the materiality of projective space—that physical zone beyond the picture plane activated by the body of the spectator in conjunction with the beam of the projector or the intricacies of painted forms—Cinema & Painting examines the interconnection of these arts not only in pictorial but in explicitly phenomenological terms.

    Hosted from 11 February to 11 May 2014. Find out more here, and check out the representative images for the gallery below.
    (cover photo: Phil Solomon, film still from American Falls, 2000-12. Digital video, altered archival footage.)


    Colin McCahon, View from the Top of the Cliff, 1971. Acrylic on paper.


    Len Lye, stencil from Musical Poster #1, 1940. Paint on heavy card, occasional pencil inscriptions.


    Jim Davis, film still from Sea Rhythms, 1971. 35mm transferred to digital video.


    Matt Saunders, film still from Century Rolls, 2012. Digital video.


    MORE FROM THE GALLERY

  • The Digital Turn

    The Digital Turn


    David Golumbia and The b2 Review look to digital culture

    ~
    I am pleased and honored to have been asked by the editors of boundary 2 to inaugurate a new section on digital culture for The b2 Review.

    The editors asked me to write a couple of sentences for the print journal to indicate the direction the new section will take, which I’ve included here:

    In the new section of the b2 Review, we’ll be bringing the same level of critical intelligence and insight—and some of the same voices—to the study of digital culture that boundary 2 has long brought to other areas of literary and cultural studies. Our main focus will be on scholarly books about digital technology and culture, but we will also branch out to articles, legal proceedings, videos, social media, digital humanities projects, and other emerging digital forms.

    While some might think it late in the day for boundary 2 to be joining the game of digital cultural criticism, I take the time lag between the moment at which thoroughgoing digitization became an unavoidable reality (sometime during the 1990s) and the moment at which the first of the major literary studies journals dedicates part of itself to digital culture as indicative of a welcome and necessary caution with regard to the breathless enthusiasm of digital utopianism. As humanists our primary intellectual commitment is to the deeply embedded texts, figures, and themes that constitute human culture, and precisely the intensity and thoroughgoing nature of the putative digital revolution must give somebody pause—and if not humanists, who?

    Today, the most overt mark of the digital in humanities scholarship goes by the name Digital Humanities, but it remains notable how little interaction there is between the rest of literary studies and the work that comes under the DH rubric. That lack of interaction goes in both directions: DH scholars rarely cite or engage directly with the work the rest of us do, and the rest of literary studies rarely cites DH work, especially when DH is taken in its “narrow” or most heavily quantitative form. The enterprises seem, at times, to be entirely at odds, and the rhetoric of the digital enthusiasts who populate DH does little to forestall this impression. Indeed, my own membership in the field of DH has long been a vexed question, even though I was one of the first English professors in the country to be hired to a position for which the primary specialization was explicitly indicated as Digital Humanities (at the University of Virginia in 2003), and even though I am a humanist whose primary area is “digital studies.” The inability of scholars “to be” or “not to be” members of a field in which they work is one of the several ways that DH does not resemble other developments in the always-changing world of literary studies.


    Earlier this month, along with my colleague Jennifer Rhee, I organized a symposium called Critical Approaches to Digital Humanities, sponsored by the MATX PhD program at Virginia Commonwealth University, where Prof. Rhee and I teach in the English Department. One of the conference participants, Fiona Barnett of Duke and HASTAC, prepared a Storify version of the Twitter activity at the symposium that provides some sense of the proceedings. While it followed on the heels of, and was continuous with, panels such as the ‘Dark Side of the Digital Humanities’ at the 2013 MLA Annual Convention and several at recent American Studies Association conventions, among others, this was to our knowledge the first standalone DH event that resembled other humanities conferences as they are conducted today. Issues of race, class, gender, sexuality, and ability were central; cultural representation and its relation to (or lack of relation to) identity politics was of primary concern; close reading of texts both likely and unlikely figured prominently; the presenters were diverse along several different axes. This arose not out of deliberate planning so much as organically from the speakers whose work spoke to the questions we wanted to raise.

    I mention the symposium to draw attention to what I think it represents, and what the launching of a digital culture section by boundary 2 also represents: the considered turning of the great ship of humanistic study toward the digital. For too long enthusiasts alone have been able to stake out this territory and claim special and even exclusive insight with regard to the digital, following typical “hacker” or cyberlibertarian assertions about the irrelevance of any work that does not proceed directly out of knowledge of the computer. That such claims could even be taken seriously has, I think, produced a kind of stunned silence on the part of many humanists, because it is both so confrontational and so antithetical to the remit of the literary humanities from comparative philology to the New Criticism to deconstruction, feminism, and queer theory. That the core of the literary humanities as represented by so august an institution as boundary 2 should turn its attention there both validates digital enthusiasts’ sense of the medium’s importance and should provoke them toward a responsibility to the project and history of the humanities that, so far, many of them have treated with a disregard that at times might be characterized as cavalier.

    -David Golumbia

    Browse All Digital Studies Reviews