b2o: boundary 2 online


  • tante — Artificial Saviors


    tante

    Content Warning: The following text references algorithmic systems acting in racist ways towards people of color.

    Artificial intelligence and thinking machines have been key components in the way Western cultures, in particular, think about the future. From naïve positivist perspectives, as illustrated by the Rosie the Robot maid from 1962’s TV show The Jetsons, to ironic reflections on the reality of forced servitude to one’s creator and quasi-infinite lifespans in Douglas Adams’s Hitchhiker’s Guide to the Galaxy’s Marvin the Paranoid Android, as well as the threatening, invisible, disembodied, cruel HAL 9000 in Arthur C. Clarke’s Space Odyssey series and its total negation in Frank Herbert’s Dune books, thinking machines have shaped a lot of our conceptions of society’s future. Unless there is some catastrophic event, the future seemingly will have strong Artificial Intelligences (AI). They will appear either as brutal, efficient, merciless entities of power or as machines of loving grace serving humankind to create a utopia of leisure, self-expression and freedom from the drudgery of labor.

    Those stories have had a fundamental impact on the perception of current technological trends and developments. The digital turn has increasingly made growing parts of our social systems accessible to automation and software agents. Together with a 24/7 onslaught of increasingly optimistic PR messages by startups, the accompanying media coverage has prepared the field for a new kind of secular techno-religion: The Church of AI.

    A Promise Fulfilled?

    For more than half a century, experts in the field have maintained that genuine, human-level artificial intelligence is just around the corner, “about 10 to 20 years away.” Ask today’s experts and spokespeople and that number has stayed mostly unchanged.

    In 2017 AI is the battleground that the current IT giants are fighting over: for years, Google has developed machine learning techniques and has integrated them into its conversational assistant, which people carry around on their smart devices. It’s gotten quite good at answering simple questions or triggering simple tasks: asking “OK Google, how far is it from here to Hamburg?” tells me that, given current traffic, it will take me 1 hour and 43 minutes to get there. Google’s assistant also knows how to use my calendar and email to warn me to leave the house in time for my next appointment or to tell me that a parcel I was expecting has arrived.

    Facebook and Microsoft are experimenting with and propagating intelligent chat bots as the future of computer interfaces. Instead of going to a dedicated web page to order flowers, people will supposedly just access a chat interface of a software service that dispatches their request in the background. But this time, it will be so much more pleasant than the experience everyone is used to from automated phone systems. Press #1 if you believe.

    Old science fiction tropes get dusted off and re-released with a snazzy iPhone app to make them seem relevant again on an almost daily basis.

    Nonetheless, the promise is always the same: given the success that automation of manufacturing and information processing has had in the last decades, AI is considered not only plausible or possible but, in fact, almost a foregone conclusion. In support of this, advocates (such as Google’s Ray Kurzweil) typically cite “Moore’s Law,”[1] an observation about the increasing quantity and quality of transistors, as directly correlated with the growing “intelligence” of digital services or cyber-physical systems like thermostats or “smart” lights.

    Looking at other recent reports, a pattern emerges. Google’s AI lab recently trained a neural network to do lip-reading and found it better than human lip-readers (Chung et al. 2016): where human experts were only able to pick the right word 12.4% of the time, Google’s neural network reached 52.3% when pointed at footage from BBC politics shows.

    Another recent example from Google’s research department shows just how many resources Google invests in machine learning and AI: Google has trained a system of neural networks to translate different human languages (in their example, English, Japanese and Korean) into one another (Schuster, Johnson and Thorat 2016). This is quite the technical feat, given that most translation engines have to be meticulously tweaked to translate between two languages. But Google’s researchers finish their report with a very different proposition:

    The success of the zero-shot translation raises another important question: Is the system learning a common representation in which sentences with the same meaning are represented in similar ways regardless of language — i.e. an “interlingua”? … This means the network must be encoding something about the semantics of the sentence rather than simply memorizing phrase-to-phrase translations. We interpret this as a sign of existence of an interlingua in the network. (Schuster, Johnson and Thorat 2016)

    Google’s researchers interpret these capabilities as evidence that the neural network is creating a common super-language, one language to finally express all others.

    These current examples of success stories and narratives illustrate a fundamental shift in the way scientists and developers think about AI, a shift that perfectly resonates with the idea that AI has spiritual and transcendent properties. AI developments used to focus on building structured models of the world to enable reasoning. Whether researchers used logic or sets or newer modeling frameworks like RDF,[2] the basic idea was to construct “Intelligence” on top of a structure of truths and statements about the world. Modeled, not by accident, on basic logic, a lot of it looked like the first sessions of a traditional Logic 101 lecture: All humans die. Aristotle is a human. Therefore, Aristotle will die.
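
    To make the flavor of this approach concrete, here is a minimal sketch, in Python, of such a “structure of truths”: explicit facts, an explicit rule, and a mechanical procedure that derives conclusions from them. The predicate names and the toy inference procedure are invented for this illustration and do not follow any particular system’s syntax.

    ```python
    # Illustrative sketch only: explicit facts, one explicit rule, and
    # mechanical inference over them. Real systems (Prolog, RDF reasoners)
    # are far more elaborate; all names here are invented for this example.

    facts = {("human", "Aristotle")}
    rules = [("human", "mortal")]  # for every X: human(X) implies mortal(X)

    def infer(facts, rules):
        """Apply the rules to the facts until nothing new can be derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in rules:
                for predicate, subject in list(derived):
                    if predicate == premise and (conclusion, subject) not in derived:
                        derived.add((conclusion, subject))
                        changed = True
        return derived

    print(infer(facts, rules))
    # {('human', 'Aristotle'), ('mortal', 'Aristotle')} -- "Therefore, Aristotle will die."
    ```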

    But all these projects failed. Explicitly modeling the structures of the world hit a wall of inconsistencies rather early when natural language and human beings got involved. The world didn’t seem to follow the simple hierarchic structures some computer scientists hoped it would. And even when it came to very structured, abstract areas of life, the approach never took off. Projects like expressing the Canadian income tax in a Prolog[3] model (Sherman 1987) never got past the abstract planning stage. RDF and the idea of the “semantic web,” the web of structured data allowing software agents to gather information and reason based on it, are still somewhat relevant in academic circles, but have failed to achieve wide adoption in real-world use cases.

    And then came neural networks.

    Neural networks are the structure behind most of the current AI projects having any impact, whether it’s translation of human language, self-driving cars or recognizing objects and people in pictures and video. Neural networks work in a fundamentally different way from the traditional bottom-up approaches that defined much of the AI research in the last decades of the 20th century. Based on a simplified mathematical model of human neurons, networks of these artificial neurons can be “trained” to react in a certain way.

    Say you need a neural network to automatically detect cats in pictures. First, you need an input layer with enough neurons to assign one to every pixel of the pictures you want to feed it. You add an output layer with two neurons, one signaling “cat” and one signaling “not a cat.” Now you add a few internal layers of neurons and connect them to each other. Input gets fed into the network through the input layer. The internal layers do their thing and make the neurons in the output layer “fire.” But the necessary knowledge is not yet ingrained into the network—it needs to be trained.
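
    As a rough illustration of the architecture just described, the sketch below builds such a cat/not-a-cat network in PyTorch. The image size (32×32 pixels), the layer widths and the library choice are assumptions made for the example, not details from the essay.

    ```python
    # A minimal sketch of the described architecture (assumed sizes, for illustration).
    import torch
    import torch.nn as nn

    n_pixels = 32 * 32  # one input neuron per pixel of an assumed 32x32 grayscale picture

    cat_detector = nn.Sequential(
        nn.Linear(n_pixels, 64),  # first internal ("hidden") layer
        nn.ReLU(),
        nn.Linear(64, 32),        # second internal layer
        nn.ReLU(),
        nn.Linear(32, 2),         # output layer: neuron 0 = "not a cat", neuron 1 = "cat"
    )

    # Feed one (random) "picture" through the still-untrained network:
    picture = torch.rand(1, n_pixels)   # pixel values flattened into a vector
    scores = cat_detector(picture)      # raw scores for the two output neurons
    print(scores.argmax(dim=1))         # which output neuron "fired" the strongest
    ```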

    There are different ways of training these networks, but they all come down to letting the network process a large amount of training data with known properties. For our example, a substantial set of pictures with and without cats would be necessary. When processing these pictures, the network gets positive feedback whenever the right neuron (the one signaling the detection of a cat) fires, and the connections that lead to this result are strengthened. Where the untrained network starts with roughly a 50/50 chance of being right, its accuracy quickly improves and can reach very good results, provided the training data is good enough. To evaluate the quality of the network, it is then tested against a separate set of pictures with and without cats.
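
    Continuing the PyTorch sketch from above, the following shows what such a training and evaluation loop might look like. The labeled data here is random noise and the hyperparameters (learning rate, number of passes) are arbitrary assumptions; the point is only to illustrate the feedback-and-weight-adjustment cycle described in the text.

    ```python
    # Illustrative training loop (assumed data and hyperparameters).
    import torch
    import torch.nn as nn

    n_pixels = 32 * 32
    net = nn.Sequential(nn.Linear(n_pixels, 64), nn.ReLU(),
                        nn.Linear(64, 32), nn.ReLU(),
                        nn.Linear(32, 2))

    # Stand-in training set: 200 random "pictures" with random cat / not-a-cat labels.
    train_pictures = torch.rand(200, n_pixels)
    train_labels = torch.randint(0, 2, (200,))      # 1 = cat, 0 = not a cat

    loss_fn = nn.CrossEntropyLoss()                 # feedback: how wrong was the output layer?
    optimizer = torch.optim.SGD(net.parameters(), lr=0.1)

    for epoch in range(10):                         # repeatedly process the training data
        optimizer.zero_grad()
        scores = net(train_pictures)
        loss = loss_fn(scores, train_labels)        # penalize the wrong output neuron firing
        loss.backward()                             # work out how each connection contributed
        optimizer.step()                            # strengthen/weaken the weighted connections

    # Evaluation on a separate set of pictures the network has never seen.
    test_pictures = torch.rand(50, n_pixels)
    test_labels = torch.randint(0, 2, (50,))
    accuracy = (net(test_pictures).argmax(dim=1) == test_labels).float().mean()
    print(f"test accuracy: {accuracy.item():.2f}")  # around 0.5 here, since the data is random noise
    ```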

    Neural networks are really good at learning to detect structures (objects in images, sound patterns, connections in data streams), but there’s a catch: even when a neural network is really good at its task, it’s largely impossible for humans to say why. Neural networks are just sets of neurons and their weighted connections. But what does a weight of 1.65 say about a connection? What are its semantics? What do the internal layers and neurons actually mean? Nobody knows.

    Many currently available services based on these technologies can achieve impressive results. Cars are able to drive as well as, if not better and more safely than, human drivers (given Californian conditions of light, little rain or snow, and generous road sizes), automated translations can almost instantly give people at least an idea of what the rest of the world is talking about, and Google’s photo service allows me to search for “mountain” and shows me pictures of mountains in my collection. Those services surely feel intelligent. But are they really?

    Despite the optimistic reports about yet another big step towards “true” AI (like in the movies!) that tech media keeps churning out like a machine, recent months alone have made the trouble with the current AI mainstream quite obvious.

    In June 2015, Google’s Photos service was involved in a scandal: its AI was tagging faces of people of color with the term “gorilla” (Bergen 2015). Google quickly pointed out how difficult image recognition was and “fixed” the issue by blocking its AI from applying that specific tag, promising a “long term solution.” Even staying within the image-detection domain, there have been numerous examples of algorithms acting in ways that don’t imply too much intelligence: cameras trained on Western, white faces detect people of Asian descent as “blinking” (Rose 2010); algorithms employed as impartial “beauty judges” seemingly don’t like dark skin (Levin 2016). The list goes on and on.

    While there seems to be a big consensus among thought leaders, AI companies, and tech visionaries that AI is inevitable and imminent, the definition of “intelligence” seems to be less than obvious. Is an entity intelligent if it can’t explain its reasoning?

    John Searle anticipated this argument in his “Chinese Room” thought experiment (Searle 1980): Searle proposes a computer program that can act convincingly as if it understands Chinese by taking in Chinese input and transforming it in some algorithmic way to output a response in Chinese characters. Does that machine really understand Chinese? Or is it just an automaton simulating an understanding of Chinese? Searle continues the experiment by assuming that the rules used by the machine get translated into readable English for a person to follow. A person locked in a room with these rules, pencil and paper could respond to every Chinese text as convincingly as the machine could. But few would propose that this person now “understands” Chinese in the sense that a human being who knows Chinese does.

    Current trends in the reception of AI seem to disagree: if a machine can do something that used to be only possible for human cognition, it surely must be intelligent. This assumption of Intelligence serves as foundation for a theory of human salvation: if machines are already a little intelligent (putting them into the same category as humans) and machines only get faster and more efficient, isn’t it reasonable to assume that they will solve the issues that humans have struggled with for ages?

    But how can a neural network save us if it can’t even distinguish monkeys from humans?

    Thy Kingdom Come 2.0

    The story of AI is a technology narrative only at first glance. While it does depend on technology and technological progress, faster processors, and cleverer software libraries (ironically written and designed by human beings), it is really a story about automation, biases and implicit structures of power.

    Technologists, who have traditionally been very focused on the scientific method, on verifiable processes and repeatable experiments, have recently opened themselves to more transcendent arguments: the proposition of a neural network, of an AI creating a generic ideal language to express different human languages as one structure (Schuster, Johnson and Thorat 2016) is a first, very visible step of “upgrading” an automated process into something more than meets the eye. The multi-language translation network is not treated as an interesting statistical phenomenon that needs reflection by experts in the analyzed languages and the cultures using them, with regard to their structural and social similarities and the ways they influence(d) one another. Rather, it is presented as a miraculous device taking steps towards an ideal language that would have made Ludwig Wittgenstein blush.[4]

    But language and translation isn’t the only area in which these automated systems are being tested. Artificial intelligences are being trained to predict people’s future economic performance, their shopping profile, and their health. Other machines are deployed to predict crime hotspots, to distribute resources and to optimize production of goods.

    While predicting crimes still makes most people uncomfortable, the idea that machines are the supposedly objective arbiters of goods and services is met with far less skepticism. But “goods and services” can include a great deal more than ordinary commercial transactions. If the machine gives one candidate a 33% chance of survival and the other one 45%, who should you give the heart transplant to?

    "Through AI, Human Might Literally Create God" (image source: video by Big Think (IBM) and pho.to)
    “Through AI, Human Might Literally Create God” (image source: video by Big Think (IBM) and pho.to)

    Computers cannot lie; they just act according to their programming. They don’t discriminate against people based on their gender, race or background. At least that’s the popular opinion that very happily assigns computers and software systems the role of the objective arbiter of truth and fairness. People are biased, imperfect, and error-prone, so why shouldn’t we find the best processes and decision algorithms and put them into machines to dispense fair and optimal rulings efficiently and correctly? Isn’t that the utopian ideal of a fair and just society in which machines automate not just manual labor but also the decisions that create conflict and attract corruption and favoritism?

    The idea of computers as machines of truth is being challenged more and more each day, especially given new AI trends. In traditional algorithmic systems, implicit biases were hard-coded into the software. They could be analyzed, patched. Closely mirroring the scientific method, this ideal world view saw algorithms getting better, becoming fairer with every iteration. But how can implicit biases or discrimination be addressed when the internal structure of a system cannot be effectively analyzed or explained? When AI systems make predictions based on training data, who can check whether the original data wasn’t discriminatory or whether it’s still suitable for use today?

    One original promise of computers—amongst others—had to do with accountability: code could be audited to legitimize its application within sociotechnical systems of power. But current AI trends have replaced this fundamental condition for the application of algorithms with belief.

    The belief is that simple simulacra of human neurons will—given enough processing power and learning data—evolve to be Superman. We can characterize this approach as a belief system because it has immunized itself against criticism: when an AI system fails horribly, creates or amplifies existing social discrimination or violence, the dogma of AI proponents tends to be that it just needs more training, needs to be fed more random data to create better internal structures, better “truths.” Faced with a world of inconsistencies and chaos, the hope is that some neural network, given enough time and data, will make sense of it, even though we might not be able to truly understand it.

    Religion is a complex topic without one simple definition that could be applied to things to decide whether they are, in fact, religions. Religions are complex social systems of behaviors, practices and social organization. Following Wittgenstein’s ideas about language games, it might not even be possible to define religion in a way that is both complete and exclusive. But there are patterns that many popular religions share.

    Many do, for example, share the belief in some form of transcendental power such as a god or a pantheon or even more abstract conceptual entities. Religions also tend to provide a path towards achieving greater, previously unknowable truths, truths about the meaning of life, of suffering, of Good itself. Being social structures, religions often have some form of hierarchy or a system to generate and determine status and power within the group. This can be a well-defined clergy or less formal roles based on enlightenment, wisdom, or charity.

    While this is nowhere close to a comprehensive list of the attributes of religions, these key aspects can help analyze the religiousness of the AI narrative.

    Singulatarianism

    Here I want to focus on one very specific, influential sub-group within the whole AI movement. And no other group within tech displays religious structure more explicitly than the singulatarians.

    Singulatarians believe that the creation of adaptable AI systems will spark a rapid and ever increasing growth in these systems’ capabilities. This “runaway reaction” of cycles of self-improvement will lead to one or more artificial super-intelligences surpassing all human mental and cognitive capabilities. This point is called “the Singularity,” which will be—according to singulatarians—followed by a phase of extremely rapid technological developments whose speed and structure will be largely incomprehensible to human consciousness. At this point the AI(s) will (and according to most singulatarians shall) take control of most aspects of society. While the possibility of the Super-AI taking over by force is always lingering in the back of singulatarians’ minds, the dominant position is that humans will and should hand over power to the AI for the good of the people, for the good of society.

    Here we see singulatarianism taking the idea that computers and software are machines of truth to its extreme. Whether it’s the distribution of resources and wealth, or the structure of the law and regulation, all complex questions are reduced to a system of equations that an AI will solve perfectly, or at least so close to perfectly that human beings might not even understand said perfection.

    According to the “gospel” as taught by the many proponents of the Singularity, the explosive growth in technology will provide machines that people can “upload” their consciousness to, thus providing human beings with durable, replaceable bodies. The body, and with it death itself, is supposedly being transcended, creating everlasting life in the best of all possible worlds watched over by machines of loving grace, at least in theory.

    While the singularity has existed as an idea (if not the name) since at least the 1950s, only recently did singulatarians gain “working prototypes.” Trained AI systems are able to achieve impressive cognitive feats even today and the promise of continuous improvement that’s—seemingly—legitimized by references to Moore’s Law makes this magical future almost inevitable.

    It’s very obvious how the Singularity can be, no, must be characterized as a religious idea: it presents an ersatz-god in the form of a super-AI that is beyond all human understanding and reasoning. Quoting Ray Kurzweil from his The Age of Spiritual Machines: “Once a computer achieves human intelligence it will necessarily roar past it” (Kurzweil 1999). Kurzweil insists that surpassing human capabilities is a necessity. Computers are the newborn gods of silicon and code that—once awakened—will leave us, their makers, in the dust. It’s not a question of human agency but a law of the universe, a universal truth. (Not) coincidentally, Kurzweil’s own choice of words in this book is deeply religious, starting with its title.

    With humans therefore unable to challenge an AI’s decisions, human beings’ goal is to work within the world as defined and controlled by the super-AI. The path to enlightenment lies in accepting the super-AI and in helping along every form of scientific progress, in order to finally achieve everlasting life through digital uploads of consciousness onto machines. Again quoting Kurzweil: “The ethical debates are like stones in a stream. The water runs around them. You haven’t seen any biological technologies held up for one week by any of these debates” (Kurzweil 2003). In Kurzweil’s perception, ethical debates are fundamentally pointless: the universe, with technology as its god, necessarily moves past them, regardless of what the results of such debates might ever be. Technology transcends every human action, every decision, every wish. Thy will be done.

    Because the intentions and reasoning of the super-AI being are opaque to human understanding, society will need people to explain, rationalize, and structure the AI’s plans for the people. The high-priests of the super-AI (such as Ray Kurzweil) are already preparing their churches and sermons.

    Not every proponent of AI goes as far as the singulatarians do. But certain motifs keep appearing even in supposedly objective and scientific articles about AI, the artificial control system for (parts of) human society probably being the most popular: AIs are supposed to distribute power in smart grids, for example (Qudaih and Mitani 2011), or decide fully automatically where police should focus their attention (Perry et al 2013). The second example (usually referred to as “predictive policing”) probably illustrates the problem best: all the training data used to build the models that are supposed to help police become more “efficient” is soaked in structural racism and violence. A police force trained on data that always labels people of color as suspect will keep on seeing innocent people of color as suspect.

    While there is value in automating certain dangerous or error-prone processes, like driving cars in order to protect human life or the environment, extending that strategy to society as a whole is a deeply problematic approach.

    The leap of faith that is required to truly believe in not only the potential but also the reality of these super-powered AIs doesn’t only leave behind the idea of human exceptionalism (which in itself might not even be too bad), but also the idea of politics as a social system of communication. When decisions are made automatically, without any way for people to understand the reasoning, to check the way power acts and potentially discriminates, there is no longer any political debate apart from whether to fall in line or to abolish the system altogether. The idea that politics is an equation to solve, that social problems have an optimal or maybe even a correct solution, is not only a naïve technologist’s dream but, in fact, a dangerous and toxic idea that makes the struggle of marginalized groups, and any political program that’s not focused on optimizing[5] the status quo, unthinkable.

    Singulatarianism is the most extreme form, but much public discourse about AI is based on quasi-religious dogmas of the boundless realizable potential of AIs and life. These dogmas understand society as an engineering problem looking for an optimal solution.

    Daemons in the Digital Ether

    Software services on Unix systems are traditionally called “daemons,” a word from mythology that refers to god-like forces of nature. It’s an old throwaway programmer joke that, looking at today, seems like precognition of sorts.

    Even if we accept that AI has religious properties, that it serves as a secular ersatz-religion for the STEM-oriented crowd, why should that be problematic?

    Marc Andreessen, venture capitalist and one of the louder proponents of the new religion, claimed in 2011 that “software is eating the world” (Andreessen 2011). And while statements about the present and future from VC leaders should always be taken with a grain of salt, given that they are probably pitching their latest investment, in this case Andreessen was right: software and automation are slowly swallowing ever more aspects of everyday life. The digitalization of even mundane actions and structures, the deployment of “smart” devices in private homes and the public sphere, the reality of social life happening on technological platforms all help to give algorithmic systems more and more access to people’s lives and realities. Software is eating the world, and what it gnaws on, it standardizes, harmonizes, and structures in ways that ease further software integration.

    The world today is deeply cyber-physical. The separation of the digital and the “real” worlds that sociologist Nathan Jurgenson fittingly called “digital dualism” (Jurgenson 2011) can these days be called an obvious fallacy. Virtual software systems, hosted “in the cloud,” define whether we will get health care, how much we’ll have to pay for a loan and in certain cases even whether we may cross a border or not. These processes of power, which traditionally “ran on” social systems, on government organs, organizations or maybe just individuals, are now moving into software agents, removing the risky, biased human factor as well as the checks and balances.

    The issue at hand is not the forming of a new tech-based religion itself. The problem emerges from the specific social group promoting it, from that group’s ignorance of what it is doing, and from the way the group and its paradigms and ideals are perceived in the world. The problem is not the new religion but the way its supporters propose it as science.

    Science, technology, engineering, math—abbreviated as STEM—currently take center stage when it comes to education but also when it comes to consulting the public on important matters. Scientists, technologists, engineers and mathematicians are not only building their own models in the lab but are creating and structuring the narratives of what is debatable. Science as a tool to separate truth from falsehood is always deeply political, even more so in a democracy. By defining the world and what is or is not, science does not just structure a society’s model of the world but also elevates its experts to high and esteemed social positions.

    With the digital turn transforming and changing so many aspects of everyday life, the creators and designers of digital tools are—in tandem with a society hungry for explanations of the ongoing economic, technological and social changes—forming their own privileged caste, a caste whose original defining characteristic was its focus on the scientific method.

    When AI morphed from idea or experiment to belief system, hackers, programmers, “data scientists,”[6] and software architects became the high priests of a religious movement that the public never identified and parsed as such. The public’s mental checks were circumvented by this hidden switch of categories. In Western democracies the public is trained to listen to scientists and experts in order to separate objective truth from opinion. Scientists are perceived as impartial, obligated only to the truth and the scientific method. Technologists and engineers inherited that perceived neutrality and objectivity, giving their public words a direct line into the public’s collective consciousness.

    On the other hand, the public does have mental guards in place against “opinion” and “belief,” guards that are taught to each and every child in school from a very young age. Those things are not irrelevant in the public discourse—far from it—but the context they are evaluated in is different, more critical. This protection, this safeguard, is circumvented when supposedly objective technologists propose their personal tech-religion as fact.

    Automation has always both solved and created problems: products became easier, safer, quicker or mainly cheaper to produce, but people lost their jobs and often the environment suffered. In order to make a decision, in order to evaluate the good and bad aspects of automation, society always relied on experts analyzing these systems.

    Current AI trends turn automation into a religion, slowly transforming at least semi-transparent systems into opaque systems whose functionality and correctness can neither be verified nor explained. By calling these systems “intelligent,” a certain level of agency is implied, a kind of intentionality and personalization.[7] Automated systems whose neutrality and fairness are constantly implied and reaffirmed through ideas of godlike machines governing the world with trans-human intelligence are being blessed with agency and given power, removing the actual entities of power from the equation.

    But these systems have no agency. Meticulously trained in millions of iterations on carefully chosen and massaged data sets, these “intelligences” just automate the application of the biases and values of the organizations developing and deploying them, as Cathy O’Neil, among others, illustrates in her book Weapons of Math Destruction:

    Here we see that models, despite their reputation for impartiality, reflect goals and ideology. When I removed the possibility of eating Pop-Tarts at every meal, I was imposing my ideology on the meals model. It’s something we do without a second thought. Our own values and desires influence our choices, from the data we choose to collect to the questions we ask. Models are opinions embedded in mathematics. (O’Neil 2016, 21)

    For many years, Facebook has refused all responsibility for the content on their platform and the way it is presented; the same goes for Google and its search products. Whenever problems emerge, it is “the algorithm” that “just learns from what people want.” AI systems serve as useful puppets doing their masters’ bidding without even requiring visible wires. Automated systems predicting areas of crime claim not to be racist despite targeting black people twice as often as white ones (Pulliam-Moore 2016). The technologist Maciej Cegłowski probably said it best: “Machine learning is like money laundering for bias.”

    Amen

    The proponents of AI aren’t just selling their products and services. They are selling a society where they are in power, where they provide the exegesis for the gospel of what “the algorithm” wants: Kevin Kelly, co-founder of Wired magazine, leading technologist and evangelical Christian, even called his book on this issue What Technology Wants (Kelly 2011), imbuing technology itself with agency and a will. And all that without taking responsibility for it, because progress and—in the end—the singularity are inevitable.

    But this development is not a conspiracy or an evil plan. It grew from a society desperately demanding answers and from scientists and technologists eagerly providing them; from deeply rooted cultural beliefs in the general positivity of technological progress; and from trust in the truth-creating powers of the artifacts the STEM sector produces.

    The answer to the issue of an increasingly powerful and influential social group hardcoding its biases into the software actually running our societies cannot be to turn back time and de-digitalize society. Digital tools and algorithmic systems can serve a society to create fairer, more transparent processes that are, in fact, not less but more accountable.

    But these developments will require a reevaluation of the positioning, status and reception of the tech and science sectors. The answer will require the development of social and political tools to observe, analyze and control the power wielded by the creators of the essential technical structures that our societies rely on.

    "Through AI, Human Might Literally Create God" (image source: video by Big Think (IBM) and pho.to)
    “Through AI, Human Might Literally Create God” (image source: video by Big Think (IBM) and pho.to)

    Current AI systems can be useful for very specific tasks, even in matters of governance. The key is to analyze, reflect on, and constantly evaluate the data used to train these systems. To integrate the perspectives of marginalized people, of people potentially affected negatively, even in the first steps of the process of training these systems. And to stop offloading responsibility for the actions of automated systems onto the systems themselves, and instead to hold accountable the entities deploying them, the entities giving these systems actual power.

    Amen.

    _____

    tante (tante@tante.cc) is a political computer scientist living in Germany. His work focuses on sociotechnical systems and the technological and economic narratives shaping them. He has been published in WIRED, Spiegel Online, and VICE/Motherboard among others. He is a member of the other wise net work, otherwisenetwork.com.


    _____

    Notes

    [1] Moore’s Law, popularized by Intel co-founder Gordon Moore, describes the observation that the number of transistors per square inch doubles roughly every two years (or every 18 months, depending on which version of the law is cited).

    [2] https://www.w3.org/RDF/.

    [3] Prolog is a logic programming language that expresses problems as logical clauses and solves them by resolution.

    [4] In the Philosophical Investigations (1953) Ludwig Wittgenstein argued against the notion that language corresponds to reality in a simple way. He used the concept of “language games” to illustrate that the meanings of language overlap and are defined by the individual use of language, rejecting the idea of an ideal, objective language.

    [5] Optimization always operates in relationship to a specific goal codified in the metric the optimization system uses to compare different states and outcomes with one another. “Objective” or “general” optimizations of social systems are therefore by definition impossible.

    [6] We used to call them “statisticians.”

    [7] The creation of intelligence, of life itself, is a feat traditionally reserved for the gods of old. This is another link to religious world views as well as a rejection of traditional religions, which is less than surprising in a subculture that makes up much of the fan base of current popular atheists such as Richard Dawkins or Sam Harris. That the vocal atheist Sam Harris is himself an open supporter of the new Singularity religion is just the cherry on top of this inconsistency sundae.

    _____

    Works Cited

  • How We Think About Technology (Without Thinking About Politics)

    a review of N. Katherine Hayles, How We Think: Digital Media and Contemporary Technogenesis (Chicago, 2012)
    by R. Joshua Scannell

    ~

    In How We Think, N. Katherine Hayles addresses a number of increasingly urgent problems facing both the humanities in general and scholars of digital culture in particular. In keeping with the research interests she has explored at least since 2002’s Writing Machines (MIT Press), Hayles examines the intersection of digital technologies and humanities practice to argue that contemporary transformations in the orientation of the University (and elsewhere) are attributable to shifts that ubiquitous digital culture has engendered in embodied cognition. She calls this process of mutual evolution between the computer and the human technogenesis (a term that is most widely associated with the work of Bernard Stiegler, although Hayles’s theories often aim in a different direction from Stiegler’s). Hayles argues that technogenesis is the basis for the reorientation of the academy, including students, away from established humanistic practices like close reading. Put another way, not only have we become posthuman (as Hayles discusses in her landmark 1999 University of Chicago Press book, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics), but our brains have begun to evolve to think with computers specifically and digital media generally. Rather than a rearguard eulogy for the humanities that was, Hayles advocates for an opening of the humanities to digital dromology; she sees the Digital Humanities as a particularly fertile ground from which to reimagine the humanities generally.

    Hayles is an exceptional scholar, and while her theory of technogenesis is not particularly novel, she articulates it with a clarity and elegance that are welcome and useful in a field that is often cluttered with good ideas, unintelligibly argued. Her close engagement with work across a range of disciplines – from Hegelian philosophy of mind (Catherine Malabou) to theories of semiosis and new media (Lev Manovich) to experimental literary production – grounds an argument about the necessity of transmedial engagement in an effective praxis. Moreover, she ably shifts generic gears over the course of a relatively short manuscript, moving from quasi-ethnographic engagement with University administrators, to media archaeology à la Friedrich Kittler, to contemporary literary theory, with grace. Her critique of the humanities that is, therefore, doubles as a praxis: she is actually producing the discipline-flouting work that she calls on her colleagues to pursue.

    The debate about the death and/or future of the humanities is weather-worn, but Hayles’s theory of technogenesis as a platform for engaging in it is a welcome change. For Hayles, the technogenetic argument centers on temporality, and the multiple temporalities embedded in computer processing and human experience. She envisions this relation as cybernetic, in which computer and human are integrated as a system through the feedback loops of their coemergent temporalities. So, computers speed up human responses, which lag behind innovations, which prompt beta test cycles at quicker rates, which demand that humans behave affectively, nonconsciously. The recursive relationship between human duration and machine temporality effectively mutates both. Humanities professors might complain that their students cannot read “closely” like they used to, but for Hayles this is a failure of those disciplines to imagine methods in step with technological changes. Instead of digital media making us “dumber” by reducing our attention spans, as Nicholas Carr argues, Hayles claims that the movement towards what she calls “hyper reading” is an ontological and biological fact of embodied cognition in the age of digital media. If “how we think” were posed as a question, the answer would be: bodily, quickly, cursorily, affectively, non-consciously.

    Hayles argues that this doesn’t imply an eliminative teleology of human capacity, but rather an opportunity to think through novel, expansive interventions into this cyborg loop. We may be thinking (and feeling, and experiencing) differently than we used to, but this remains a fact of human existence. Digital media has shifted the ontics of our technogenetic reality, but it has not fundamentally altered its ontology. Morphological biology, in fact, entails ontological stability. To be human, and to think like one, is to be with machines, and to think with them. The kids, in other words, are all right.

    This sort of quasi-Derridean or Stieglerian Hegelianism is obviously not uncommon in media theory. As Hayles deploys it, this disposition provides a powerful framework for thinking through the relationship of humans and machines without ontological reductivism on either end. Moreover, she engages this theory in a resolutely material fashion, evading the enervating tendency of many theorists in the humanities to reduce actually existing material processes to metaphor and semiosis. Her engagement with Malabou’s work on brain plasticity is particularly useful here. Malabou has argued that the choice facing the intellectual in the age of contemporary capitalism is between plasticity and self-fashioning. Plasticity is a quintessential demand of contemporary capitalism, whereas self-fashioning opens up radical possibilities for intervention. The distinction between these two potentialities, however, is unclear – and therefore demands an ideological commitment to the latter. Hayles is right to point out that this dialectic insufficiently accounts for the myriad ways in which we are engaged with media, and are in fact produced, bodily, by it.

    But while Hayles’ critique is compelling, the responses she posits may be less so. Against what she sees as Malabou’s snide rejection of the potential of media, she argues

    It is precisely because contemporary technogenesis posits a strong connection between ongoing dynamic adaptation of technics and humans that multiple points of intervention open up. These include making new media…adapting present media to subversive ends…using digital media to reenvision academic practices, environments and strategies…and crafting reflexive representations of media self fashionings…that call attention to their own status as media, in the process raising our awareness of both the possibilities and dangers of such self-fashioning. (83)

    With the exception of the ambiguous labor done by the word “subversive,” this reads like a catalog of demands made by administrators seeking to offload ever-greater numbers of students into MOOCs. This is unfortunately indicative of what is, throughout the book, a basic failure to engage with the political economics of “digital media and contemporary technogenesis.” Not every book must explicitly be political, and there is little more ponderous than the obligatory, token consideration of “the political” that so many media scholars feel compelled to make. And yet, this is a text that claims to explain “how” “we” “think” under post-industrial, cognitive capitalism, and so the lack of this engagement cannot help but show.

    Universities across the country are collapsing due to lack of funding, students are practically reduced to debt bondage to cope with the costs of a desperately near-compulsory higher education that fails to deliver economic promises, “disruptive” deployment of digital media has conjured teratic corporate behemoths that all presume to “make the world a better place” on the backs of extraordinarily exploited workforces. There is no way for an account of the relationship between the human and the digital in this capitalist context not to be political. Given the general failure of the book to take these issues seriously, it is unsurprising that two of Hayles’ central suggestions for addressing the crisis in the humanities are 1) to use voluntary, hobbyist labor to do the intensive research that will serve as the data pool for digital humanities scholars and 2) to increasingly develop University partnerships with major digital conglomerates like Google.

    This reads like a cost-cutting administrator’s fever dream because, in the chapter in which Hayles promulgates novel (one might say “disruptive”) ideas for how best to move the humanities forward, she only speaks to administrators. There is no consideration of labor in this call for the reformation of the humanities. Given the enormous amount of writing that has been done on affective capitalism (Clough 2008), digital labor (Scholz 2012), emotional labor (Van Cleaf 2015), and so many other iterations of exploitation under digital capitalism, it boggles the mind a bit to see an embrace of the Mechanical Turk as a model for the future university.

    While it may be true that humanities education is in crisis – that it lacks funding, that its methods don’t connect with students, that it increasingly must justify its existence on economic grounds – it is unclear that any of these aspects of the crisis are attributable to a lack of engagement with the potentials of digital media, or the recognition that humans are evolving with our computers. All of these crises are just as plausibly attributable to what, among many others, Chandra Mohanty identified ten years ago as the emergence of the corporate university, and the concomitant transformation of the mission of the university from one of fostering democratic discourse to one of maximizing capital (Mohanty 2003). In other words, we might as easily attribute the crisis to the tightening command that contemporary capitalist institutions have over the logic of the university.

    Humanities departments are underfunded precisely because they cannot – almost by definition – justify their existence on monetary grounds. When students are not only acculturated, but are compelled by financial realities and debt, to understand the university as a credentialing institution capable of guaranteeing certain baseline waged occupations – then it is no surprise that they are uninterested in “close reading” of texts. Or, rather, it might be true that students’ “hyperreading” is a consequence of their cognitive evolution with machines. But it is also just as plausibly a consequence of the fact that students often are working full time jobs while taking on full time (or more) course loads. They do not have the time or inclination to read long, difficult texts closely. They do not have the time or inclination because of the consolidating paradigm around what labor, and particularly their labor, is worth. Why pay for a researcher when you can get a hobbyist to do it for free? Why pay for a humanities line when Google and Wikipedia can deliver everything an institution might need to know?

    In a political economy in which Amazon’s reduction of human employees to algorithmically-managed meat wagons is increasingly diagrammatic and “innovative” in industries from service to criminal justice to education, the proposals Hayles is making to ensure the future of the university seem more fifth columnary than emancipatory.

    This stance also evacuates much-needed context from what are otherwise thoroughly interesting, well-crafted arguments. This is particularly true of How We Think’s engagement with Lev Manovich’s claims regarding narrative and database. Speaking reductively, in The Language of New Media (MIT Press, 2001), Manovich argued that there are two major communicative forms: narrative and database. Narrative, in his telling, is more or less linear, and dependent on human agency to be sensible. Novels and films, despite many modernist efforts to subvert this, tend toward narrative. The database, as opposed to the narrative, arranges information according to patterns, and does not depend on a diachronic point-to-point communicative flow to be intelligible. Rather, the database exists in multiple temporalities, with the accumulation of data for rhizomatic recall of seemingly unrelated information producing improbable patterns of knowledge production. Historically, he argues, narrative has dominated. But with the increasing digitization of cultural output, the database will more and more replace narrative.

    Manovich’s dichotomy of media has been both influential and roundly criticized (not least by Manovich himself in Software Takes Command, Bloomsbury 2013). Hayles convincingly takes it to task for being reductive and instituting a teleology of cultural forms that isn’t borne out by cultural practice. Narrative, obviously, hasn’t gone anywhere. Hayles extends this critique by considering the distinctive ways space and time are mobilized by database and narrative formations. Databases, she argues, depend on interoperability between different software platforms that need to access the stored information. In the case of geographical information services and global positioning services, this interoperability depends on some sort of universal standard against which all information can be measured. Thus, Cartesian space and time are inevitably inserted into database logics, depriving them of the capacity for liveliness. That is to say that the need to standardize the units that measure space and time in machine-readable databases imposes a conceptual grid on the world that is creatively limiting. Narrative, on the other hand, does not depend on interoperability, and therefore does not have an absolute referent against which it must make itself intelligible. Given this, it is capable of complex and variegated temporalities not available to databases. Databases, she concludes, can only operate within spatial parameters, while narrative can represent time in different, more creative ways.

    As an expansion and corrective to Manovich, this argument is compelling. Displacing his teleology and infusing it with a critique of the spatio-temporal work of database technologies and their organization of cultural knowledge is crucial. Hayles bases her claim on a detailed and fascinating comparison between the coding requirements of relational databanks and object-oriented databanks. But, somewhat surprisingly, she takes these different programming language models and metonymizes them as social realities. Temporality in the construction of objects transmutes into temporality as a philosophical category. It’s unclear how this leap holds without an attendant sociopolitical critique. But it is impossible to talk about the cultural logic of computation without talking about the social context in which this computation emerges. In other words, it is absolutely true that the “spatializing” techniques of coders (like clustering) render data points as spatial within the context of the data bank. But it is not an immediately logical leap to then claim that therefore databases as a cultural form are spatial and not temporal.

    Further, in the context of contemporary data science, Hayles’s claims about interoperability are at least somewhat puzzling. Interoperability and standardized referents might be a theoretical necessity for databases to be useful, but the ever-inflating markets around “big data,” data analytics, insights, overcoming data siloing, edge computing, etc., demonstrate quite categorically that interoperability-in-general is not only non-existent, but is productively non-existent. That is to say, there are enormous industries that have developed precisely around efforts to synthesize information generated and stored across non-interoperable datasets. Moreover, data analytics companies provide insights almost entirely based on their capacity to track improbable data patterns and resonances across unlikely temporalities.

    Far from a Cartesian world of absolute space and time, contemporary data science is a quite posthuman enterprise in committing machine learning to stretch, bend and strobe space and time in order to generate the possibility of bankable information. This is both theoretically true in the sense of setting algorithms to work sorting, sifting and analyzing truly incomprehensible amounts of data and materially true in the sense of the massive amount of capital and labor that is invested in building, powering, cooling, staffing and securing data centers. Moreover, the amount of data “in the cloud” has become so massive that analytics companies have quite literally reterritorialized information – particularly firms specializing in high-frequency trading, which practice “co-location,” placing data centers geographically closer to the sites from which they will be accessed in order to maximize processing speed.

    Data science functions much like financial derivatives do (Martin 2015). Value in the present is hedged against the probable future spatiotemporal organization of software and material infrastructures capable of rendering a possibly profitable bundling of information in the immediate future. That may not be narrative, but it is certainly temporal. It is a temporality spurred by the queer fluxes of capital.

    All of which circles back to the title of the book. Hayles sets out to explain How We Think. A scholar with such an impeccable track record for pathbreaking analyses of the relationship of the human to technology is setting a high bar for herself with such a goal. In an era in which (in no small part due to her work) it is increasingly unclear who we are, what thinking is or how it happens, it may be an impossible bar to meet. Hayles does an admirable job of trying to inject new paradigms into a narrow academic debate about the future of the humanities. Ultimately, however, there is more resting on the question than the book can account for, not least the livelihoods and futures of her current and future colleagues.
    _____

    R Joshua Scannell is a PhD candidate in sociology at the CUNY Graduate Center. His current research looks at the political economic relations between predictive policing programs and urban informatics systems in New York City. He is the author of Cities: Unauthorized Resistance and Uncertain Sovereignty in the Urban World (Paradigm/Routledge, 2012).

    _____

    Patricia T. Clough. 2008. “The Affective Turn.” Theory, Culture & Society 25(1): 1-22.

    N. Katherine Hayles. 2002. Writing Machines. Cambridge: MIT Press.

    N. Katherine Hayles. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.

    Catherine Malabou. 2008. What Should We Do with Our Brain? New York: Fordham University Press.

    Lev Manovich. 2001. The Language of New Media. Cambridge: MIT Press.

    Lev Manovich. 2013. Software Takes Command. London: Bloomsbury.

    Randy Martin. 2015. Knowledge LTD: Toward a Social Logic of the Derivative. Philadelphia: Temple University Press.

    Chandra Mohanty. 2003. Feminism Without Borders: Decolonizing Theory, Practicing Solidarity. Durham: Duke University Press.

    Trebor Scholz, ed. 2012. Digital Labor: The Internet as Playground and Factory. New York: Routledge.

    Bernard Stiegler. 1998. Technics and Time, 1: The Fault of Epimetheus. Stanford: Stanford University Press.

    Kara Van Cleaf. 2015. “Of Woman Born to Mommy Blogged: The Journey from the Personal as Political to the Personal as Commodity.” Women’s Studies Quarterly 43(3/4): 247-265.
