    David Gerard — Creationism on the Blockchain (review of George Gilder, Life After Google)

    a review of George Gilder, Life After Google: The Fall of Big Data and the Rise of the Blockchain Economy (Regnery, 2018)

    by David Gerard

    George Gilder is most famous as a conservative author and speechwriter. He also knows his stuff about technology, and has a few things to say.

    But what he has to say about blockchain in his book Life After Google is rambling, ill-connected and unconvincing — and falls prey to the fixed points in his thinking.

    Gilder predicts that the Google and Silicon Valley approach — big data, machine learning, artificial intelligence, not charging users per transaction — is failing to scale, and will collapse under its own contradictions.

    The Silicon Valley giants will be replaced by a world built around cryptocurrency, blockchains, sound money … and the obsolescence of philosophical materialism — the theory that thought and consciousness need only physical reality. That last one turns out to be Gilder’s main point.

    At his best, as in his 1990 book Life After Television, Gilder explains consequences following from historical materialism — Marx and Engels’ theory that historical events emerge from economic developments and changes to the mode of production — to a conservative readership enamoured with the obsolete Great Man theory of history.

    (That said, Gilder sure does love his Great Men. Men specifically.)

    Life After Google purports to be about material forces that follow directly from technology. Gilder then mixes in his religious beliefs as, literally, claims about mathematics.

    Gilder has a vastly better understanding of technology than most pop science writers. If Gilder talks tech, you should listen. He did a heck of a lot of work on getting out there and talking to experts for this book.

    But Gilder never quite makes his case that blockchains are the solutions to the problems he presents — he just presents the existence of blockchains, then talks as if they’ll obviously solve everything.

    Blockchains promise Gilder comfort in certainty: “The new era will move beyond Markov chains of disconnected probabilistic states to blockchain hashes of history and futurity, trust and truth,” apparently.

    The book was recommended to me by a conservative friend, who sent me a link to an interview with Gilder on the Hoover Institution’s Uncommon Knowledge podcast. My first thought was “another sad victim of blockchain white papers.” You see this a lot — people tremendously excited by blockchain’s fabulous promises, with no idea that none of this stuff works or can work.

    Gilder’s particular errors are more interesting. And — given his real technical expertise — less forgivable.

    Despite its many structural issues — the book seems to have been left in dire need of proper editing — Life After Google was a hit with conservatives. Peter Thiel is a noteworthy fan. So we may need to pay attention. Fortunately, I’ve read it so you don’t have to.

    About the Author

    Gilder is fêted in conservative circles. His 1981 book Wealth and Poverty was a favourite of supply-side economics proponents in the Reagan era. He owned conservative magazine The American Spectator from 2000 to 2002.

    Gilder is frequently claimed to have been Ronald Reagan’s favourite living author — mainly in his own publicity: “According to a study of presidential speeches, Mr. Gilder was President Reagan’s most frequently quoted living author.”

    I tried tracking down this claim — and all citations I could find trace back to just one article: “The Gilder Effect” by Larissa MacFarquhar, in The New Yorker, 29 May 2000.

    The claim is one sentence in passing: “It is no accident that Gilder — scourge of feminists, unrepentant supply-sider, and now, at sixty, a technology prophet — was the living author Reagan most often quoted.” The claim isn’t substantiated further in the New Yorker article — it reads like the journalist was told this and just put it in for colour.

    Gilder despises feminism, and has described himself as “America’s number-one antifeminist.” He has written two books — Sexual Suicide, updated as Men and Marriage, and Naked Nomads — on this topic alone.

    Also, per Gilder, Native American culture collapsed because it’s “a corrupt and unsuccessful culture,” as is Black culture — and not because of, e.g., massive systemic racism.

    Gilder believes the biological theory of evolution is wrong. He co-founded the Discovery Institute in 1990, as an offshoot of the Hudson Institute. The Discovery Institute started out with papers on economic issues, but rapidly pivoted to promoting “intelligent design” — the claim that all living creatures were designed by “a rational agent,” and not evolved through natural processes. It’s a fancy term for creationism.

    Gilder insisted for years that the Discovery Institute’s promotion of intelligent design totally wasn’t religious — even as judges ruled that intelligent design in schools was promotion of religion. Unfortunately for Gilder, we have the smoking gun documents showing that the Discovery Institute was explicitly trying to push religion into schools — the leaked Wedge Strategy document literally says: “Design theory promises to reverse the stifling dominance of the materialist worldview, and to replace it with a science consonant with Christian and theistic convictions.”

    Gilder’s politics are approximately the polar opposite of mine. But the problems I had with Life After Google are problems his fans have also had. The Real Clear Markets review is a typical example — it’s from the conservative media sphere and written by a huge Gilder fan, who is very disappointed at how badly the book makes its case for blockchain.

    Gilder’s still worth taking seriously on tech, because he’s got a past record of insight — particularly his 1990s books Life After Television and Telecosm.

    Life After Television

    Life After Television: The Coming Transformation of Media and American Life is why people take Gilder seriously as a technology pundit. First published in 1990, it was expanded in 1992 and again in 1994.

    The book predicts television’s replacement with computers on networks — the downfall of the top-down system of television broadcasting and the cultural hegemony it implies. “A new age of individualism is coming, and it will bring an eruption of culture unprecedented in human history.” Gilder does pretty well — his 1990 vision of working from home is a snapshot of 2020, complete with your boss on Zoom.

    You could say this was obvious to anyone paying attention — Gilder’s thesis rests on technology that had already shown itself capable of supporting the future he spelt out — but not a lot of people in the mainstream were paying attention, and the industry was in blank denial. Even Wired, a few years later, was mostly still just terribly excited that the Internet was coming at all.

    Life After Television talks way more about the fall of the television industry than the coming future network. In the present decade, it’s best read as a historical record of past visions of the astounding future.

    If you remember the first two or three years of Wired magazine, that’s the world Gilder’s writing from. Gilder mentored Wired and executive editor Kevin Kelly in its first few years, and appeared on the cover of the March 1996 edition. Journalist and author Paulina Borsook detailed Gilder’s involvement in Wired in her classic 2000 book Cyberselfish: A Critical Romp through the Terribly Libertarian Culture of High Tech (see also her earlier article of the same name in Mother Jones), which critiques his politics, including his gender politics, at length, noting that “Gilder worshipped entrepreneurs and inventors and appeared to have found God in a microchip” (132-3) and describing “a phallus worship he has in common with Ayn Rand” (143).

    The only issue I have with Gilder’s cultural predictions in Life After Television is that he doesn’t mention the future network’s negative side-effects — which is a glaring miss in a world where E. M. Forster predicted social media and some of its effects in The Machine Stops in 1909.

    The 1994 edition of Life After Television goes in quite a bit harder than the 1990 edition. The book doesn’t say “Internet,” doesn’t mention the Linux computer operating system — which was already starting to be a game-changer — and only says “worldwide web” in the sense of “the global ganglion of computers and cables, the new worldwide web of glass and light.” (p23) But then there’s the occasional blinder of a paragraph, such as his famous prediction of the iPhone and its descendants:

    Indeed, the most common personal computer of the next decade will be a digital cellular phone. Called personal digital assistants, among many other coinages, they will be as portable as a watch and as personal as a wallet; they will recognise speech and navigate streets, open the door and start the car, collect the mail and the news and the paycheck, connecting to thousands of databases of all kinds. (p20)

    Gilder’s 1996 followup Telecosm is about what unlimited bandwidth would mean. It came just in time for a minor bubble in telecom stocks, because the Internet was just getting popular. Gilder made quite a bit of money in stock-picking, and so did subscribers to his newsletter — everyone’s a financial genius in a bubble. Then that bubble popped, and Gilder and his subscribers lost their shirts. But his main error was just being years early.

    So if Gilder talks tech, he’s worth paying attention to. Is he right, wrong, or just early?

    Gilder, Bitcoin and Gold

    Gilder used to publish through larger generalist publishers. But since around 2000, he’s published through small conservative presses such as Regnery, small conservative think tanks, or his own Discovery Institute. Regnery, the publisher of Life After Google, is functionally a vanity press for the US far right, famous for, among other things, promising to publish a book by US Senator Josh Hawley after Simon & Schuster dropped it due to Hawley’s involvement with the January 6th Capitol insurrection.

    Gilder caught on to Bitcoin around 2014. He told Reason that Bitcoin was “the perfect libertarian solution to the money enigma.”

    In 2015, his monograph The 21st Century Case for Gold: A New Information Theory of Money was published by the American Principles Project — a pro-religious conservative think tank that advocates a gold standard and “hard money.”

    This earlier book uses Bitcoin as a source of reasons that an economy based on gold could work in the 21st century:

    Researches in Bitcoin and other digital currencies have shown that the real source of the value of any money is its authenticity and reliability as a measuring stick of economic activity. A measuring stick cannot be part of what it measures. The theorists of Bitcoin explicitly tied its value to the passage of time, which proceeds relentlessly beyond the reach of central banks.

    Gilder drops ideas and catch-phrases from The 21st Century Case for Gold all through Life After Google without explaining himself — he just seems to assume you’re fully up on the Gilder Cinematic Universe. An editor should have caught this — a book needs to work as a stand-alone.

    Life After Google’s Theses

    The theses of Life After Google are:

    • Google and Silicon Valley’s hegemony is bad.
    • Google and Silicon Valley do capitalism wrong, and this is why they will collapse from their internal contradictions.
    • Blockchain will solve the problems with Silicon Valley.
    • Artificial intelligence is impossible, because Gödel, Turing and Shannon proved mathematically that creativity cannot result without human consciousness that comes from God.

    This last claim is the real point of the book. Gilder affirmed that this was the book’s point in an interview with WND.

    I should note, by the way, that Gödel, Turing and Shannon proved nothing of the sort. Gilder claims repeatedly that they and other mathematicians did, however.

    Marxism for Billionaires

    Gilder’s objections to Silicon Valley were reasonably mainstream and obvious by 2018. They don’t go much beyond what Clifford Stoll said in Silicon Snake Oil in 1995. And Stoll was speaking to his fellow insiders. (Gilder cites Stoll, though he calls him “Ira Stoll.”) But Gilder finds the points still worth making to his conservative audience, as in this early 2018 Forbes interview:

    A lot of people have an incredible longing to reduce human intelligence to some measurable crystallization that can be grasped, calculated, projected and mechanized. I think this is a different dimension of the kind of Silicon Valley delusion that I describe in my upcoming book.

    Gilder’s scepticism of Silicon Valley is quite reasonable … though he describes Silicon Valley as having adopted “what can best be described as a neo-Marxist political ideology and technological vision.”

    There is no thing, no school of thought, that is properly denoted “neo-Marxism.” In the wild, it’s usually a catch-all for everything the speaker doesn’t like. It’s a boo-word.

    Gilder probably realises that it comes across as inane to label the ridiculously successful billionaire and near-trillionaire capitalists of the present day as any form of “Marxist.” He attempts to justify his usage:

    Marx’s essential tenet was that in the future, the key problem of economics would become not production amid scarcity but redistribution of abundance.

    That’s not really regarded as the key defining point of Marxism by anyone else anywhere. (Maybe Elon Musk, when he’s tweeting words he hasn’t looked up.) I expect the libertarian post-scarcity transhumanists of the Bay Area, heavily funded by Gilder’s friend Peter Thiel, would be disconcerted too.

    “Neo-Marxism” doesn’t rate further mention in the book — though Gilder does use the term in the Uncommon Knowledge podcast interview. Y’know, there’s red-baiting to get in.

    So — Silicon Valley’s “neo-marxism” sucks. “It is time for a new information architecture for a globally distributed economy. Fortunately, it is on its way.” Can you guess what it is?

    You’re Doing Capitalism Wrong

    Did you know that Isaac Newton was the first Austrian economist? I didn’t. (I still don’t.)

    Gilder doesn’t say this outright. He does speak of Newton’s work in physics, as a “system of the world,” a phrase he confesses to having lifted from Neal Stephenson.

    But Gilder is most interested in Newton’s work as Master of the Mint — “Newton’s biographers typically underestimate his achievement in establishing the information theory of money on a firm foundation.”

    There is no such thing as “the information theory of money” — this is a Gilder coinage from his 2015 book The 21st Century Case for Gold.

    Gilder’s economic ideas aren’t quite Austrian economics, but he’s fond of their jargon, and remains a huge fan of gold:

    The failure of his alchemy gave him — and the world — precious knowledge that no rival state or private bank, wielding whatever philosopher’s stone, would succeed in making a better money. For two hundred years, beginning with Newton’s appointment to the Royal Mint in 1696, the pound, based on the chemical irreversibility of gold, was a stable and reliable monetary Polaris.

    I’m pretty sure this is not how it happened, and that the ascendancy of Great Britain’s pound sterling had everything to do with it being backed by a world-spanning empire, and not any other factor. But Gilder goes one better:

    Fortunately the lineaments of a new system of the world have emerged. It could be said to have been born in early September 1930, when a gold-based Reichsmark was beginning to subdue the gales of hyperinflation that had ravaged Germany since the mid-1920s.

    I am unconvinced that this quite explains Germany in the 1930s. The name of an obvious and well-known political figure, who pretty much everyone else considers quite important in discussing Germany in the 1930s, is not mentioned in this book.

    The rest of the chapter is a puréed slurry of physics, some actual information theory, a lot of alleged information theory, and Austrian economics jargon, giving the impression that these are all the same thing as far as Gilder is concerned.

    Gilder describes what he thinks is Google’s “System of the World” — “The Google theory of knowledge, nicknamed ‘big data,’ is as radical as Newton’s and as intimidating as Newton’s was liberating.” There’s an “AI priesthood” too.

    A lot of people were concerned early on about Google-like data sponges. Here’s Gilder on the forces at play:

    Google’s idea of progress stems from its technological vision. Newton and his fellows, inspired by their Judeo-Christian world view, unleashed a theory of progress with human creativity and free will at its core. Google must demur.

    … Finally, Google proposes, and must propose, an economic standard, a theory of money and value, of transactions and the information they convey, radically opposed to what Newton wrought by giving the world a reliable gold standard.

    So Google’s failures include not proposing a gold standard, or perhaps the opposite.

    Open source software is also part of this evil Silicon Valley plot — the very concept of open source. Because you don’t pay for each copy. Google is evil for participating in “a cult of the commons (rooted in ‘open source’ software)”.

    I can’t find anywhere that Gilder has commented on Richard M. Stallman’s promotion of Free Software, of which “open source” was a business-friendly politics-washed rebranding — but I expect that if he had, the explosion would have been visible from space.

    Gilder’s real problem with Google is how the company conducts its capitalism — how it applies creativity to the goal of actually making money. He seems to consider the successful billionaires of our age “neo-Marxist” because they don’t do capitalism the way he thinks they should.

    I’m reminded of Bitcoin Austrians — Saifedean Ammous in The Bitcoin Standard is a good example — who argue with the behaviour of the real-life markets, when said markets are so rude as not to follow the script in their heads. Bitcoin maximalists regard Bitcoin as qualitatively unique, unable to be treated in any way like the hodgepodge of other things called “cryptos,” and a separate market of its own.

    But the real-life crypto markets treat this as all one big pile of stuff, and trade it all on much the same basis. The market does not care about your ideology, only its own.

    Gilder mixes up his issues with the Silicon Valley ideology — the Californian Ideology, or cyberlibertarianism, as it’s variously termed in academia — with a visceral hatred of capitalists who don’t do capitalism his way. He seems to despise the capitalists who don’t do it his way more than he despises people who don’t do capitalism at all.

    (Gilder was co-author of the 1994 document “Magna Carta for the Knowledge Age” that spurred Langdon Winner to come up with the term “cyberlibertarianism” in the first place.)

    Burning Man is bad because it’s a “commons cult” too. Gilder seems to be partially mapping out the Californian Ideology from the other side.

    Gilder is outraged by Google’s lack of attention to security, in multiple senses of the word — customer security, software security, military security. Blockchain will fix all of this — somehow. It just does, okay?

    Ads are apparently dying. Google runs on ads — but they’re on their way out. People looking to buy things search on Amazon itself first, then purchase things for money — in the proper businesslike manner.

    Gilder doesn’t mention the sizable share of Amazon’s 2018 income that came from sales of advertising on its own platform. Nor does Gilder mention that Amazon’s entire general store business, which he approves of, posted huge losses in 2018, and was subsidised by Amazon’s cash-positive business line, the Amazon Web Services computing cloud.

    Gilder visits Google’s data centre in The Dalles, Oregon. He notes that Google embodies Sun Microsystems’ old slogan “The Network is the Computer,” coined by John Gage of Sun in 1984 — though Gilder attributes this insight to Eric Schmidt, later of Google, based on an email that Schmidt sent Gilder when he was at Sun in 1993.

    All successful technologies develop on an S-curve, a sigmoid function. They take off, rise in what looks like exponential growth … and then they level off. This is normal and expected. Gilder knows this. Correctly calling the levelling-off stage is good and useful tech punditry.
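
    For reference, the canonical S-curve is the logistic function (a generic textbook form, not anything Gilder writes down), where L is the ceiling, k the growth rate and t_0 the midpoint:

        \[
        f(t) = \frac{L}{1 + e^{-k(t - t_0)}}
        \]

    Well before t_0 the curve looks exponential; past t_0 it flattens out toward L, which is the levelling-off a pundit needs to call correctly.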

    Gilder notes the siren call temptations of having vastly more computing power than anyone else — then claims that Google will therefore surely fail. Nothing lasts forever; but Gilder doesn’t make the case for his claimed reasons.

    Gilder details Google’s scaling problems at length — but at no point addresses blockchains’ scaling problems: a blockchain open to all participants can’t scale and stay fast and secure (the “blockchain trilemma”). I have no idea how he missed this one. If he could see that Google has scaling problems, how could he not even mention that public blockchains have scaling problems?

    Gilder has the technical knowledge to be able to understand this is a key question, ask it and answer it. But he just doesn’t.

    How would a blockchain system do the jobs presently done by the large companies he’s talking about? What makes Amazon good when Google is bad? The mere act of selling goods? Gilder resorts entirely to extrapolation from axioms, and never bothers with the step where you’d expect him to compare his results to the real world. Why would any of this work?

    Gilder is fascinated by the use of Markov chains to statistically predict the next element of a series: “By every measure, the most widespread, immense, and influential of Markov chains today is Google’s foundational algorithm, PageRank, which encompasses the petabyte reaches of the entire World Wide Web.”
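
    For readers who haven’t met one: a first-order Markov chain just counts which element tends to follow which, then predicts accordingly. A toy sketch in Python (purely illustrative, nothing to do with Google’s actual code; PageRank treats the web as a Markov chain over pages, at vastly larger scale):

        # Toy first-order Markov chain: count transitions in a sequence,
        # then predict the most likely next element given the current one.
        from collections import Counter, defaultdict

        def fit_markov(sequence):
            counts = defaultdict(Counter)
            for current, nxt in zip(sequence, sequence[1:]):
                counts[current][nxt] += 1
            return counts

        def predict_next(counts, current):
            # Most frequent successor of `current`, or None if unseen.
            if current not in counts:
                return None
            return counts[current].most_common(1)[0][0]

        chain = fit_markov("the cat sat on the mat the cat sat down".split())
        print(predict_next(chain, "the"))   # "cat" -- its most frequent successor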

    Gilder interviews Robert Mercer — the billionaire whose Mercer Family Foundation helped bankroll Trump, Bannon, Brexit, and those parts of the alt-right that Peter Thiel didn’t fund.

    Mercer started as a computer scientist. He made his money on Markov-related algorithms for financial trading — automating tiny trades that made no human sense, only statistical sense.

    This offends Gilder’s sensibilities:

    This is the financial counterpart of Markov models at Google translating languages with no knowledge of them. Believing as I do in the centrality of knowledge and learning in capitalism, I found this fact of life and leverage absurd. If no new knowledge was generated, no real wealth was created. As Peter Drucker said, ‘It is less important to do things right than to do the right things.’

    Gilder is faced with a stupendously successful man, whose ideologies he largely concurs with, and who’s won hugely at capitalism — “Mercer and his consort of superstar scholars have, mutatis mutandis, excelled everyone else in the history of finance” — but in a way that is jarringly at odds with his own deeply-held beliefs.

    Gilder believes Mercer’s system, like Google’s, “is based on big data that will face diminishing returns. It is founded on frequencies of trading that fail to correspond to any real economic activity.”

    Gilder holds that it’s significant that Mercer’s model can’t last forever. But this is hardly a revelation — nothing lasts forever, and especially not an edge in the market. It’s the curse of hedge funds that any process that exploits inefficiencies will run out of other people’s inefficiencies in a few years, as the rest of the market catches on. Gilder doesn’t make the case that Mercer’s trick will fail any faster than it would be expected to just by being an edge in a market.

    Ten Laws of the Cryptocosm

    Chapter 5 is “Ten Laws of the Cryptocosm”. These aren’t from anywhere else — Gilder just made them up for this book.

    “Cryptocosm” is a variant on Gilder’s earlier coinage “Telecosm,” the title of his 1996 book.

    Blockchain spectators should be able to spot the magical foreshadowing term in rule four:

    The fourth rule is “Nothing is free.” This rule is fundamental to human dignity and worth. Capitalism requires companies to serve their customers and to accept their proof of work, which is money. Banishing money, companies devalue their customers.

    Rules six and nine are straight out of The Bitcoin Standard:

    The sixth rule: ‘Stable money endows humans with dignity and control.’ Stable money reflects the scarcity of time. Without stable money, an economy is governed only by time and power.

    The ninth rule is ‘Private keys are held by individual human beings, not by governments or Google.’ … Ownership of private keys distributes power.

    In a later chapter, Gilder critiques The Bitcoin Standard, which he broadly approves of.

    Gödel’s Incompetence Theorem

    Purveyors of pseudoscience frequently drop the word “quantum” or “chaos theory” to back their woo-mongering in areas that aren’t physics or mathematics. There’s a strain of doing the same thing with Gödel’s incompleteness theorems to make remarkable claims in areas that aren’t maths.

    What Kurt Gödel actually said was that if you use logic to build your mathematical theorems, and your system is powerful enough to express basic arithmetic, you have a simple choice: either your system is incomplete, meaning you can’t prove every statement that is true, and you can’t tell from inside the system which of the unproven statements are true — or you introduce internal contradictions. So you can have holes in your maths, or you can be wrong.
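
    For reference, the standard statement of the two theorems, in the usual textbook formulation rather than anything Gilder offers:

        \[
        \begin{aligned}
        &\text{If } F \text{ is a consistent, effectively axiomatized formal system that can express elementary arithmetic, then}\\
        &\quad \text{(1) there is a sentence } G_F \text{ such that } F \nvdash G_F \text{ and } F \nvdash \neg G_F \text{; and}\\
        &\quad \text{(2) } F \nvdash \mathrm{Con}(F) \text{, i.e. } F \text{ cannot prove its own consistency.}
        \end{aligned}
        \]

    Every qualifier in that statement (consistent, effectively axiomatized, expresses arithmetic) is load-bearing.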

    Gödel’s incompleteness theorems had a huge impact on the philosophy of mathematics. They seriously affected Bertrand Russell’s work on the logicism programme, to model all of mathematics as formal logic, and caused issues for Hilbert’s second problem, which sought a proof that arithmetic is consistent — that is, free of any internal contradictions.

    It’s important to note that Gödel’s theorems only apply in a particular technical sense, to particular very specific mathematical constructs. All the words are mathematical jargon, and not English.

    But humans have never been able to resist a good metaphor — so, as with quantum physics, chaos theory and Turing completeness, people seized upon “Gödel” and ran off in all directions.

    One particular fascination was what the theorems meant for the idea of philosophical materialism — whether interesting creatures like humans could really be completely explained by ordinary mathematics-based physics, or if there was something more in there. Gödel himself essayed haltingly in the direction of saying he thought there might be more than physics there — though he was slightly constrained by knowing what the mathematics actually said.

    Compare the metaphor abuse surrounding blockchains. Deploy a mundane data structure and a proof-of-work system to determine who adds the next bit of data, and thus provide technically-defined, constrained and limited versions of “trustlessness,” “irreversibility” and “decentralisation.” People saw these words, and attributed their favoured shade of meaning of the plain-language words to anything even roughly descended from the mundane data structure — or that claimed it would be descended from it some time in the future.
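
    To make the “mundane data structure” concrete, here is a minimal sketch (not Bitcoin’s actual code or parameters): blocks chained by hashes, with a toy proof-of-work deciding who appends the next one.

        # Minimal hash-linked chain with toy proof-of-work (illustrative only).
        import hashlib
        import json

        def block_hash(block):
            return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

        def mine_block(prev_hash, data, difficulty=4):
            # Grind nonces until the block hash starts with `difficulty` zero hex digits.
            nonce = 0
            while True:
                block = {"prev": prev_hash, "data": data, "nonce": nonce}
                if block_hash(block).startswith("0" * difficulty):
                    return block
                nonce += 1

        genesis = mine_block("0" * 64, "hello")
        second = mine_block(block_hash(genesis), "world")
        # Tampering with an earlier block changes its hash and breaks every later
        # "prev" link -- that is the entire technical content of "irreversibility" here.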

    Gilder takes Gödel’s incompleteness theorems, adds Claude Shannon on information theory, and mixes in his own religious views. He asserts that the mathematics of Shannon’s information theory and Gödel’s incompleteness theorems prove that creativity can only come from a human consciousness, created by God. Therefore, artificial intelligence is impossible.

    This startling conclusion isn’t generally accepted. Torkel Franzén’s excellent Gödel’s Theorem: An Incomplete Guide to Its Use and Abuse, chapter 4, spends several pages bludgeoning variations on this dumb and bad idea to death:

    there is no such thing as the formally defined language, the axioms, and the rules of inference of “human thought,” and so it makes no sense to speak of applying the incompleteness theorem to human thought.

    If something is not literally a mathematical “formal system,” Gödel doesn’t apply to it.

    The free Google searches and the fiat currencies are side issues — what Gilder really loathes is the very concept of artificial intelligence. It offends him.

    Gilder leans heavily on the ideas of Gregory Chaitin — one of the few mathematicians with a track record of achievement in information theory who also buys into the idea that Gödel’s incompleteness theorem may disprove philosophical materialism. Of the few people convinced by Chaitin’s arguments, most happen to have matching religious beliefs.

    It’s one thing to evaluate technologies according to an ethical framework informed by your religion. It’s quite another to make technological pronouncements directly from your religious views, and to claim mathematical backing for your religious views.

    Your Plastic Pal Who’s Fun to Be With

    Chapter 7 talks about artificial intelligence, and throwing hardware at the problem of machine learning. But it’s really about Gilder’s loathing of the notion of a general artificial intelligence that would be meaningfully comparable to a human being.

    The term “artificial intelligence” has never denoted any particular technology — it’s the compelling science-fictional vision of your plastic pal who’s fun to be with, especially when he’s your unpaid employee. This image has been used through the past few decades to market a wide range of systems that do a small amount of the work a human might otherwise do.

    But throughout Life After Google, Gilder conflates the hypothetical concept of human-equivalent general artificial intelligence with the statistical machine learning products that are presently marketed as “artificial intelligence.”

    Gilder’s next book, Gaming AI: Why AI Can’t Think but Can Transform Jobs (Discovery Institute, 2020), confuses the two somewhat less — but still hammers on his completely wrong ideas about Gödel.

    Gilder ends the chapter with three paragraphs setting out the book’s core thesis:

    The current generation in Silicon Valley has yet to come to terms with the findings of von Neumann and Gödel early in the last century or with the breakthroughs in information theory of Claude Shannon, Gregory Chaitin, Anton Kolmogorov, and John R. Pierce. In a series of powerful arguments, Chaitin, the inventor of algorithmic information theory, has translated Gödel into modern terms. When Silicon Valley’s AI theorists push the logic of their case to explosive extremes, they defy the most crucial findings of twentieth-century mathematics and computer science. All logical schemes are incomplete and depend on propositions that they cannot prove. Pushing any logical or mathematical argument to extremes — whether ‘renormalized’ infinities or parallel universe multiplicities — scientists impel it off the cliffs of Gödelian incompleteness.

    Chaitin’s ‘mathematics of creativity’ suggests that in order to push the technology forward it will be necessary to transcend the deterministic mathematical logic that pervades existing computers. Anything deterministic prohibits the very surprises that define information and reflect real creation. Gödel dictates a mathematics of creativity.

    This mathematics will first encounter a major obstacle in the stunning successes of the prevailing system of the world not only in Silicon Valley but also in finance.

    There’s a lot to unpack here. (That’s an academic jargon phrase meaning “yikes!”) But fundamentally, Gilder believes that Gödel’s incompleteness theorems mean that artificial intelligence can’t come up with true creativity. Because Gilder is a creationist.

    The only place I can find Chaitin using a phrase akin to “mathematics of creativity” is in his 2012 book of intelligent design advocacy, Proving Darwin: Making Biology Mathematical, which Gilder cites. Chaitin writes:

    To repeat: Life is plastic, creative! How can we build this out of static, perfect mathematics? We shall use postmodern math, the mathematics that comes after Gödel, 1931, and Turing, 1936, open not closed math, the math of creativity, in fact.

    Whenever you see Gilder talk about “information theory,” remember that he’s using the special creationist sense of the term — a claim that biological complexity without God pushing it along would require new information being added, and that this is impossible.

    Real information theory doesn’t say anything of the sort — the creationist version is a made-up pseudotheory, developed at the Discovery Institute. It’s the abuse of a scientific metaphor to claim that a loose analogy from an unrelated field is a solid scientific claim.

    Gilder’s doing the thing that bitcoiners, anarchocapitalists and neoreactionaries do — where they ask a lot of the right questions, but come up with answers that are completely on crack, based on abuse of theories that they didn’t bother understanding.

    Chapter 9 is about libertarian transhumanists of the LessWrong tendency, at the 2017 Future Of Life conference on hypothetical future artificial intelligences, hosted by physicist Max Tegmark.

    Eliezer Yudkowsky, the founder of LessWrong, isn’t named or quoted, but the concerns are all reheated Yudkowsky: that a human-equivalent general artificial intelligence will have intelligence but not human values, will rapidly increase its intelligence, and thus its power, vastly beyond human levels, and so will doom us all. Therefore, we must program artificial intelligence to have human values — whatever those are.

    Yudkowsky is not a programmer, but an amateur philosopher. His charity, the Machine Intelligence Research Institute (MIRI), does no programming, and its research outputs are occasional papers in mathematics. Until recently, MIRI was funded by Peter Thiel, but it’s now substantially funded by large Ethereum holders.

    Gilder doesn’t buy Yudkowsky’s AI doomsday theory at all — he firmly believes that artificial intelligence cannot form a mind because, uh, Gödel: “The blind spot of AI is that consciousness does not emerge from thought; it is the source of it.”

    Gilder doesn’t mention that this is because, as a creationist, he believes that true intelligence lies in souls. But he does say “The materialist superstition is a strange growth in an age of information.” So this chapter turns into an exposition of creationist “information theory”:

    This materialist superstition keeps the entire Google generation from understanding mind and creation. Consciousness depends on faith—the ability to act without full knowledge and thus the ability to be surprised and to surprise. A machine by definition lacks consciousness. A machine is part of a determinist order. Lacking surprise or the ability to be surprised, it is self-contained and determined.

    That is: Gilder defines consciousness as whatever it is a machine cannot have, therefore a machine cannot achieve consciousness.

    Real science shows that the universe is a singularity and thus a creation. Creation is an entropic product of a higher consciousness echoed by human consciousness. This higher consciousness, which throughout human history we have found it convenient to call God, endows human creators with the space to originate surprising things.

    You will be unsurprised to hear that “real science” does not say anything like this. But that paragraph is the closest Gilder comes in this book to naming the creationism that drives his outlook.

    The roots of nearly a half-century of frustration reach back to the meeting in Königsberg in 1930, where von Neumann met Gödel and launched the computer age by showing that determinist mathematics could not produce creative consciousness.

    You will be further unsurprised to hear that von Neumann and Gödel never produced a work saying any such thing.

    We’re nine chapters in, a third of the way through the book, and someone from the blockchain world finally shows up — and, indeed, the first appearance of the word “blockchain” in the book at all. Vitalik Buterin, founder of Ethereum and MIRI’s largest individual donor, attends Tegmark’s AI conference: “Buterin succinctly described his company, Ethereum, launched in July 2015, as a ‘blockchain app platform.’”

    The blockchain is “an open, distributed, unhackable ledger devised in 2008 by the unknown person (or perhaps group) known as ‘Satoshi Nakamoto’ to support his cryptocurrency, bitcoin.” This is the closest Gilder comes at any point in the book to saying what a blockchain in fact is.

    Gilder says the AI guys are ignoring the power of blockchain — but they’ll get theirs, oh yes they will:

    Google and its world are looking in the wrong direction. They are actually in jeopardy, not from an all-powerful artificial intelligence, but from a distributed, peer-to-peer revolution supporting human intelligence — the blockchain and new crypto-efflorescence … Google’s security foibles and AI fantasies are unlikely to survive the onslaught of this new generation of cryptocosmic technology.

    Gilder asserts later in the book:

    They see the advance of automation, machine learning, and artificial intelligence as occupying a limited landscape of human dominance and control that ultimately will be exhausted in a robotic universe — Life 3.0. But Charles Sanders Peirce, Kurt Gödel, Alonzo Church, Alan Turing, Emil Post, and Gregory Chaitin disproved this assumption on the most fundamental level of mathematical logic itself.

    These mathematicians still didn’t do any such thing.

    Gilder’s forthcoming book Life after Capitalism (Regnery, 2022), with a 2021 National Review essay as a taster, asserts that his favoured mode of capitalism will reassert itself. Its thesis invokes Gilder’s notions of what he thinks information theory says.

    How Does Blockchain Do All This?

    Gilder has explained the present-day world, and his problems with it. The middle section of the book then goes through several blockchain-related companies and people who catch Gilder’s attention.

    It’s around here that we’d expect Gilder to start explaining what the blockchain is, how it works, and precisely how it will break the Google paradigm of big data, machine learning and artificial intelligence — the way he did when talking about the downfall of television.

    Gilder doesn’t even bother — he just starts talking about bitcoin and blockchains as Google-beaters, and carries through on the assumption that this is understood.

    But he can’t get away with this — he claims to be making a case for the successor to the Google paradigm, a technological case … and he just doesn’t ever do so.

    By the end of this section, Gilder seems to think he’s made his point clear that Google is having trouble scaling up — because they don’t charge a micro-payment for each interaction, or something — therefore various blockchain promises will win.

    The trouble with this syllogism is that the second part doesn’t follow. Gilder presents blockchain projects he thinks have potential — but that’s all. He makes the first case, and just doesn’t make the second.

    Peter Thiel Hates Universities Very Much

    Instead, let’s go to the 1517 Fund — “led by venture capitalist-hackers Danielle Strachman and Mike Gibson and partly financed by Peter Thiel.” Gilder is also a founding partner.

    Gilder is a massive Thiel fan, calling him “the master investor-philosopher Peter Thiel”:

    Thiel is the leading critic of Silicon Valley’s prevailing philosophy of ‘inevitable’ innovation. [Larry] Page, on the other hand, is a machine-learning maximalist who believes that silicon will soon outperform human beings, however you want to define the difference.

    Thiel, in turn, is a fan of Gilder and of Life After Google.

    The 1517 Fund’s name comes from “another historic decentralization” — 31 October 1517 was the day that Martin Luther put up his ninety-five theses on a church door in Wittenberg.

    The 1517 team want to take down the government conspiracy of paperwork university credentials, which ties into the fiat-currency-based system of the world. Peter Thiel offers Thiel Fellowships, where he pays young geniuses not to go to college. Vitalik Buterin, founder of Ethereum, got a Thiel Fellowship.

    1517 also invests in the artificial intelligence stuff that Gilder derided in the previous section, but let’s never mind that.

    The Universidad Francisco Marroquín in Guatemala is a university for Austrian and Chicago School economics. Gilder uses UFM as a launch pad for a rant about US academia, and the 1517 Fund’s “New 95” theses about how much Thiel hates the US university system. Again: they ask some good questions, but their premises are bizarre, and their answers are on crack.

    Fictional Evidence

    Gilder rambles about author Neal Stephenson, who he’s a massive fan of. The MacGuffin of Stephenson’s 1999 novel Cryptonomicon is a cryptographic currency backed by gold. Stephenson’s REAMDE (2011) is set in a Second Life-style virtual world whose currency is based on gold, and which includes something very like Bitcoin mining:

    Like gold standards through most of human history — look it up — T’Rain’s virtual gold standard is an engine of wealth. T’Rain prospers mightily. Even though its money is metafictional, it is in fact more stable than currencies in the real world of floating exchange rates and fiat money.

    Thus, fiction proves Austrian economics correct! Because reality certainly doesn’t — which is why Ludwig von Mises repudiated empirical testing of his monetary theories early on.

    Is There Anything Bitcoin Can’t Do?

    Gilder asserts that “Bitcoin has already fostered thousands of new apps and firms and jobs.” His example is cryptocurrency mining, which is notoriously light on labour requirements. Even as of 2022, the blockchain sector employed 18,000 software developers — or 0.07% of all developers.
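
    For scale, the worldwide developer total those two figures imply (not a number from the book, just the arithmetic):

        \[
        \frac{18{,}000}{0.0007} \approx 2.6 \times 10^{7} \text{ software developers worldwide.}
        \]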

    “Perhaps someone should be building an ark. Or perhaps bitcoin is our ark — a new monetary covenant containing the seeds of a new system of the world.” I wonder why the story of the ark sprang to his mind.

    One chapter is a dialogue, in which Gilder speaks to an imaginary Satoshi Nakamoto, Bitcoin’s pseudonymous creator, about how makework — Bitcoin mining — can possibly create value. “Think of this as a proposed screenplay for a historic docudrama on Satoshi. It is based entirely on recorded posts by Satoshi, interlarded with pleasantries and other expedients characteristic of historical fictions.”

    Gilder fingers cryptographer Nick Szabo as the most likely candidate for Bitcoin’s pseudonymous creator, Satoshi Nakamoto — “the answer to three sophisticated textual searches that found Szabo’s prose statistically more akin to Nakomoto’s than that of any other suspected Satoshista.”

    In the blockchain world, any amazing headline that would turn the world upside-down were it true is unlikely to be true. Gilder has referenced a CoinDesk article, which references research from Aston University’s Centre for Forensic Linguistics.

    I tracked this down to an Aston University press release. The press release does not link to any research outputs — the “study” was an exercise that Jack Grieve at Aston gave his final-year students, then wrote up as a splashy bit of university press-release-ware.

    The press release doesn’t make its case either: “Furthermore, the researchers found that the bitcoin whitepaper was drafted using Latex, an open-source document preparation system. Latex is also used by Szabo for all his publications.” LaTeX is used by most computer scientists anywhere for their publications — but the Bitcoin white paper was written in OpenOffice 2.4, not LaTeX.

    This press release is still routinely used by lazy writers to claim that Szabo is Satoshi, ’cos they heard that linguistic analysis says so. Gilder could have dived an inch below the surface on this remarkable claim, and just didn’t.

    Gilder then spends a chapter on Craig Wright, who — unlike Szabo — claims to be Satoshi. This is based on Andrew O’Hagan’s lengthy biographical piece on Wright, “The Satoshi Affair” for the London Review of Books, reprinted in his book The Secret Life: Three True Stories. This is largely a launch pad for how much better Vitalik Buterin’s ideas are than Wright’s.

    Blockstack

    We’re now into a list of blockchainy companies that Gilder is impressed with. This chapter introduces Muneeb Ali and his blockchain startup, Blockstack, whose pitch is a parallel internet where you own all your data, in some unspecified sense. Sounds great!

    Ali wants a two-layer network: “monolith, the predictable carriers of the blockchain underneath, and metaverse, the inventive and surprising operations of its users above.” So, Ethereum then — a blockchain platform, with applications running on top.

    Gilder recites the press release description of Blockstack and what it can do — i.e., might hypothetically do in the astounding future.

    Under its new name, Stacks, the system is being used as a platform for CityCoins — local currencies on a blockchain — which was started in the 2021 crypto bubble. MiamiCoin notably collapsed in price a few months after its 2021 launch, and the only reason the city didn’t show a massive loss on the cryptocurrency is that Stacks bailed them out on their losses.

    Brendan Eich and Brave

    Brendan Eich is famous in the technical world as one of the key visionaries behind the Netscape web browser, the Mozilla Foundation, and the Firefox web browser, and as the inventor of the JavaScript programming language.

    Eich is most famous in the non-technical world for his 2008 donation in support of Proposition 8, which amended the California constitution to ban gay marriage. This donation came to light in 2012, and made international press at the time.

    Techies can get away with believing the most awful things, as long as they stay locked away in their basement — but Eich was made CEO of Mozilla in 2014, and somehow the board thought the donation against gay marriage wouldn’t immediately become 100% of the story.

    One programmer, whose own marriage had been directly messed up by Proposition 8, said he couldn’t in good conscience keep working on Firefox-related projects — and this started a worldwide boycott of Mozilla and Firefox. Eich refused to walk back his donation in any manner — though he did promise not to actively seek to violate California discrimination law in the course of his work at Mozilla, so that’s nice — and quit a few weeks later.

    Eich went off to found Brave, a new web browser that promises to solve the Internet advertising problem using Basic Attention Tokens, a token that promises a decentralised future for paying publishers that is only slightly 100% centralised in all functional respects.

    Gilder uses Eich mostly to launch into a paean to Initial Coin Offerings — specifically, in their rôle as unregistered penny stock offerings. Gilder approves of ICOs bypassing regulation, and doesn’t even mention how the area was suffused with fraud, nor the scarcity of ICOs that delivered on any of their promises. The ICO market collapsed after multiple SEC actions against these blatant securities frauds.

    Gilder also approves of Brave’s promise to combat Google’s advertising monopoly, by, er, replacing Google’s ads with Brave’s own ads.

    Goodbye Digital

    Dan Berninger’s internet phone startup Hello Digital is, or was, an enterprise so insignificant it isn’t in the first twenty companies returned by a Google search on “hello digital”. Gilder loves it.

    Berninger’s startup idea involved end-to-end non-neutral precedence for Hello Digital’s data. And the US’s net neutrality rules apparently preclude this. Berninger sued the FCC to make it possible to set up high-precedence private clearways for Hello Digital’s data on the public Internet.

    This turns out to be Berninger’s suit against the FCC to protest “net neutrality” — on which the Supreme Court denied certiorari in December 2018.

    Somehow, Skype and many other applications managed enormously successful voice-over-internet a decade previously on a data-neutral Internet. But these other systems “fail to take advantage of the spontaneous convergence of interests on particular websites. They provide no additional sources of revenue for Web pages with independent content. And they fail to add the magic of high-definition voice.” Apparently, all of this requires proprietary clearways for such data on the public network? Huge if true.

    Gilder brings up 5G mobile Internet. I think it’s supposed to be in Google’s interests? Therefore it must be bad. Nothing blockchainy here, this chapter’s just “Google bad, regulation bad”.

    The Empire Strikes Back

    Old world big money guys — Jamie Dimon, Warren Buffett, Charlie Munger, Paul Krugman — say Bitcoin is trash. Gilder maintains that this is good news for Bitcoin.

    Blockchain fans and critics — and nobody else — will have seen Kai Stinchcombe’s blog post of December 2017, “Ten years in, nobody has come up with a use for blockchain.” Stinchcombe points out that “after years of tireless effort and billions of dollars invested, nobody has actually come up with a use for the blockchain — besides currency speculation and illegal transactions.” It’s a good post, and you should read it.

    Gilder spends an entire chapter on this blog post. Some guy who wrote a blog post is a mid-level boss in this book.

    Gilder concedes that Stinchcombe’s points are hard to argue with. But Stinchcombe merely being, you know, right, is irrelevant — because, astounding future!

    Stinchcombe writes from the womb of the incumbent financial establishment, which has recently crippled world capitalism with a ten-year global recession.

    One day a bitcoiner will come up with an argument that isn’t “but what about those other guys” — but today is not that day.

    At Last, We Escape

    We’ve made it to the last chapter. Gilder summarises how great the blockchain future will be:

    The revolution in cryptography has caused a great unbundling of the roles of money, promising to reverse the doldrums of the Google Age, which has been an epoch of bundling together, aggregating, all the digital assets of the world.

    Gilder confidently asserts ongoing present-day processes that are not, here in tawdry reality, happening:

    Companies are abandoning hierarchy and pursuing heterarchy because, as the Tapscotts put it, ‘blockchain technology offers a credible and effective means not only of cutting out intermediaries, but also of radically lowering transaction costs, turning firms into networks, distributing economic power, and enabling both wealth creation and a more prosperous future.’

    If you read Don and Alex Tapscott’s Blockchain Revolution (Random House, 2016), you’ll see that they too fail to demonstrate any of these claims in the existing present rather than the astounding future. Instead, the Tapscotts spend several hundred pages talking about how great it’s all going to be potentially, and only note blockchain’s severe technical limitations in passing at the very end of the book.

    We finish with some stirring blockchain triumphalism:

    Most important, the crypto movement led by bitcoin has reasserted the principle of scarcity, unveiling the fallacy of the prodigal free goods and free money of the Google era. Made obsolete will be all the lavish Google prodigies given away and Google mines and minuses promoted as ads, as well as Google Minds fantasizing superminds in conscious machines.

    Bitcoin promoters routinely tout “scarcity” as a key advantage of their Internet magic beans — ignoring, as Gilder consistently does, that anyone can create a whole new magical Internet money by cut’n’paste, and they do. Austrian economics advocates had noted that issue ever since it started happening with altcoins in the early 2010s.

    The Google era is coming to an end because Google tries to cheat the constraints of economic scarcity and security by making its goods and services free. Google’s Free World is a way of brazenly defying the centrality of time in economics and reaching beyond the wallets of its customers directly to seize their time.

    The only way in which the Google era has been shown to be “coming to an end” is that Google’s technologies are reaching the tops of their S-curves. This absolutely counts as an end point as Gilder describes technological innovation, and he might even be right that Google’s era is ending — but his claimed reasons have just been asserted, and not at all shown.

    By reestablishing the connections between computation, finance, and AI on the inexorable metrics of time and space, the great unbundling of the blockchain movement can restore economic reality.

    The word “can” is doing all the work there. The blockchain was nine years old at this book’s publication, and is thirteen now, and there’s a visible lack of progress on this front.

    Everything will apparently decentralise naturally, because at last it can:

    Disaggregated will be all the GAFAM (Google, Apple, Facebook, Amazon, Microsoft conglomerates) — the clouds of concentrated computing and commerce.

    The trouble with this claim is that the whole crypto and blockchain middleman infrastructure is full of monopolies, rentiers and central points of failure — because centralisation is always more economically efficient than decentralisation.

    We see recentralisation over and over. Bitcoin mining recentralised by 2014. Ethereum mining was always even more centralised than Bitcoin mining, and almost all practical use of Ethereum has long been dependent on ConsenSys’ proprietary Infura network. “Decentralisation” has always been a legal excuse to say “can’t sue me, bro,” and not any sort of operational reality.

    Gilder concludes:

    The final test is whether the new regime serves the human mind and consciousness. The measure of all artificial intelligence is the human mind. It is low-power, distributed globally, low-latency in proximity to its environment, inexorably bounded in time and space, and creative in the image of its creator.

    Gilder wants you to know that he really, really hates the idea of artificial intelligence, for religious reasons.

    Epilogue: The New System of the World

    Gilder tries virtual reality goggles and likes them: “Virtual reality is the opposite of artificial intelligence, which tries to enhance learning by machines. Virtual reality asserts the primacy of mind over matter. It is founded on the singularity of human minds rather than a spurious singularity of machines.”

    There’s a bit of murky restating of his theses: “The opposite of memoryless Markov chains is blockchains.” I’m unconvinced this sentence is any less meaningless with the entire book as context.

    And Another Thing!

    “Some Terms of Art and Information for Life after Google” at the end of the book isn’t a glossary — it’s a section for idiosyncratic assertions without justification that Gilder couldn’t fit in elsewhere, e.g.:

    Chaitin’s Law: Gregory Chaitin, inventor of algorithmic information theory, ordains that you cannot use static, eternal, perfect mathematics to model dynamic creative life. Determinist math traps the mathematician in a mechanical process that cannot yield innovation or surprise, learning or life. You need to transcend the Newtonian mathematics of physics and adopt post-modern mathematics — the mathematics that follows Gödel (1931) and Turing (1936), the mathematics of creativity.

    There doesn’t appear to be such a thing as “Chaitin’s Law” — all Google hits on the term are quotes of Gilder’s book.

    Gilder also uses this section for claims that only make sense if you already buy into the jargon of goldbug economics that has failed in the real world:

    Economic growth: Learning tested by falsifiability or possible bankruptcy. This understanding of economic growth follows from Karl Popper’s insight that a scientific proposition must be framed in terms that are falsifiable or refutable. Government guarantees prevent learning and thus thwart economic growth.

    Summary

    Gilder is sharp as a tack in interviews. I can only hope to be that sharp when I’m seventy-nine. But Life After Google fails in important ways — ways that an actual editorial pass from Regnery might have remedied. Gilder should have known better, in so many directions, and so should Regnery.

    Gilder keeps making technological and mathematical claims based directly on his religious beliefs. This does none of his other ideas any favours.

    Gilder is sincere. (Apart from that time he was busted lying about intelligent design not being intended to promote religion.) I think Gilder really does believe that Gödel’s incompleteness theorems and Shannon’s information theory, as further developed by Chaitin, mathematically prove that intelligence requires the hand of God. He just doesn’t show it, and neither has anyone else — particularly not any of the names he drops.

    This book will not inform you about the future of the blockchain. It’s worse than the typical ill-informed blockchain advocacy text, because Gilder’s track record means we expect more of him. Gilder misses key points he has no excuse for missing.

    The book may be of use in its rôle as a window into some of what’s informing the technically incoherent blockchain dreams of billionaires. But it’s a slog.

    Those interested in blockchain — for or against — aren’t going to get anything useful from this book. Bitcoin advocates may see new avenues and memes for evangelism. Gilder fans appear disappointed so far.

    _____

    David Gerard is a writer, technologist, and leading critic of bitcoin and blockchain. He is the author of Attack of the 50-Foot Blockchain: Bitcoin, Blockchain, Ethereum and Smart Contracts (2017) and Libra Shrugged: How Facebook Tried to Take Over the Money (2020), and blogs at https://davidgerard.co.uk/blockchain/.

    Back to the essay

  • Rizvana Bradley — The Vicissitudes of Touch: Annotations on the Haptic

    Rizvana Bradley — The Vicissitudes of Touch: Annotations on the Haptic

    Rizvana Bradley

    The late queer theorist Eve Kosofsky Sedgwick is known for her tenacious commitment to the indeterminate possibilities that nondualism might offer sustained inquiries into minor aesthetics, politics, and performance. In the introduction to Touching Feeling: Affect, Pedagogy, Performativity, Sedgwick turns to touch and texture as particularly generative heuristic sites for opening the book’s avowed project, namely the exploration of “promising tools and techniques for nondualist thought and pedagogy.”[1] Moving through psychoanalysis, queer theory, and sexuality studies, the text probes entanglements of intimacy and emotion, desire and eroticism, that animate experience and draw social life into the myriad folds of material and nonlinguistic relations. As Lauren Berlant asserts of Sedgwick’s text, “the performativity of knowledge beyond speech – aesthetic, bodily, affective – is its real topic.”[2]

    One of Sedgwick’s most important and enduring legacies is a radically queer heuristic that endeavors to make theorizable the imperceptible and obscure relationships between affect, pedagogy, and performativity, without reproducing the limits and burdens of epistemology (even antiessentialist epistemology), with its “demand on essential truth.”[3] For Sedgwick, texture and touch offer potential instances of sidestepping or evading the foreclosures of structure and its attendant calcification of subject-object relations, a pivot towards antinormative pedagogies of reading and interpretation. Following Henry James, Sedgwick suggests that “to perceive texture is always, immediately, and de facto to be immersed in a field of active narrative hypothesizing, testing, and re-understanding of how physical properties act and are acted upon over time,” to become engaged in a series of speculative departures rather than analytical arrivals.[4] Similarly, Sedgwick finds in the sense of touch a perceptual experience that “makes nonsense out of any dualistic understanding of agency and passivity.”[5] Particularly relevant for our purposes is Sedgwick’s turn to the registers of difference between texture and texxture as a guide for thinking about forms of desire, perception, and interpretation that exceed normative modalities of belonging in, being with, and making sense of the world.

    Teasing out the implications of Renu Bora’s taxonomy of textural difference, Sedgwick tells us that

    Bora notes that ‘smoothness is both a type of texture and texture’s other.’ His essay makes a very useful distinction between two kinds, or senses, of texture, which he labels ‘texture’ with one x and ‘texxture’ with two x’s. Texxture is the kind of texture that is dense with offered information about how, substantively, historically, materially, it came into being. A brick or metal-work pot that still bears the scars and uneven sheen of its making would exemplify texxture in this sense. But there is also the texture – one x this time – that defiantly or even invisibly blocks or refuses such information; there is texture usually glossy if not positively tacky, that insists instead on the polarity between substance and surface, texture that signifies the willed erasure of its history.[6]

    Though one might be tempted to singularly assign to texture’s “manufactured or overhighlighted surface” the properties and pitfalls of “psychoanalytic and commodity fetishism,” in fact,

    the narrative-performative density of the other kind of texxture – its ineffaceable historicity – also becomes susceptible to a kind of fetish-value. An example of the latter might occur where the question is one of exotism, of the palpable and highly acquirable textural record of the cheap, precious work of many foreign hands in the light of many damaged foreign eyes. [7]

    Paradoxically, it is precisely the failure of texture to erase the internal historicity that would appear to be self-evidently registered on the surface of texxture, which allows Sedgwick to effectively grant the former an elusive depth, declaring that, “however high the gloss, there is no such thing as textural lack.”[8] Meanwhile, texxture’s presumably inescapable depth seems to recede across the surficial “scars and uneven sheen” that are read as the signatures of its making. For Sedgwick, one of the primary implications of these phenomenological variegations and perplexities is that texture, “in short, comprises an array of perceptual data that includes repetition, but whose degree of organization hovers just below the level of shape or structure…[the] not-yet-differentiated quick from which the performative emerges.”[9] In this way,

    texture seems like a promising level of attention for shifting the emphasis of some interdisciplinary conversations away from the recent fixation on epistemology…by asking new questions about phenomenology and affect, [for what]…texture and affect, touching and feeling…have in common is that…both are irreducibly phenomenological.[10]

    On the one hand, Sedgwick’s turn to texture divulges extra-linguistic affiliations that performatively surprise, facilitating an erotic retrieval of subjective and aesthetic non-mastery that continues to resonate with ongoing critiques of the aesthetic. And yet, while Sedgwick’s assertions about affectivity and touch facilitate an opening for a theoretical re-evaluation of notions of agency, passivity, and self-perception, they are also deeply problematic. For what does phenomenology, which takes the body as our “point of view in the world,”[11] have to say to those who, following Frantz Fanon, have never had a body, but rather its theft, those who have only ever been granted the dissimulation of a body, “sprawled out, distorted, recolored, clad in mourning[?]”[12] What of those whose skin is constantly resurfaced as depthless texxture, a texxture whose surficial inscriptions are read as proxies for the historicity that the over-glossed surface would seek to expunge? In other words, Sedgwick’s ruminations disclose an undeclared, but nevertheless central, conceit that has significant implications for thinking about the bearing of form on ontology: namely that, for Sedgwick, the texturized valences of touch are implicated in, rather than a violent displacement from, the symbolic economy of the human.

    In theorizing touch, might we trouble the presumption that aesthetics, subjectivity, and desire – or more precisely their entwinement – are necessarily embedded within the normative regime of the human? I am interested, in other words, in how Sedgwick’s observations on touch might occasion, even as they displace, a different set of interrelated questions regarding ontological mattering and the fashioning of aesthetic subjectivity. Calvin Warren’s assertion that “[q]ueer theory’s ‘closeted humanism’ reconstitutes the ‘human’ even as it attempts to challenge and, at times, erase it,” demands we reconsider any theory (about the queerness) of touch that has yet to grapple with its universalist underpinnings. It would seem that queer theory, even one as vigorously attuned to the textured rediscovery of minor forms as Sedgwick’s, nevertheless conceives desire, sexuality, and gender as co-extensive with the erotic architecture of the (queerly differentiated/differentiating) human subject. Suffering may be aestheticized, but it is not reckoned with as an ontological imposition – as a “grammar,” to use Frank B. Wilderson’s language[13] – out of which an aesthesis necessarily emerges.

    Insofar as texxture bears the inscription of its material conditions of possibility, it should direct us toward a genealogy of substance at odds with surface appearance. At stake is what film scholar Laura Marks theorizes under the rubric of the haptic[14] – the tactile, kinesthetic, and proprioceptive dimensions of touch, the irreducibly haptic valences of touch that pressure prevailing distinctions between substance and surface, inside and outside, body and flesh. A question at once animated and omitted by queer theory’s inquiries into touch: how to theorize texxture with regard to a history of bodily wounding occasioned by touch, when it is texxture that is seized upon by the various proxies for touch that willingly or inadvertently redouble racial fantasies of violation? Thinking the haptic irreducibility of the aesthetic requires constant re-attunement to the violence touch occasions and to the violations which occasion touch. If touch is ultimately inextricable from the aesthetic economy of worldly humanity, then, apropos Saidiya Hartman, we are compelled to think about the violence that resides in our habits of worlding.[15]

    Without even addressing the massive implications that attend the frequent conflation of being with body, what cleaves to being within the context of critical theory’s alternately residual or unapologetic phenomenology, is a corporeal subject whose situatedness within and for the world is not only predetermined, but whose predetermination is taken for granted as the condition of possibility for sentient touch. Such unwitting Calvinism, which would seem to take Merleau-Ponty at his word when he declares that “every relation with being is simultaneously a taking and being taken,”[16] inevitably reproduces and rubs up against a foundational schism: being taken, where the traces of an inflective doubling disclose a morphological distinction at the level of species-being.[17] Just as the tectonics of touch – their quakes and strains, fractures and fault lines, accretions and exfoliations – can hardly be taken for simply surface phenomena, neither can they be assumed to unfold upon a universal plane of experience, or to obtain between essentially analogous subjects within a common field of relation (a fact betrayed by the nominative excess which threatens to spill from the very word, “field”). Touch cannot be understood apart from the irreducibly racial valences and demarcations of corporeality in the wake of transatlantic slavery.

    In her landmark essay, “Mama’s Baby, Papa’s Maybe: An American Grammar Book,” Hortense Spillers theorizes one of the central cleavages of the modern world, wrought and sundered in the cataclysmic passages of racial slavery: that of body and flesh, which Spillers takes as the foremost distinction “between captive and liberated subjects-positions”:

    before the “body” there is the “flesh,” that zero degree of social conceptualization that does not escape concealment under the brush of discourse or the reflexes of iconography. Even though the European hegemonies stole bodies – some of them female – out of West African communities in concert with the African “middleman,” we regard this human and social irreparability as high crimes against the flesh, as the person of African females and males registered the wounding. If we think of the “flesh” as a primary narrative, then we mean its seared, divided, ripped-apartness, riveted to the ship’s hole, fallen, or “escaped” overboard.[18]

    Flesh is before the body in a dual sense. On the one hand, as Alexander Weheliye stresses, flesh is “a temporal and conceptual antecedent to the body[.]”[19] The body, which may be taken to stand for “legal personhood qua self-possession,”[20] is violently produced through the “high crimes against the flesh.” On the other hand, flesh is before the body in that it is everywhere subject to and at the disposal of the body. The body is cleaved from flesh, while flesh is serially cleaved by the body. As Fred Moten suggests, the body only emerges through the disciplining of flesh.[21]

    This diametric arrangement of corporeal exaltation and abjection is registered, as Spillers emphasizes, in “the tortures and instruments of captivity,” those innumerable, unspeakable brutalities by which flesh is irrevocably marked:

    The anatomical specifications of rupture, of altered human tissue, take on the objective description of laboratory prose – eyes beaten out, arms, backs, skulls branded, a left jaw, a right ankle, punctured; teeth missing, as the calculated work of iron, whips, chains, knives, the canine patrol; the bullet.[22]

    The unspeakability of such woundings, however, is not merely a function of their terror and depravity, but rather a consequence of the ways flesh has been made to bear the conditions of im/possibility of and for a semiotics which takes itself to be the very foundation of language, at least in its modern dissimulations.[23] In Moten’s illumination, “[t]he value of the sign, its necessary relation to the possibility of (a universal science of and a universal) language, is only given in the absence or supercession of, or the abstraction from, sounded speech— its essential materiality is rendered ancillary by the crossing of an immaterial border or by a differentializing inscription.”[24] Thus, when Spillers writes that “[t]hese undecipherable markings on the captive body render a kind of hieroglyphics of the flesh whose severe disjunctures come to be hidden to the cultural by seeing skin color[,]”[25] we may surmise that what Frantz Fanon termed “epidermalization” – the process by which a “historico-racial schema” is violently imposed upon the skin, that which, for the Black, forecloses the very possibility of assuming a body (to borrow Gayle Salamon’s turn of phrase) – is, among other things, a mechanism of semiotic concealment.[26] (R.A. Judy refers to it as “something like [flesh]…being parenthesized.”)[27] What is hidden and rehidden, the open secret alternately buried within and exposed upon the skin, is not merely a system of corporeal apartheid, but moreover what Spillers identifies as the vestibularity of flesh to culture. “This body whose flesh carries the female and the male to the frontiers of survival bears in person the marks of a cultural text whose inside has been turned outside.”[28]

    Speaking at a conference day I curated for the Stedelijk Museum of Art and Studium Generale Rietveld Academy in 2018, entitled “There’s a Tear in the World: Touch After Finitude,” Spillers revisited her classic essay, drawing out its implications for thinking through questions of touch and hapticality.[29] For Spillers, touch “might be understood as the gateway to the most intimate experience and exchange of mutuality between subjects, or taken as the fundamental element of the absence of self-ownership…it defines at once, in the latter instance, the most terrifying personal and ontological feature of slavery’s regimes across the long ages.”[30] To meaningfully reckon with “the contradictory valences of the haptic” is to “attempt an entry into this formidable paradox, which unfolds a troubled intersubjective legacy – and, perhaps, troubled to the extent that one of these valences of touch is not walled off from the other, but haunts it, shadows it, as its own twin possibility.”[31] Spillers follows with an unavoidable question: “did slavery across the Americas rupture ties of kinship and filiation so completely that the eighteenth century demolishes what Constance Classen, in The Deepest Sense: A Cultural History of Touch, calls a ‘tactile cosmology’?” If so, then the dimensions of touch which are understood as “curative, healing, erotic, [or] restorative” cannot be held apart from the myriad “violation[s] of the boundaries of the ego in the enslaved, that were not yet accorded egoistic status, or, in brief, subjecthood, subjectivity.”[32]

    Touch, then, evokes the vicious, desperate attempts of the white, the settler, to feign the ontic verity, stability, and immutability of an irreducibly racial subject-object (non-)relation through what Frank Wilderson would call “gratuitous violence”[33] as much as it does the corporeal life of intra- and intersubjective relationality and encounter. If even critical discourse on these latter, corporeal happenings tends to assume the facticity of the juridically sanctioned pretense to self-possession Spillers calls “bodiedness,” then “flesh describes an alien entity,” a corporeal formation fundamentally unable to “ward off another’s touch…[who] may be invaded or entered or penetrated, so to speak, by coercive power” in any given place or moment. It is, in other words, precisely “the captive body’s susceptibility to being touched [which] places this body on the side of the flesh,”[34] a susceptibility which is not principally historical, but ontological, even as flesh constitutes, to borrow Moten’s phrasing, “a general and generative resistance to what ontology can think[.]”[35] Spillers brings us to the very threshold of feeling, where to be cast on the side of the flesh is to inhabit the cut between existence and ontology. Black life is being-touched.

    How might we bring such knowledge to bear upon our understanding of different aesthetic practices, forms, and traditions? What if Theodor Adorno’s conception of the “shudder” experienced by the subject in his ephemeral encounter with a “genuine relation to art,” that “involuntary comportment” which is “a memento of the liquidation of the I,”[36] must be understood as the corporeal expression of a subject whose conditions of existence sustain the fantasy of being-untouched? How might such an interpretation serve not simply to foreground an indictment, but also aspire to linger with the political, ethical, and analytic questions that emerge from the entanglements of hapticality, aesthetics, and violence, questions which are unavoidable for those given to blackness? “The hold’s terrible gift,” Moten and Harney maintain, “was to gather dispossessed feelings in common, to create a new feel in the undercommons.”[37] And, as Moten has subsequently reminded us, violence cannot be excised from the materiality of this terrible gift, which is none other than black art:

    Black art neither sutures nor is sutured to trauma. There’s no remembering, no healing. There is, rather, a perpetual cutting, a constancy of expansive and enfolding rupture and wound, a rewind that tends to exhaust the metaphysics upon which the idea of redress is grounded.[38]

    Black art promises neither redemption nor emancipation. The “transcendent power” that Peter de Bolla, for example, finds gloriously manifest by an artwork such as Michelangelo’s Rondanini Pietà, that encounter with a “timeless…elemental beauty” which constitutes “one of the basic building blocks of our shared culture, our common humanity,”[39] is a fabrication of a structure of aesthetic experience that is wholly unavailable to the black, who, after all, has never been human. If Immanuel Kant, as the preeminent architect of modern European aesthetic philosophy, understood art to emerge precisely in its separation from nature, as “a work of man,”[40] then it is clear his transcendental aesthetic is not the province of black art. For, as Denise Ferreira da Silva argues, modernity’s “arsenal of raciality” places the black before the “scene of nature,” “as affectable things…subjected to the determination of both the ‘laws of nature’ and other coexisting things.”[41] Black art, in all its earthly perversity, emerges in the absence and refusal of the capacity to claim difference as separation, as that which instead touches and is touched by the beauty and terrors of entanglement, “a composition which is always already a recomposition and a decomposition of prior and posterior compositions.”[42] Whatever its (anti-)formal qualities, black art proceeds from enfleshment, from the immanent brutalities and minor experiments of the haptic, the cuts and woundings of which it cannot help but bear. Black art materializes in and as a metaphysical impossibility, as that which, in Moten’s words, “might pierce the distinction between the biological and the symbolic…as the continual disruption of the very idea of (symbolic) value, which moves by way of the reduction of substance…[as] the reduction to substance (body to flesh) is inseparable from the reduction of substance.”[43] Hapticality is a way of naming an analytics of touch that cannot be, let alone appear, within the onto-epistemological confines of the (moribund) world, a gesture with and towards the abyssal revolution and devolution of the sensorium to which black people have already been subject, an enfleshment of the “difference without separability”[44] that has been and will be the condition of possibility for “life in the ruins.”[45]

    _____

    Rizvana Bradley is Assistant Professor of Film and Media at UC Berkeley. Her research and teaching focus on the study of contemporary art and aesthetics at the intersections of film, literature, poetry, and performance. Her scholarly approach to artistic practices in global black cultural production expands and develops frameworks for thinking across these contexts, specifically in relation to contemporary aesthetic theory. She has published articles in TDR: The Drama Review, Discourse: Journal for Theoretical Studies in Media and Culture, Rhizomes: Cultural Studies in Emerging Knowledge, Black Camera: An International Film Journal, and Film Quarterly, and is currently working on two book projects.

    Back to the essay

    _____

    Notes

    [1] Eve Kosofsky Sedgwick, Touching Feeling: Affect, Pedagogy, Performativity (Durham: Duke University Press, 2003), 1.

    [2] Ibid., back cover.

    [3] Ibid., 6.

    [4] Ibid., 13.

    [5] Ibid., 14.

    [6] Ibid., 14-15.

    [7] Ibid., 15.

    [8] Ibid.

    [9] Ibid., 16, 17.

    [10] Ibid., 21.

    [11] Maurice Merleau-Ponty, The Phenomenology of Perception (New York: Routledge, 2012), 73.

    [12] Frantz Fanon, Black Skin, White Masks (London: Pluto Press, 1986).

    [13] See, in particular, Frank B. Wilderson III, Red, White, and Black: Cinema and the Structure of U.S. Antagonisms (Durham: Duke University Press, 2010).

    [14] Laura U. Marks, The Skin of the Film: Intercultural Cinema, Embodiment, and the Senses (Durham: Duke University Press, 2000). My reading of Marks is in turn inestimably shaped by Fred Moten and Stefano Harney’s elaboration of hapticality in The Undercommons: Fugitive Planning and Black Study (New York; Port Watson: Minor Compositions, 2013), 97-99; see also the special issue I guest edited for Women and Performance: A Journal of Feminist Theory, “The Haptic: Textures of Performance,” vol. 24, no. 2-3 (2014).

    [15] This was a formulation made by Hartman in our conversation during my curated event for the Serpentine Galleries, London. “Hapticality, Waywardness, and the Practice of Entanglement: A Study Day with Saidiya Hartman,” 8 July, 2017.

    [16] Maurice Merleau-Ponty, The Visible and the Invisible (Chicago: Northwestern University Press, 1968), 266.

    [17] Cf. Karl Marx, The Economic and Philosophic Manuscripts of 1844, ed. Dirk J. Struik (New York: International Publishers, 1964).

    [18] Hortense Spillers, “Mama’s Baby, Papa’s Maybe: An American Grammar Book,” Diacritics, Volume 17, Number 2 (Summer 1987), 64-81, 67.

    [19] Alexander G. Weheliye, Habeas Viscus: Racializing Assemblages, Biopolitics, and Black Feminist Theories of the Human (Durham: Duke University Press, 2014), 39. For a contrasting interpretation, see R.A. Judy’s brilliant, recently published, Sentient Flesh: Thinking in Disorder, Poiēsis in Black (Durham: Duke University Press, 2020), xvi, 210: “flesh is with and not before the body and person, and the body and person are with and not before or even after the flesh.”

    [20] Weheliye (2014), 39.

    [21] Fred Moten, “Of Human Flesh: An Interview with R.A. Judy” (Part Two), b2o: An Online Journal (6 May 2020).

    [22] Spillers (1987), 67.

    [23] R.A. Judy takes up these questions surrounding flesh and what he terms “para-semiosis,” or “the dynamic of differentiation operating in multiple multiplicities of semiosis that converge without synthesis[,]” with characteristic erudition in Sentient Flesh (2020), xiiv.

    [24] Fred Moten, In the Break: The Aesthetics of the Black Radical Imagination (Minneapolis: University of Minnesota Press, 2003), 13.

    [25] Spillers (1987), 67.

    [26] Fanon (1986). Gayle Salamon, Assuming a Body: Transgender and the Rhetorics of Masculinity (New York: Columbia University Press, 2010).

    [27] Judy (2020), 207.

    [28] Spillers (1987), 67. For one of Fred Moten’s more pointed engagements with this formulation from Spillers, see “The Touring Machine (Flesh Thought Inside Out),” in Stolen Life (consent not to be a single being) (Durham: Duke University Press, 2018), 161-182.

    [29] Hortense Spillers, “To the Bone: Some Speculations on Touch,” There’s a Tear in the World: Touch After Finitude, Stedelijk Museum of Art and Studium Generale Rietveld Academy, 23 March 2018, keynote address.

    [30] Ibid.

    [31] Ibid. Emphasis added.

    [32] Ibid.

    [33] Wilderson, 2010.

    [34] Spillers (2018). As these quotations are drawn from Spillers’s talk rather than a published text, the emphasis placed on the word being is inferred from her spoken intonation.

    [35] Moten (2018), 176.

    [36] Theodor Adorno, Aesthetic Theory (London: Bloomsbury Academic, 1997), 333.

    [37] Moten and Harney (2013), 97.

    [38] Fred Moten, Black and Blur (consent not to be a single being), (Durham: Duke University Press, 2017), ix.

    [39] Peter de Bolla, Art Matters (Cambridge: Harvard University Press, 2001), 28.

    [40] Immanuel Kant, Critique of Judgement (London: Macmillan and Co., 1914), 184.

    [41] Denise Ferreira da Silva, “The Scene of Nature,” in Justin Desautels-Stein & Christopher Tomlins (eds.), Searching for Contemporary Legal Thought (Cambridge: Cambridge University Press, 2017), 275-289, 276. For an important study of modernity’s “racial regime of aesthetics,” see David Lloyd, Under Representation: The Racial Regime of Aesthetics (New York: Fordham University Press, 2019).

    [42] Denise Ferreira da Silva, “In the Raw,” e-flux, Journal #93 (September 2018).

    [43] Fred Moten (2018), 174.

    [44] Denise Ferreira da Silva, “Difference without Separability,” Catalogue of the 32nd Bienal de São Paulo – INCERTEZA VIVA (2016), 57-65.

    [45] Cf. Anna Lowenhaupt Tsing, The Mushroom at the End of the World: On the Possibility of Life in Capitalist Ruins (Princeton: Princeton University Press, 2015).

  • Julia DeCook — How Deep Does the Rabbit Hole Go? The “Wonderland” of r/TheRedPill and Its Ties to White Supremacy

    Julia DeCook — How Deep Does the Rabbit Hole Go? The “Wonderland” of r/TheRedPill and Its Ties to White Supremacy

    Julia DeCook

    This essay has been peer-reviewed by “The New Extremism” special issue editors (Adrienne Massanari and David Golumbia), and the b2o: An Online Journal editorial board.

    You take the blue pill, the story ends. You wake up in your bed and believe whatever you want to believe. You take the red pill, you stay in Wonderland, and I show you how deep the rabbit hole goes.

    —Morpheus, The Matrix (Wachowski and Wachowski 1999)

    In the 1999 film The Matrix, Morpheus presents the protagonist, Neo, with the option of taking one of two pills: taking the blue pill would close off Neo’s burgeoning consciousness of the constructed nature of his life in the Matrix; taking the red pill would allow Neo to remain in Wonderland, meaning he would remain conscious of the real world around him. In The Matrix, human beings who have not taken the red pill exist in a type of virtual reality. Thus, to “take the red pill” means to be awakened—to become conscious—to see the world for what it truly is.

    The phrase entered popular vernacular in ways that the transgender Wachowski siblings undoubtedly never intended. In the context of The Matrix, taking the Red Pill means awakening to the oppressive mechanisms of control. But the phrase has been taken up by the far right to mean waking up to the oppressive mechanisms of feminism, progressive politics, and multiculturalism (Read 2019). Notably, on the popular content aggregator and forum website Reddit, the prominent men’s rights/pick up artist subreddit r/TheRedPill takes its name from this famous scene. However, instead of giving the user insight to see the world as one where robots have enslaved humanity, the Reddit “red pill” awakens men to the realization that they have been enslaved by women and feminism (Baker  2017; Ging 2019; Van Valkenburgh 2018).

    This rhetoric may feel familiar to those who have been following the alt right, which often points to the need to “wake people up” to a constructed reality where white people—particularly white men—have been oppressed by feminism and multiculturalism. Discussions of the Manosphere (a loosely connected online network of men’s rights activists, pick-up artists, Incels [so-called Involuntary Celibates], and other male-focused social movements) in both popular media and academic scholarship point to the ways the Manosphere functions as a gateway ideology for the alt right (Futrelle 2017b). Often, the broad connection that links these two groups is misogyny and the anti-feminist sentiments they use to ground their group identity and the political goals of their various factions. These affective dimensions, which appeal to the frustration and anger of the men who flock to these groups, then create a new cultural practice (Ahmed 2004). Although what these men pride themselves on is their ability to think logically about the “reality of the sexual marketplace,” what we see emerging is a stronger appeal to emotion that shapes their relationship with the group itself and is performed through misogyny.

    Misogyny on r/TheRedPill is performed under the guise of providing a “positive identity for men,” which points to the mechanisms by which Manosphere discourse and ideology can lay a foundation for further radicalization into more extremist thought. How the group strategizes to facilitate this radicalization, and how it indoctrinates its members, warrants further exploration, particularly because the Manosphere’s “pill” may be easier to swallow at first than outright white supremacy (Futrelle 2017b). Since the Manosphere and its many groups lure members into their communities by playing on frustrations regarding sexual and romantic relationships, the radicalization may be subtle at first but becomes more pronounced as time goes on.

    r/TheRedPill is both a prominent community in the Manosphere and a sizable Reddit community in its own right. With over 400,000 members scattered across a variety of affiliated subreddits (e.g., r/RedPillWomen and r/RedPillParenting), the subreddit is a notable case study not just for its sheer size and popularity within the Manosphere but also for the ways the community has expanded its boundaries to appeal to a larger group of people—including women. By positioning itself as a social movement, r/TheRedPill first attracts men by appealing to their sexual or romantic frustrations, and then promises to give them the tools to alleviate those frustrations and become “better men” for it. Unlike MGTOW (Men Going Their Own Way), whose members voluntarily abstain from romantic or sexual relationships to reclaim their “power” (Futrelle 2017a), and unlike Men’s Rights Activists (MRAs), who do not focus on pursuing sexual and romantic relationships, r/TheRedPill packages itself as a group that helps men successfully engage in sexual or romantic relationships, with the added benefit of reclaiming one’s manhood.

    To “Red Pillers” (as r/TheRedPill members call themselves and are called outside the community), feminism and society in general promote “sexual strategies” that favor women, thus giving women power in relationships, whereas The Red Pill community teaches men sexual strategies to take back the power in sexual or romantic relationships. The community focuses only on heterosexual relationships, and to “Red Pill” in this context means to invoke heteronormative gender roles that benefit the man in the relationship and subjugate the woman, a dynamic achieved by becoming what they call an “Alpha Male.” On the surface, r/TheRedPill is mostly aligned with the Pick-Up Artist (PUA) community, which teaches men strategies to seduce women, but it cultivates a more intense focus on men’s rights activism.

    Importantly, men who adhere to the teachings promulgated by r/TheRedPill view it as much more than sexual strategy—they view it as an identity, a community, and an ideology upon which they base their realities. Recently, and particularly after the 2016 election of Donald Trump as President of the United States, studies have emerged in both academia and journalism that lump together MRAs, Red Pillers, and Incels as similar groups belonging to the Manosphere (Ging 2019). However, it is critical to understand that these groups are distinct from one another within the larger Manosphere ecosystem, particularly in terms of how they define themselves. Yet the common thread running through these communities, and connecting them to the larger alt right movement, is misogyny. Misogyny and the rejection of feminism, which many men in these groups view as a “cancer” inflicted upon “Western civilization,” are the glue that keeps these groups within the same extremist networks.

    “How Women Destroy Western Civilization”

    The discourse in the forum focusing on the ways “Western civilization is doomed,” especially insofar as feminism and/or women can be blamed for it, is perhaps one of the clearest indications of the links between the Manosphere and the alt right. It is this misogyny that helps bind together these affective networks of rage (Ahmed 2004), which drives the movements to attempt to subvert and replace a perceived dominant culture they feel is oppressive to [white] men. Although many Red Pillers explicitly reject the association of the group with white supremacy (there are indeed non-white Red Pillers), the rhetoric that both movements espouse rests on three central claims: 1. That Western Civilization has been ruined by feminism; 2. That men are oppressed, and only by fixing this “imbalance” will Western Civilization be saved; and 3. That women who reject feminism and instead embrace “traditional” roles as wives and mothers, subservient to their husbands, are happier. Accordingly, men in the r/TheRedPill community do not necessarily reject women who are not virgins (unlike Incels, who insist on the virginity of the women they aspire to be with), but they do believe that women are morally, intellectually, and physically inferior to men, thus providing the basis of the argument for why feminism has violated the “natural order” of things by giving women power (Manne 2017).

    The violation of a “natural order” grounded in biological determinism with regard to race and sex is a core argument used in far-right circles, including the Manosphere, to justify their beliefs. And although they share many similarities regarding the superiority of men over women, grouping the Manosphere and the alt right under the same umbrella is insufficient to understand the crux of their ideologies and the arguments they use to support them. The Manosphere often invokes a nostalgic remembrance of a past before feminism “tainted” women, just as white supremacist rhetoric in other parts of the alt right invokes a nostalgic remembrance of a past that was white and patriarchal. However, in terms of how directly connected the Manosphere is to white supremacy, one piece of “literature” that r/TheRedPill uses to support its ideological beliefs about women and “hypergamy” comes directly from The Occidental Quarterly, a known white nationalist/white supremacist academic journal funded by the Charles Martel Society (Southern Poverty Law Center n.d.). The Occidental Quarterly helps to blast open a gateway from r/TheRedPill to white supremacy and/or white nationalism. What r/TheRedPill and its affiliated subreddits and websites have demonstrated through publications like these is that the rabbit hole goes deeper than sexual strategy.

    Hypergamy, in particular, is a concept that highlights the ways the misogynistic discourse of the Manosphere and the white supremacist movement (in particular, the alt right) are connected. Devlin, the author of the piece, begins the article by stating that “white birthrates worldwide have suffered a catastrophic decline in recent decades” (see figure 1), and goes on to blame hypergamy for this decline. Specifically, hypergamy is defined as a sexual and romantic drive to be with the “Alpha Male,” regardless of current relationship status. In other words, women will instinctively seek out the most attractive, successful, or powerful man in a group to have sexual or romantic relations with, and this “hypergamy” drives women to only “mate at the top.” Devlin goes on to say that the sexual revolution of the 1960s shifted culture toward a “female sexual utopia,” and that this brought about a new cultural norm in which women had sexual rights, leading to the downfall of “white birthrates” and “Western civilization.” In sum, the article states that it is not only hypergamy that is responsible for this downfall, but that women having sexual and reproductive freedom is the cause of all of the modern-day white man’s woes—sexually, romantically, economically, and culturally. Because it points to the collapse of a patriarchal, white, masculine world as the reason for the discontent of “Western civilization,” the concept of hypergamy is easily transportable across these extremist groups and easily embraced.

    Figure 1. The first paragraphs of Sexual Utopia in Power

    The reclamation of power is the fundamental motivation that drives these communities. This article, like many of the other readings that serve as the foundation of r/TheRedPill and Manosphere thought, is about reclaiming masculinity, reclaiming power, and reclaiming truth and reality in general. These readings not only give the men who flock to these communities an answer; they also completely disassemble the world the person knew before (thus the phrase “being Red Pilled”). The postmodern era is most notable for the collapse of the “grand narratives” that held societies together, and in Western contexts in particular these grand narratives were based in hegemonic masculinity, patriarchy, Christian religion, and whiteness. The ideologies of r/TheRedPill and the Manosphere promise a return to this grand narrative to ground one’s reality. This collapsing—and ultimate rebuilding—of a grand narrative and purpose that privileges male power over all else then helps prepare a mind to accept more extremist thoughts and to act on them. Not unlike cults, which often exploit people seeking meaning, the Manosphere and the alt right provide meaning in the form of misogyny and white supremacy, creating an “affective fabric” that binds them together (Kuntsman 2012).

    It is worth mentioning that the extremist ideologies of the Manosphere have had material consequences, often in the form of mass violence. Elliot Rodger, the Isla Vista shooter, was a member of PUA communities online (McGuire 2014) and is often venerated in communities like r/Incels, where users refer to him as “Saint Elliot.” James Jackson, who stabbed an elderly black man to death in New York with a sword, was a member of MGTOW. Indeed, MGTOW is the more extreme faction of the Manosphere and is often not concerned with the advancement of men’s rights. It thus lends itself easily to other extremist beliefs.

    Along with r/Incels, MGTOW may be the most severe and extreme of all the groups in the Manosphere. This does not, however, mean that r/TheRedPill and other Manosphere groups are not extreme or severe in their misogyny, but rather that their packaging of misogynistic beliefs may be easier to swallow at first, leading the men who flock to them even farther down the rabbit hole. By positioning themselves as advocates for the interests of men, and as groups that foster “positive identities” for them, they are able to recruit members who feel lost and without community; providing a sense of belonging and a group identity to subscribe to gives these groups their long-term sustainability (Hogg & Williams 2000). The connections between the ideologies of the Manosphere and the alt right are well established; however, understanding how each group within the Manosphere recruits and indoctrinates its members will offer further insight into how they ensure their sustained existence inside and outside this ideological web.

    Although there are distinct differences among the groups in the Manosphere in terms of the levels of violence they advocate and what their activism and membership focus on, the common underlying thread among them is rage toward modern society and women. These differences, however, are important to understand in order to identify what draws men (and even women) to these groups. In particular, it is crucial to comprehend these differences to better strategize around the prevention of further radicalization. Thus, the underlying base ideology that fuels these movements, connects them to the alt right, and results in mass violence is one that warrants further investigation, particularly regarding the role of platforms in connecting them all together through algorithmically generated recommendations and the ease of navigating the digital communities they call home (Massanari 2015; Noble 2018).

    Rather than leaving these men to wander the digital wilderness in search of meaning, Manosphere groups hand meaning to them, exploiting the frustrations of men who desire romantic and sexual relationships. But these frustrations are manifestations of unfulfilled desires, and these communities are where those desires and frustrations are validated and strengthened. And as we have seen too often with the rise of hate crimes and mass murders, these violent desires result in violent ends.

    _____

    Julia R. DeCook is an Assistant Professor in the School of Communication at Loyola University Chicago. She is currently working on publishing her dissertation which examined how various extremist groups responded to censorship and bans to understand how digital infrastructure sustains these movements. She is also a fellow with the newly formed Institute for Research on Male Supremacism.

    Back to the essay

    _____

    Works Cited

  • Audrey Watters – Public Education Is Not Responsible for Tech’s Diversity Problem

    Audrey Watters – Public Education Is Not Responsible for Tech’s Diversity Problem

    By Audrey Watters

    ~

    On July 14, Facebook released its latest “diversity report,” claiming that it has “shown progress” in hiring a more diverse staff. Roughly 90% of its US employees are white or Asian; 83% of those in technical positions at the company are men. (That’s about a 1% improvement from last year’s stats.) Black people still make up just 2% of the workforce at Facebook, and 1% of the technical staff. Those are the same percentages as 2015, when Facebook boasted that it had hired 7 Black people. “Progress.”

    In this year’s report, Facebook blamed the public education system for its inability to hire more people of color. I mean, whose fault could it be?! Surely not Facebook’s! To address its diversity problems, Facebook said it would give $15 million to Code.org in order to expand CS education, news that was dutifully reported by the ed-tech press without any skepticism about Facebook’s claims about its hiring practices or about the availability of diverse tech talent.

    The “pipeline” problem, writes Dare Obasanjo, is a “big lie.” “The reality is that tech companies shape the ethnic make up of their employees based on what schools & cities they choose to hire from and where they locate engineering offices.” There is diverse technical talent, ready to be hired; the tech sector, blinded by white, male privilege, does not recognize it, does not see it. See the hashtag #FBNoExcuses, which features more smart POC in tech than work at Facebook and Twitter combined, I bet.

    Facebook’s decision to “blame schools” is pretty familiar schtick by now, I suppose, but it’s still fairly noteworthy coming from a company whose founder and CEO is increasingly active in ed-tech investing. More broadly, Silicon Valley continues to try to shape the future of education – mostly by defining that future as an “engineering” or “platform” problem and then selling schools and parents and students a product in return. As the tech industry utterly fails to address diversity within its own ranks, what can we expect from its vision for ed-tech?!

    My fear: ed-tech will ignore inequalities. Ed-tech will expand inequalities. Ed-tech will, as Edsurge demonstrated this week, simply co-opt the words of people of color in order to continue to sell its products to schools. (José Vilson has more to say about this particular appropriation in this week’s #educolor newsletter.)

    And/or: ed-tech will, as I argued this week in the keynote I delivered at the Digital Pedagogy Institute in PEI, confuse consumption with “innovation.” “Gotta catch ’em all” may be the perfect slogan for consumer capitalism; but it’s hardly a mantra I’m comfortable chanting to push for education transformation. You cannot buy your way to progress.

    All of the “Pokémon GO will revolutionize education” claims have made me incredibly angry, even though it’s a claim that’s made about every single new product that ed-tech’s early adopters find exciting (and clickbait-worthy). I realize there are many folks who seem to find a great deal of enjoyment in the mobile game. Hoorah. But there are some significant issues with the game’s security, privacy, its Terms of Service, its business model, and its crowd-sourced data model – a data model that reflects the demographics of those who played an early version of the game and one that means that there are far fewer “pokestops” in Black neighborhoods. All this matters for Pokémon GO; all this matters for ed-tech.

    Pokémon GO

    Pokémon GO is just the latest example of digital redlining, re-inscribing racist material policies and practices into new, digital spaces. So when ed-tech leaders suggest that we shouldn’t criticize Pokémon GO, I despair. I really do. Who is served by being silent!? Who is served by enforced enthusiasm? How does ed-tech, which has its own problems with diversity, serve to re-inscribe racist policies and practices because its loudest proponents have little interest in examining their own privileges, unless, as José points out, it gets them clicks?

    Sigh.
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.

    Back to the essay

  • Michelle Moravec — The Never-ending Night of Wikipedia’s Notable Woman Problem

    Michelle Moravec — The Never-ending Night of Wikipedia’s Notable Woman Problem

    By Michelle Moravec
    ~

    Author’s note: this is the written portion of a talk given at St. Joseph University’s Art + Feminism Wikipedia editathon, February 27, 2016. Thanks to Rachael Sullivan for the invite and  Rosalba Ugliuzza for Wikipedia data culling!

    Millions of the sex whose names were never known beyond the circles of their own home influences have been as worthy of commendation as those here commemorated. Stars are never seen either through the dense cloud or bright sunshine; but when daylight is withdrawn from a clear sky they tremble forth
    — Sarah Josepha Hale, Woman’s Record (1853)

    As this poetic quote by Sarah Josepha Hale, nineteenth-century author and influential editor, reminds us, context is everything. The challenge, if we wish to write women back into history via Wikipedia, is to figure out how to shift the frame of reference so that our stars can shine, since the question of who precisely is “worthy of commemoration” (or, in Wikipedia language, who is deemed notable) so often seems to exclude women.

    As Shannon Mattern asked at last year’s Art + Feminism Wikipedia edit-a-thon, “Could Wikipedia embody some alternative to the ‘Great Man Theory’ of how the world works?” Literary scholar Alison Booth, in How To Make It as a Woman, notes that the first book in praise of women by a woman appeared in 1404 (Christine de Pizan’s Book of the City of Ladies), launching a lengthy tradition of “exemplary biographical collections of women.” Booth identified more than 900 volumes of prosopography published during what might be termed the heyday of the genre, 1830-1940, when the rise of the middle class and increased literacy combined with relatively cheap production of books to make such volumes both practicable and popular. Booth also points out that, lest we consign the genre to the realm of mere curiosity, the compilers, editrixes, or authors of these volumes (predating the invention of “women’s history”) considered them a contribution to “national history”; indeed, Booth concludes that the volumes were “indispensable aids in the formation of nationhood.”

    Booth compiled a list of the most frequently mentioned women in a subset of these books and tracked their frequency over time.  In an exemplary project, she made this data available on the web, allowing for the creation of the visualization below of American figures on that chart.

    [Figure: Booth’s data by date]

    This chart makes clear what historians already know: notability is historically specific and contingent, something Wikipedia does not take into account when formulating guidelines that treat it as a stable concept.

    Only Pocahontas deviates from the great white woman school of history, and she too becomes less salient over time. Furthermore, by the standards of this era, at least as represented by these books, black women were largely considered un-notable. This perhaps explains why, in 1894, Gertrude Mossell published The Work of the Afro-American Woman, a compilation of achievements that she described as “historical in character.” Mossell’s volume itself is a rich source of information about women worthy of commemoration and commendation.

    Looking further into the twentieth century, the successor to this sort of volume is aptly titled Notable American Women, a three-volume set that, while published in 1971, had its roots in the 1950s, when Arthur Schlesinger, as head of Radcliffe College’s council, suggested that a biographical dictionary of women might be a useful thing. Perhaps predictably, a publisher could not be secured, so Radcliffe funded the project itself. The question then becomes: does inclusion in a volume declaring women “notable” mean that these women would meet Wikipedia’s “notability” standards?

    Studies have found varying degrees of bias in coverage of female figures compared to male figures. The latest numbers I found, as of January 2015, concluded that women constituted only 15.5 percent of the biographical entries on the English Wikipedia, and that prior to the 20th century, the problem was wildly exacerbated by “sourcing and notability issues.” Using the “missing” biographies concept borrowed from a 2010 study of Wikipedia’s “completeness,” I compared selected “classified” areas for biographies of Notable American Women (analysis was conducted by hand with tremendous assistance from Rosalba Ugliuzza).

    Working with the digitized copy of Notable American Women in Women and Social Movements, I began compiling a “missing” biographies quotient: the percentage of entries missing from Wikipedia for individuals in each category of the “classified list of biographies” that appeared at the end of the third volume of Notable American Women. Mirroring the well-known category issues of Wikipedia, the editors finessed the difficulties of limiting individuals to one area by including them in multiple classifications, including a section called “Negro Women” and another called “Indian Women”:

    [Figure: percentage of “missing” biographies by classification]
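
    As a rough illustration of the arithmetic (a minimal sketch, assuming a hand-coded table with hypothetical column names rather than any file actually used for this talk), the quotient is simply the share of entries in each classification with no Wikipedia page:

    ```python
    # A minimal sketch of the "missing" biographies quotient. The CSV name and
    # columns (name, classification, has_wikipedia_entry) are hypothetical
    # stand-ins for a hand-compiled table; women listed under several
    # classifications get one row per classification, mirroring the index of
    # Notable American Women.
    import pandas as pd

    entries = pd.read_csv("notable_american_women_classified.csv")

    summary = entries.groupby("classification").agg(
        n=("name", "size"),
        pct_missing=("has_wikipedia_entry", lambda col: round(100 * (1 - col.mean()), 1)),
    )

    print(summary.sort_values("pct_missing", ascending=False))
    ```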

    Initially I had suspected that larger classifications might have a greater percentage of missing entries, but that is not true. Social workers, the classification with the highest percentage of missing entries, is a relatively small classification with only nine individuals. The six classifications with no missing entries ranged in size from five to eleven. I then created my own meta-categories to summarize which larger groupings might exacerbate this “missing” biographies problem.

    [Figure: legend for the meta-categories of missing biographies]

    Inclusion in Notable American Women does not translate into inclusion in Wikipedia. Influential individuals associated with female-dominated professions, such as social work and nursing, are less likely to be considered notable, as are “leaders” in settlement houses or welfare work and “reformers” like peace advocates. Perhaps due to edit-a-thons or Wikipedians-in-residence, female artists and female scientists have fared quite well. Both Indian Women and Negro Women have the same percentage of missing women.

    Looking at the network of “Negro Women” by their Notable American Women classified entries, I noted their centrality. Frances Harper and Ida B. Wells are the most networked women in the volumes, which is representative of their position as bridge leaders (I also noted the centrality of Frances Gage, who does not have a Wikipedia entry yet, a fate she shares with the white abolitionists Sallie Holley and Caroline Putnam).

    [Figure: network of women classified under “Negro Women” in Notable American Women]
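    The network itself can be rebuilt along similar lines. A rough sketch, assuming a hypothetical file of (name, classification) pairs and linking two women whenever they share a classification, using the networkx library:

        import csv
        from collections import defaultdict
        from itertools import combinations

        import networkx as nx

        # Hypothetical file: one row per (name, classification) pair.
        women_by_class = defaultdict(set)
        with open("classified_entries.csv", newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                women_by_class[row["classification"]].add(row["name"])

        # Link two women whenever they appear under the same classification.
        G = nx.Graph()
        for members in women_by_class.values():
            for a, b in combinations(sorted(members), 2):
                G.add_edge(a, b)

        # The most "networked" women, ranked by degree centrality.
        ranked = sorted(nx.degree_centrality(G).items(), key=lambda kv: kv[1], reverse=True)
        for name, score in ranked[:10]:
            print(f"{name}: {score:.3f}")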

    Visualizing further, I located two women who don’t have Wikipedia entries and are not included in Notable American Women:

    [Figure: women missing from both Wikipedia and Notable American Women]

    Eva del Vakia Bowles was a longtime YWCA worker who spent her life trying to improve interracial relations. She was the first black woman hired by the YWCA to head a branch. During WWI, Bowles had charge of Ys established near war-work factories to provide R & R for workers. Throughout her tenure at the Y, Bowles pressed the organization to promote black women to positions within it. In 1932 she resigned from her beloved Y in protest over policies she believed excluded black women from the decision-making processes of the National Board.

    Addie D. Waites Hunton, also a Y worker and a founding member of the NAACP, was an amazing woman who, along with her friend Kathryn Magnolia Johnson, authored Two Colored Women with the American Expeditionary Forces (1920), which details their time as Y workers in WWI, where they were among the very first black women sent. Later, she became a field worker for the NAACP and a member of the WILPF, and was an observer in Haiti in 1926 as part of that group.

    Finally, using a methodology I developed when working on the racially biased History of Woman Suffrage, I scraped names from Mossell’s The Work of the Afro-American Woman to find women who should have appeared in Notable American Women and in Wikipedia. Although this is a rough result of name extraction, it gave me a place to start.

    [Figure: overlap between Mossell’s volume, Notable American Women, and Wikipedia]
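    The name-scraping step can likewise be approximated with off-the-shelf named-entity recognition. The sketch below uses spaCy on a plain-text copy of Mossell’s volume (the file name is hypothetical, and the actual extraction may well have used a different tool); the recurring PERSON entities give a candidate list to check against Notable American Women and Wikipedia:

        from collections import Counter

        import spacy

        nlp = spacy.load("en_core_web_sm")  # small English model, installed separately

        with open("mossell_afro_american_woman.txt", encoding="utf-8") as f:
            text = f.read()

        # Process the text in chunks to stay under spaCy's default max_length;
        # a name split across a chunk boundary may occasionally be missed.
        names = Counter()
        for start in range(0, len(text), 100_000):
            for ent in nlp(text[start:start + 100_000]).ents:
                if ent.label_ == "PERSON":
                    names[ent.text.strip()] += 1

        for name, count in names.most_common(50):
            print(name, count)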

    Alice Dugged Cary does not appear in Notable American Women or Wikipedia. She was born free in 1859, became president of the State Federation of Colored Women of Georgia and librarian of the first branch library for African Americans in Atlanta, established the first free kindergartens for African American children in Georgia, and was named an honorary member of Zeta Phi Beta and involved in its spread.

    Similarly, Lucy Ella Moten, born free in 1851, became principal of Miner Normal School, earned an M.D., and taught in the South during summer “vacations,” yet appears in neither Notable American Women nor Wikipedia (or at least she didn’t until Mike Lyons started her page yesterday at the editathon!).

    _____

    Michelle Moravec (@ProfessMoravec) is Associate Professor of History at Rosemont College. She is a prominent digital historian and the digital history editor for Women and Social Movements. Her current project, The Politics of Women’s Culture, uses a combination of digital and traditional approaches to produce an intellectual history of the concept of women’s culture. She writes a monthly column for the Mid-Atlantic Regional Center for the Humanities, and maintains her own blog History in the City, at which an earlier version of this post first appeared.

    Back to the essay

  • The Human Condition and The Black Box Society

    The Human Condition and The Black Box Society

    a review of Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015)
    by Nicole Dewandre
    ~

    1. Introduction

    This review is informed by its author’s specific standpoint: first, a lifelong experience in a policy-making environment, i.e. the European Commission; and, second, a passion for the work of Hannah Arendt and the conviction that she has a great deal to offer to politics and policy-making in this emerging hyperconnected era. As advisor for societal issues at DG Connect, the department of the European Commission in charge of ICT policy at EU level, I have had the privilege of convening the Onlife Initiative, which explored the consequences of the changes brought about by the deployment of ICTs on the public space and on the expectations toward policy-making. This collective thought exercise, which took place in 2012-2013, was strongly inspired by Hannah Arendt’s 1958 book The Human Condition.

    This is the background against which I read The Black Box Society: The Secret Algorithms That Control Money and Information by Frank Pasquale (references to which are indicated here parenthetically by page number). Two of the meanings of “black box”—a device that keeps track of everything during a flight, on the one hand, and the node of a system that prevents an observer from identifying the link(s) between input and output, on the other hand—serve as apt metaphors for today’s emerging Big Data environment.

    Pasquale digs deep into three sectors that are at the root of what he calls the black box society: reputation (how we are rated and ranked), search (how we use ratings and rankings to organize the world), and finance (money and its derivatives, whose flows depend crucially on forms of reputation and search). Algorithms and Big Data have permeated these three activities to a point where disconnection from human judgment or control can transmogrify them into blind zombies, opening up new risks, affordances and opportunities. We are far from the ideal representation of algorithms as support for decision-making. In these three areas, decision-making has been taken over by algorithms, and there is no “invisible hand” ensuring that profit-driven corporate strategies will deliver fairness or improve the quality of life.

    The EU and the US contexts are both distinct and similar. In this review, I shall not comment on Pasquale’s specific policy recommendations in detail, even if, as a European, I appreciate the numerous references to European law and policy that Pasquale commends as good practices (ranging from digital competition law, to welfare state provision, to privacy policies). I shall instead comment from a meta-perspective, that of challenging the worldview that implicitly undergirds policy-making on both sides of the Atlantic.

    2. A Meta-perspective on The Black Box Society

    The meta-perspective as I see it is itself twofold: (i) we are stuck with Modern referential frameworks, which hinder our ability to attend to changing human needs, desires and expectations in this emerging hyperconnected era, and (ii) the personification of corporations in policymaking reveals shortcomings in the current representation of agents as interest-led beings.

    a) Game over for Modernity!

    As stated by the Onlife Initiative in its “Onlife Manifesto,” through its expression “Game over for Modernity?”, it is time for politics and policy-making to leave Modernity behind. That does not mean going back to the Middle Ages, as feared by some, but instead stepping firmly into this new era that is coming to us. I believe with Genevieve Bell and Paul Dourish that it is more effective to consider that we are now entering the ubiquitous computing era than to look at it as if it were approaching fast.[1] With the miniaturisation of devices and sensors, with mobile access to broadband internet and with the generalized connectivity of objects as well as of people, not only do we witness an expansion of the online world but, more fundamentally, a collapse of the distinction between the online and the offline worlds, and therefore a radically new socio-technico-natural compound. We live in an environment which is increasingly reactive and talkative as a result of the intricate mix between offline and online universes. Human interactions are also deeply affected by this new socio-technico-natural compound, as they are, or will soon be, “sticky”, i.e. they leave a material trace by default, and this for the first time in history. These new affordances and constraints profoundly destabilize our Modern conceptual frameworks, which rely on distinctions that are blurring, such as the one between the real and the virtual or the ones between humans, artefacts and nature, understood with mental categories dating back to the Enlightenment and before. The very expression “post-Modern” is not accurate anymore, or is too shy, as it continues to position Modernity as its reference point. It is time to give a proper name to this new era we are stepping into, and hyperconnectivity may be such a name.

    Policy-making, however, continues to rely heavily on Modern conceptual frameworks, and this not only from the policy-makers’ point of view but more widely from all those engaging in the public debate. There are many structuring features of the Modern conceptual frameworks, and it certainly goes beyond this review to address them thoroughly. However, when it comes to addressing the challenges described by The Black Box Society, it is important to mention the epistemological stance that has been spelled out brilliantly by Susan H. Williams in her Truth, Autonomy, and Speech: Feminist Theory and the First Amendment: “the connection forged in Cartesianism between knowledge and power”[2]. Before encountering Susan Williams’s work, I came to refer to this stance less elegantly with the expression “omniscience-omnipotence utopia”[3]. Williams writes that “this epistemological stance has come to be so widely accepted and so much a part of many of our social institutions that it is almost invisible to us” and that “as a result, lawyers and judges operate largely unself-consciously with this epistemology”[4]. To Williams’s “lawyers and judges”, we should add policy-makers and stakeholders. This Cartesian epistemological stance grounds the conviction that the world can be elucidated in causal terms, that knowledge is about prediction and control, and that there is no limit to what men can achieve provided they have the will and the knowledge. In this Modern worldview, men are considered rational subjects and their freedom is synonymous with control and autonomy. The fact that we have a limited lifetime and attention span is out of the picture, as is humans’ inherent relationality. Issues are framed as if transparency and control are all that men need to make their own way.

    1) One-Way Mirror or Social Hypergravity?

    Frank Pasquale is well aware of, and has contributed to, the emerging critique of transparency, and he states clearly that “transparency is not just an end in itself” (8). However, there are traces of the Modern reliance on transparency as a regulative ideal in The Black Box Society. One of them is when he mobilizes the one-way mirror metaphor. He writes:

    We do not live in a peaceable kingdom of private walled gardens; the contemporary world more closely resembles a one-way mirror. Important corporate actors have unprecedented knowledge of the minutiae of our daily lives, while we know little to nothing about how they use this knowledge to influence the important decisions that we—and they—make. (9)

    I refrain from considering the Big Data environment as an environment that “makes sense” on its own, provided someone has access to as much data as possible. In other words, the algorithms crawling the data can hardly be compared to a “super-spy” providing the data controller with absolute knowledge.

    Another shortcoming of the one-way mirror metaphor is that the implicit corrective is a transparent pane of glass, so that the watched can watch the watchers. This reliance on transparency is misleading. I prefer another metaphor that, in my view, better characterises the Big Data environment in a hyperconnected conceptual framework. As alluded to earlier, in contradistinction to the previous centuries and even millennia, human interactions will, by default, be “sticky”, i.e. leave a trace. Evanescence of interactions, which used to be the default for millennia, will instead require active measures to be ensured. So, my metaphor for capturing the radicality and the scope of this change is a change of “social atmosphere” or “social gravity”, as it were. For centuries, we have slowly developed social skills, behaviors and regulations, i.e. a whole ecosystem, to strike a balance between accountability and freedom, in a world where “verba volant, scripta manent”[5], i.e. where human interactions took place in an “atmosphere” with a 1g “social gravity”, where they were evanescent by default and where action had to be taken to register them. Now, with all interactions leaving a trace by default, and each of us going around with his, her or its digital shadow, we are drifting fast towards an era where the “social atmosphere” will be of heavier gravity, say “10g”. The challenge is huge and will require a lot of collective learning and adaptation to develop the literacy and regulatory frameworks that will recreate and sustain the balance between accountability and freedom for all agents, humans and corporations.

    The heaviness of this new data density stands in between, or is orthogonal to, the two phantasms of Big Data: bright emancipatory promises, on the one hand, and frightening fears of Big Brother, on the other. Because of this social hypergravity, we, individually and collectively, have indeed to be cautious about the use of Big Data, as we have to be cautious when handling dangerous or unknown substances. This heavier atmosphere, as it were, opens up increased possibilities of hurting others, notably through harassment, bullying and false rumors. The advent of Big Data does not, by itself, provide a “license to fool”, nor does it free agents from the need to behave and avoid harming others. Exploiting asymmetries and new affordances to fool or to hurt others is no more acceptable than it was before the advent of Big Data. Hence, although from a different metaphorical standpoint, I support Pasquale’s recommendations to pay increased attention to the new ways in which current and emergent practices relying on algorithms in reputation, search and finance may be harmful, misleading or deceptive.

    2) The Politics of Transparency or the Exhaustive Labor of Watchdogging?

    Another “leftover” of the Modern conceptual framework that surfaces in The Black Box Society is the reliance on watchdogging to ensure proper behavior by corporate agents. Relying on watchdogging to ensure proper behavior nurtures the idea that it is all right to behave badly, as long as one is not seen doing so. It reinforces the idea that the qualification of an act depends on whether or not it is unveiled, as if, so long as it goes unnoticed, it were all right. This puts the entire burden on the watchers and no burden whatsoever on the doers. It sets up a sort of symbolic face-to-face between supposedly mindless firms, which are enabled to pursue their careless strategies as long as they are not put under the light, and people, who are expected to spend all their time, attention and energy raising indignation against wrong behaviors. Far from empowering the watchers, this framing enslaves them to waste time monitoring actors who should be acting in much better ways already. Indeed, if unacceptable behavior is unveiled, it raises outrage, but outrage is far from bringing a solution per se. If, instead, proper behaviors are witnessed, then the watchers are bound to praise the doers. In both cases, watchers are stuck in a passive, reactive and specular posture, while all the glory or the shame is on the side of the doers. I don’t deny the need to have watchers, but I warn against the temptation of relying excessively on the divide between doers and watchers to police behaviors, without engaging collectively in the formulation of what proper and inappropriate behaviors are. And there is no ready-made consensus about this, so it requires informed exchanges of views and hard collective work. As Pasquale explains in an interview where he defends interpretative approaches to social sciences against quantitative ones:

    Interpretive social scientists try to explain events as a text to be clarified, debated, argued about. They do not aspire to model our understanding of people on our understanding of atoms or molecules. The human sciences are not natural sciences. Critical moral questions can’t be settled via quantification, however refined “cost benefit analysis” and other political calculi become. Sometimes the best interpretive social science leads not to consensus, but to ever sharper disagreement about the nature of the phenomena it describes and evaluates. That’s a feature, not a bug, of the method: rather than trying to bury normative differences in jargon, it surfaces them.

    The excessive reliance on watchdogging enslaves the citizenry to serve as mere “watchdogs” of corporations and government, and prevents any constructive cooperation with corporations and governments. It drains citizens’ energy away from pursuing their own goals and making their own positive contributions to the world, notably by engaging in the collective work required to outline, nurture and maintain a shared sense of what counts as appropriate behaviour.

    As a matter of fact, watchdogging would be nothing more than an exhausting laboring activity.

    b) The Personification of Corporations

    One of the red threads unifying The Black Box Society’s treatment of numerous technical subjects is unveiling the oddness of the comparative postures and statuses of corporations, on the one hand, and people, on the other hand. As nicely put by Pasquale, “corporate secrecy expands as the privacy of human beings contracts” (26), and, in the meantime, the divide between government and business is narrowing (206). Pasquale points also to the fact that at least since 2001, people have been routinely scrutinized by public agencies to deter the threatening ones from hurting others, while the threats caused by corporate wrongdoings in 2008 gave rise to much less attention and effort to hold corporations to account. He also notes that “at present, corporations and government have united to focus on the citizenry. But why not set government (and its contractors) to work on corporate wrongdoings?” (183) It is my view that these oddities go along with what I would call an “inversion of sensitivity”. Corporations, which are functional beings, are granted sensitivity in policy-making imaginaries and narratives as if they were human beings, while men and women, who are sensitive beings, are approached in policy-making as if they were functional beings, i.e. consumers, job-holders, investors, bearers of fundamental rights, but never personae per se. The granting of sensitivity to corporations goes beyond the legal aspect of their personhood. It entails that corporations are the ones whose so-called needs are taken care of by policy-makers, and the ones who are really addressed, qua personae. Policies are designed with business needs in mind, to foster their competitiveness or their “fitness”. People are only indirect or secondary beneficiaries of these policies.

    The inversion of sensitivity might not be a problem per se if it led pragmatically to an effective way of designing and implementing policies that did indeed have positive effects for men and women in the end. But Pasquale provides ample evidence showing that this is not the case, at least in the three sectors he has looked at more closely, and certainly not in finance.

    Pasquale’s critique of the hypostatization of corporations and the reduction of humans has many theoretical antecedents. Looking at it from the perspective of Hannah Arendt’s The Human Condition illuminates the shortcomings and risks associated with considering corporations as agents in the public space, and helps in understanding the consequences of granting them sensitivity or, as it were, human rights. Action is the activity that flows from the fact that men and women are plural and interact with each other: “the human condition of action is plurality”.[6] Plurality is itself a ternary concept made of equality, uniqueness and relationality. First, equality is what we grant to each other when entering into a political relationship. Second, uniqueness refers to the fact that what makes each human a human qua human is precisely that who s/he is is unique. If we treat other humans as interchangeable entities or as characterised by their attributes or qualities, i.e., as a what, we do not treat them as human qua human, but as objects. Last, and by no means least, the third component of plurality is the relational and dynamic nature of identity. For Arendt, the disclosure of the who “can almost never be achieved as a wilful purpose, as though one possessed and could dispose of this ‘who’ in the same manner he has and can dispose of his qualities”[7]. The who appears unmistakably to others, but remains somewhat hidden from the self. It is this relational and revelatory character of identity that confers on speech and action such a critical role and that articulates action with identity and freedom. Indeed, for entities for which the who is partly out of reach and yet matters, appearance in front of others, notably through speech and action, is a necessary condition for revealing that identity:

    Action and speech are so closely related because the primordial and specifically human act must at the same time contain the answer to the question asked of every newcomer: who are you? In acting and speaking, men show who they are, they appear. Revelatory quality of speech and action comes to the fore where people are with others and neither for, nor against them, that is in sheer togetherness.[8]

    So, in this sense, the public space is the arena where whos appear to other whos, personae to other personae.

    For Arendt, the essence of politics is freedom, and it is grounded in action, not in labour and work. The public space is where agents coexist and experience their plurality, i.e. the fact that they are equal, unique and relational. So, it is much more than the usual American pluralist (i.e., early Dahl-ian) conception of a space where agents worry exclusively about their own needs by bargaining aggressively. In Arendt’s perspective, the public space is where agents, self-aware of their plural characteristic, interact with each other once their basic needs have been taken care of in the private sphere. As highlighted by Seyla Benhabib in The Reluctant Modernism of Hannah Arendt, “we not only owe to Hannah Arendt’s political philosophy the recovery of the public as a central category for all democratic-liberal politics; we are also indebted to her for the insight that the public and the private are interdependent”.[9] One could not appear in public if s/he or it did not also have a private place, notably to attend to his, her or its basic needs for existence. In Arendtian terms, interactions in the public space take place between agents who are beyond their satiety threshold. Acknowledging satiety is a precondition for engaging with others in a way that is not driven by one’s own interest, but rather by the desire to act together with others—“in sheer togetherness”—and be acknowledged as who they are. If an agent perceives him-, her- or itself and behaves only as a profit-maximiser or as an interest-led being, i.e. if s/he or it has no sense of satiety and no self-awareness of the relational and revelatory character of his, her or its identity, then s/he or it cannot be a “who” or an agent in political terms, and therefore cannot answer for him-, her- or itself. It simply does not deserve, and therefore should not be granted, the status of a persona in the public space.

    It is easy to imagine that there can indeed be no freedom below satiety, and that “sheer togetherness” would just be impossible among agents below their satiety level or deprived of one. This is, however, the situation we are in, symbolically, when we grant corporations the status of persona while considering it efficient and appropriate that they care only for profit-maximisation. For a business, making a profit is a condition of staying alive, just as, for humans, eating is a condition of staying alive. However, in the name of the need to compete on global markets, to foster growth and to provide jobs, policy-makers embrace and legitimize an approach to businesses as profit-maximisers, despite the fact that this is a reductionist caricature of what is allowed by the legal framework of company law[10]. So, the condition for businesses to deserve the status of persona in the public space is, no less than for men and women, to attend to their whoness and honour their identity, by staying away from behaving solely according to their narrowly defined interests. It also means caring for the world as much as, if not more than, for themselves.

    This resonates meaningfully with the quotation from Heraclitus that serves as the epigraph for The Black Box Society: “There is one world in common for those who are awake, but when men are asleep each turns away into a world of his own”. Reading Arendt with Heraclitus’s categories of sleep and wakefulness, one might consider that totalitarianism arises—or is not far away—when human beings are awake in private, but asleep in public, in the sense that they silence their humanness or that their humanness is silenced by others when appearing in public. In this perspective, the merging of markets and politics—as highlighted by Pasquale—could be seen as a generalized sleep in the public space of human beings and corporations, qua personae, while all awakened activities are taking place in the private, exclusively driven by their needs and interests.

    In other words, some might find a book like The Black Box Society, which offers a bold reform agenda for numerous agencies, too idealistic. But in my view, it falls short of being idealistic enough: there is a missing normative core to the proposals in the book, which can be corrected by democratic, political, and particularly Arendtian theory. If a populace cannot accept that some level of goods and services satiates its needs, and if it distorts the revelatory character of identity into an endless pursuit of limitless growth, it cannot have the proper lens and approach to formulate what it takes to enable the fairness and fair play described in The Black Box Society.

    3. Stepping into Hyperconnectivity

    1) Agents as Relational Selves

    A central feature of the Modern conceptual framework underlying policymaking is the figure of the rational subject as political proxy of humanness. I claim that this is not effective anymore in ensuring a fair and flourishing life for men and women in this emerging hyperconnected era and that we should adopt instead the figure of a “relational self” as it emerges from the Arendtian concept of plurality.

    The concept of the rational subject was forged to erect Man over nature. Nowadays, the problem is not so much to distinguish men from nature, but rather to distinguish men—and women—from artefacts. Robots come close to humans and even outperform them, if we continue to define humans as rational subjects. The figure of the rational subject is torn apart between “truncated gods”—when Reason is considered as what eventually brings overall lucidity—on the one hand, and “smart artefacts”—when reason is nothing more than logical steps or algorithms—on the other hand. Men and women are neither “Deep Blue” nor mere automatons. In between these two phantasms, the humanness of men and women is smashed. This is indeed what happens in the Kafkaesque and ridiculous situations where a thoughtless and mindless approach to Big Data is implemented, and this from both stances, as workers and as consumers. As far as the working environment is concerned, “call centers are the ultimate embodiment of the panoptic workspace. There, workers are monitored all the time” (35). Indeed, this type of overtly monitored working environment is nothing other than a materialisation of the panopticon. As consumers, we all see what Pasquale means when he writes that “far more [of us] don’t even try to engage, given the demoralizing experience of interacting with cyborgish amalgams of drop-down menus, phone trees, and call center staff”. In fact, this mindless use of automation is only the latest version of the way we have been thinking for decades, i.e. that progress means rationalisation and de-humanisation across the board. The real culprit is not algorithms themselves, but the careless and automaton-like human implementers and managers who act along a conceptual framework according to which rationalisation and control are all that matter. More than the technologies, it is the belief that management is about control and monitoring that makes these environments properly inhuman. So, staying stuck with the rational subject as a proxy for humanness either ends up smashing our humanness as workers and consumers or, at best, leads to absurd situations where being free would mean spending all our time checking that we are not being controlled.

    As a result, keeping the rational subject as the central representation of humanness will be increasingly misleading, politically speaking. It fails to provide a compass for treating each other fairly and making appropriate decisions and judgments, in order to impact positively and meaningfully on human lives.

    With her concept of plurality, Arendt offers an alternative to the rational subject for defining humanness: that of the relational self. The relational self, as it emerges from the Arendtian concept of plurality[11], is the man, woman or agent self-aware of his, her or its plurality, i.e. of the facts that (i) he, she or it is equal to his, her or its fellows; (ii) she, he or it is unique, as all other fellows are unique; and (iii) his, her or its identity has a revelatory character that requires appearing among others in order to reveal itself through speech and action. This figure of the relational self accounts for what is essential to protect politically in our humanness in a hyperconnected era, i.e. that we are truly interdependent through the mutual recognition that we grant to each other, and that our humanity is precisely grounded in that mutual recognition, much more than in any “objective” difference or criterion that would allow an expert system to sort human from non-human entities.

    The relational self, as it arises from Arendt’s plurality, combines relationality and freedom. It resonates deeply with the vision proposed by Susan H. Williams, i.e. the relational model of truth and the narrative model of autonomy, which aims to overcome the shortcomings of the Cartesian and liberal approaches to truth and autonomy without throwing the baby, i.e. the notion of agency and responsibility, out with the bathwater, as the social constructionist and feminist critiques of the conceptions of truth and autonomy may be understood as doing.[12]

    Adopting the relational self, instead of the rational subject, as the canonical figure of humanness brings to light the direct relationship between the quality of interactions, on the one hand, and the quality of life, on the other hand. In contradistinction to transparency and control, which are meant to empower non-relational individuals, relational selves are self-aware that what they need instead is respect and fair treatment from others. This also makes room for vulnerability, notably the vulnerability of our attentional spheres, and for saturation, i.e. the fact that we have a limited attention span and are far from making a “free choice” when clicking on “I have read and accept the Terms & Conditions”. Instead of transparency and control as policy ends in themselves, the quality of life of relational selves and the robustness of the world they construct together, and that lies between them, depend critically on being treated fairly and not being fooled.

    It is interesting to note that the word “trust” blooms in policy documents, showing that consciousness of the fact that we rely on each other is building up. Referring to trust as if it needed to be built is, however, a signature of the fact that we are in transition from Modernity to hyperconnectivity, and not yet fully arrived. By approaching trust as something that can be materialized, we look at it with Modern eyes. As “consent is the universal solvent” (35) of control, transparency-and-control is the universal solvent of trust. Indeed, we know that transparency and control nurture suspicion and distrust. And that is precisely why they have been adopted as Modern regulatory ideals. Arendt writes: “After this deception [that we were fooled by our senses], suspicions began to haunt Modern man from all sides”[13]. So, indeed, Modern conceptual frameworks rely heavily on suspicion, as a sort of transposition into the realm of human affairs of the systematic-doubt approach to scientific enquiry. Frank Pasquale quotes the moral philosopher Iris Murdoch as having said: “Man is a creature who makes pictures of himself and then comes to resemble the picture” (89). If she is right—and I am afraid she is—it is of utmost importance to shift away from picturing ourselves as rational subjects and embrace instead the figure of relational selves, if only to preserve trust as a general baseline in human affairs. Indeed, if it turned out that trust could only be the outcome of generalized suspicion, then we would be lost.

    Besides grounding the notion of the relational self, the Arendtian concept of plurality allows us to account for interactions among humans and other plural agents that go beyond fulfilling basic needs (necessity) or achieving goals (instrumentality), and that lead to the revelation of their identities while giving rise to unpredictable outcomes. As such, plurality enriches the basket of representations of interactions in policy-making. It brings, as it were, a post-Modern, or dare I say a hyperconnected, view of interactions. The Modern conceptual basket of representations of interactions includes, as its central piece, causality. In Modern terms, the notion of equilibrium is approached through a mutual neutralization of forces, either with the invisible hand metaphor or with Montesquieu’s division of powers. The Modern approach to interactions is either anchored in the representation of one pole being active or dominating (the subject) and the other pole being inert or dominated (nature, object, servant) or, else, anchored in the notion of conflicting interests or dilemmas. In this framework, the notion of equality is straitjacketed and cannot be embodied. As we have seen, this Modern straitjacket leads to approaching freedom through control and autonomy, constrained by the fact that Man is, unfortunately, not alone. Hence, in the Modern approach to humanness and freedom, plurality is a constraint, not a condition, while for relational selves, freedom is grounded in plurality.

    2) From Watchdogging to Accountability and Intelligibility

    If the quest for transparency and control is as illusory and worthless for relational selves as it was instrumental for rational subjects, this does not mean that anything goes. Interactions among plural agents can only take place satisfactorily if basic and important conditions are met. Relational selves are in high need of fairness towards themselves and accountability from others. Avoiding deception and humiliation[14] is certainly a basic condition for enabling decency in the public space.

    Once equipped with this concept of the relational self as the canonical figure of political agents, be they men, women, corporations or even States, one can see clearly why, in a hyperconnected era, the recommendations Pasquale offers in his final two chapters, “Watching (and Improving) the Watchers” and “Towards an Intelligible Society,” are so important. Indeed, if watchdogging the watchers has been criticized earlier in this review as an exhausting laboring activity that does not deliver on accountability, improving the watchers goes beyond watchdogging and strives for greater accountability. With regard to intelligibility, I think that it is indeed much more meaningful and relevant than transparency.

    Pasquale invites us to think carefully about regimes of disclosure, along three dimensions: depth, scope and timing. He calls for fair data practices that could be enhanced by establishing forms of supervision of the kind that have been established for checking on research practices involving human subjects. Pasquale suggests that each person is entitled to an explanation of the rationale for decisions concerning them and should have the ability to challenge those decisions. He recommends immutable audit logs for holding spying activities to account. He calls also for regulatory measures compensating for the market failures arising from the fact that dominant platforms are natural monopolies. Given the importance of reputation and ranking and the dominance of Google, he argues that the First Amendment cannot be mobilized as a wild card absolving internet giants from accountability. He calls for a “CIA for finance” and a “Corporate NSA,” believing governments should devote more effort to chasing wrongdoing by corporate actors. He argues that the approach taken in the area of health fraud enforcement could bear fruit in finance, search and reputation.

    What I appreciate in Pasquale’s call for intelligibility is that it is indeed calibrated to the needs of relational selves: to interact with each other, to make sound decisions and to orient themselves in the world. Intelligibility is different from omniscience-omnipotence. It is about making sense of the world, while keeping in mind that there are different ways to do so. Intelligibility connects relational selves to the world surrounding them and allows them to act with others and move around. In the last chapter, Pasquale mentions the importance of restoring trust and the need to nurture a public space in the hyperconnected era. He calls for an endgame to the Black Box. I agree with him that conscious deception inherently dissolves plurality and the common world, and needs to be strongly combatted, but I think that a lot of what takes place today goes beyond that and constitutes genuinely new and uncharted territories and horizons for humankind. With plurality, we can also embrace contingency in a less dramatic way than we used to in the Modern era. Contingency is a positive approach to un-certainty. It accounts for the openness of the future. The very word un-certainty is built in such a manner that certainty is considered the ideal outcome.

    4. WWW, or Welcome to the World of Women or a World Welcoming Women[15]

    To some extent, the fears of men in a hyperconnected era reflect all-too-familiar experiences of women. Being objects of surveillance and control, laboring exhaustingly without reward, being lost through the holes of the meritocracy net, being constrained to a specular posture towards others’ deeds: all these stances have been the fate of women’s lives for centuries, if not millennia. What men fear from the State or from “Big (br)Other”, women have experienced with men. So, welcome to the world of women….

    But this situation may be looked at more optimistically as an opportunity for women’s voices and thoughts to go mainstream and be listened to. Now that equality between women and men is enshrined in the political and legal systems of the EU and the US, concretely, women have been admitted to the status of “rational subject”, but that does not dissolve its masculine origin, and the oddness or uneasiness for women to embrace this figure. Indeed, it was forged by men with men in mind, women, for those men, being indexed on nature. Mainstreaming the figure of the relational self, born in the mind of Arendt, will be much more inspiring and empowering for women, than was the rational subject. In fact, this enhances their agency and the performativity of their thoughts and theories. So, are we heading towards a world welcoming women?

    In conclusion, the advent of Big Data can be looked at in two ways. The first is to look at it as the endpoint of the materialisation of all the promises and fears of Modern times. The second is to look at it as a wake-up call for a new beginning; indeed, by making obvious the absurdity, or the price, of following the Modern conceptual frameworks all the way down to their consequences, it calls for thinking on new grounds about how to make sense of the human condition and make it thrive. The former makes humans redundant, is self-fulfilling and does not deserve human attention and energy. Without any hesitation, I opt for the latter, i.e. the wake-up call and the new beginning.

    Let’s engage in this hyperconnected era bearing in mind Virginia Woolf’s “Think we must”[16] and, thereby, shape and honour the human condition in the 21st century.
    _____

    Nicole Dewandre has academic degrees in engineering, economics and philosophy. She has been a civil servant in the European Commission since 1983. She was advisor to the President of the Commission, Jacques Delors, between 1986 and 1993. She then worked in EU research policy, promoting gender equality, partnership with civil society and sustainability issues. Since 2011, she has worked on the societal issues related to the deployment of ICTs. She has published widely on organizational and political issues relating to ICTs.

    The views expressed in this article are the sole responsibility of the author and in no way represent the view of the European Commission and its services.

    Back to the essay
    _____

    Acknowledgments: This review has been made possible by the Faculty of Law of the University of Maryland in Baltimore, which hosted me as a visiting fellow for the month of September 2015. I am most grateful to Frank Pasquale, first for having written this book, but also for engaging with me so patiently over the month of September and paying so much attention to my arguments, even suggesting in some instances the best way of making my points when I was diverging from his views. I would also like to thank Jérôme Kohn, director of the Hannah Arendt Center at the New School for Social Research, for his encouragement in pursuing the mobilisation of Hannah Arendt’s legacy in my professional environment. I am also indebted, notably for the conclusion, to the inspiring conversations I have had with Shauna Dillavou, executive director of CommunityRED, and Soraya Chemaly, Washington-based feminist writer, critic and activist. Last, and surely not least, I would like to thank David Golumbia for welcoming this piece in his journal and for the care he has put into editing this text written by a non-native English speaker.

    [1] This change of perspective, in itself, has the interesting side effect of pulling the rug from under the feet of those “addicted to speed”; Pasquale is right to point to this addiction (195) as being one of the reasons “why so little is being done” to address the challenges arising from the hyperconnected era.

    [2] Williams, Truth, Autonomy, and Speech, New York: New York University Press, 2004 (35).

    [3] See, e.g., Nicole Dewandre, ‘Rethinking the Human Condition in a Hyperconnected Era: Why Freedom Is Not About Sovereignty But About Beginnings’, in The Onlife Manifesto, ed. Luciano Floridi, Springer International Publishing, 2015 (195–215).

    [4]Williams, Truth, Autonomy, and Speech (32).

    [5] Literally: “spoken words fly; written ones remain”

    [6] Apart from action, Arendt distinguishes two other fundamental human activities that together with action account for the vita activa. These two other activities are labour and work. Labour is the activity that men and women engage in to stay alive, as organic beings: “the human condition of labour is life itself”. Labour is totally pervaded by necessity and processes. Work is the type of activity men and women engage with to produce objects and inhabit the world: “the human condition of work is worldliness”. Work is pervaded by a means-to-end logic or an instrumental rationale.

    [7] Arendt, The Human Condition, 1958; reissued, University of Chicago Press, 1998 (159).

    [8] Arendt, The Human Condition (160).

    [9] Seyla Benhabib, The Reluctant Modernism of Hannah Arendt, Revised edition, Lanham, MD: Rowman & Littlefield Publishers, 2003, (211).

    [10] See notably the work of Lynn Stout and the Frank Bold Foundation’s project on the purpose of corporations.

    [11] This expression was introduced in the Onlife Initiative by Charles Ess, but in a different perspective. Ess’s relational self is grounded in pre-Modern and Eastern/oriental societies. He writes: “In “Western” societies, the affordances of what McLuhan and others call “electric media,” including contemporary ICTs, appear to foster a shift from the Modern Western emphases on the self as primarily rational, individual, and thereby an ethically autonomous moral agent towards greater (and classically “Eastern” and pre-Modern) emphases on the self as primarily emotive, and relational—i.e., as constituted exclusively in terms of one’s multiple relationships, beginning with the family and extending through the larger society and (super)natural orders”. Ess, in Floridi, ed., The Onlife Manifesto (98).

    [12] Williams, Truth, Autonomy, and Speech.

    [13] Hannah Arendt and Jerome Kohn, Between Past and Future, Revised edition, New York: Penguin Classics, 2006 (55).

    [14] See Richard Rorty, Contingency, Irony, and Solidarity, New York: Cambridge University Press, 1989.

    [15] I thank Shauna Dillavou for suggesting these alternate meanings for “WWW.”

    [16] Virginia Woolf, Three Guineas, New York: Harvest, 1966.

  • Coding Bootcamps and the New For-Profit Higher Ed

    Coding Bootcamps and the New For-Profit Higher Ed

    By Audrey Watters
    ~
    After decades of explosive growth, the future of for-profit higher education might not be so bright. Or, depending on where you look, it just might be…

    In recent years, there have been a number of investigations – in the media, by the government – into the for-profit college sector and questions about these schools’ ability to effectively and affordably educate their students. Sure, advertising for for-profits is still plastered all over the Web, the airwaves, and public transportation, but as a result of journalistic and legal pressures, the lure of these schools may well be a lot less powerful. If nothing else, enrollment and profits at many for-profit institutions are down.

    Despite the massive amounts of money spent by the industry to prop it up – not just on ads but on lobbying and legal efforts – the Obama Administration has made cracking down on for-profits a centerpiece of its higher education policy efforts, accusing these schools of luring students with misleading and overblown promises, often leaving them with low-status degrees sneered at by employers and with loans students can’t afford to pay back.

    But the Obama Administration has also just launched an initiative that will make federal financial aid available to newcomers in the for-profit education sector: ed-tech experiments like “coding bootcamps” and MOOCs. Why are these particular for-profit experiments deemed acceptable? What do they do differently from the much-maligned for-profit universities?

    School as “Skills Training”

    In many ways, coding bootcamps do share the justification for their existence with for-profit universities. That is, they were founded in order to help to meet the (purported) demands of the job market: training people with certain technical skills, particularly those skills that meet the short-term needs of employers. Whether they meet students’ long-term goals remains to be seen.

    I write “purported” here even though it’s quite common to hear claims that the economy is facing a “STEM crisis” – that too few people have studied science, technology, engineering, or math and employers cannot find enough skilled workers to fill jobs in those fields. But claims about a shortage of technical workers are debatable, and lots of data would indicate otherwise: wages in STEM fields have remained flat, for example, and many who graduate with STEM degrees cannot find work in their field. In other words, the crisis may be “a myth.”

    But it’s a powerful myth, and one that isn’t terribly new, dating back at least to the launch of the Sputnik satellite in 1957 and subsequent hand-wringing over the Soviets’ technological capabilities and technical education as compared to the US system.

    There are actually a number of narratives – some of them competing narratives – at play here in the recent push for coding bootcamps, MOOCs, and other ed-tech initiatives: that everyone should go to college; that college is too expensive – “a bubble” in the Silicon Valley lexicon; that alternate forms of credentialing will be developed (by the technology sector, naturally); that the tech sector is itself a meritocracy, and college degrees do not really matter; that earning a degree in the humanities will leave you unemployed and burdened by student loan debt; that everyone should learn to code. Much like that supposed STEM crisis and skill shortage, these narratives might be powerful, but they too are hardly provable.

    Nor is the promotion of a more business-focused education that new either.


    Career Colleges: A History

    Foster’s Commercial School of Boston, founded in 1832 by Benjamin Franklin Foster, is often recognized as the first school established in the United States for the specific purpose of teaching “commerce.” Many other commercial schools opened on its heels, most located in the Atlantic region in major trading centers like Philadelphia, Boston, New York, and Charleston. As the country expanded westward, so did these schools. Bryant & Stratton College was founded in Cleveland in 1854, for example, and it established a chain of schools, promising to open a branch in every American city with a population of more than 10,000. By 1864, it had opened more than 50, and the chain is still in operation today with 18 campuses in New York, Ohio, Virginia, and Wisconsin.

    The curriculum of these commercial colleges was largely based around the demands of local employers alongside an economy that was changing due to the Industrial Revolution. Schools offered courses in bookkeeping, accounting, penmanship, surveying, and stenography. This was in marked contrast to those universities built on a European model, which tended to teach topics like theology, philosophy, and classical language and literature. If these universities were “elitist,” the commercial colleges were “popular” – there were over 70,000 students enrolled in them in 1897, compared to just 5,800 in colleges and universities – something that highlights what’s still a familiar refrain today: that traditional higher ed institutions do not meet everyone’s needs.


    The existence of the commercial colleges became intertwined in many success stories of the nineteenth century: Andrew Carnegie attended night school in Pittsburgh to learn bookkeeping, and John D. Rockefeller studied banking and accounting at Folsom’s Commercial College in Cleveland. The type of education offered at these schools was promoted as a path to become a “self-made man.”

    That’s the story that still gets told: these sorts of classes open up opportunities for anyone to gain the skills (and perhaps the certification) that will enable upward mobility.

    It’s a story echoed in the ones told about (and by) John Sperling as well. Born into a working class family, Sperling worked as a merchant marine, then attended community college during the day and worked as a gas station attendant at night. He later transferred to Reed College, went on to UC Berkeley, and completed his doctorate at Cambridge University. But Sperling felt as though these prestigious colleges catered to privileged students; he wanted a better way for working adults to be able to complete their degrees. In 1976, he founded the University of Phoenix, one of the largest for-profit colleges in the US which at its peak in 2010 enrolled almost 600,000 students.

    Other well-known names in the business of for-profit higher education: Walden University (founded in 1970), Capella University (founded in 1993), Laureate Education (founded in 1999), Devry University (founded in 1931), Education Management Corporation (founded in 1962), Strayer University (founded in 1892), Kaplan University (founded in 1937 as The American Institute of Commerce), and Corinthian Colleges (founded in 1995 and defunct in 2015).

    It’s important to recognize the connection of these for-profit universities to older career colleges, and it would be a mistake to see these organizations as distinct from the more recent development of MOOCs and coding bootcamps. Kaplan, for example, acquired the code school Dev Bootcamp in 2014. Laureate Education is an investor in the MOOC provider Coursera. The Apollo Education Group, the University of Phoenix’s parent company, is an investor in the coding bootcamp The Iron Yard.


    Promises, Promises

    Much like the worries about today’s for-profit universities, even the earliest commercial colleges were frequently accused of being “purely business speculations” – “diploma mills” – mishandled by administrators who put the bottom line over the needs of students. There were concerns about the quality of instruction and about the value of the education students were receiving.

    That’s part of the apprehension about for-profit universities’ (almost) most recent manifestations too: that these schools are charging a lot of money for a certification that, at the end of the day, means little. But at least the nineteenth-century commercial colleges were affordable, UC Berkeley history professor Caitlin Rosenthal argues in a 2012 op-ed in Bloomberg:

    The most common form of tuition at these early schools was the “life scholarship.” Students paid a lump sum in exchange for unlimited instruction at any of the college’s branches – $40 for men and $30 for women in 1864. This was a considerable fee, but much less than tuition at most universities. And it was within reach of most workers – common laborers earned about $1 per day and clerks’ wages averaged $50 per month.

    Many of these “life scholarships” promised that students who enrolled would land a job – and if they didn’t, they could always continue their studies. That’s quite different than the tuition at today’s colleges – for-profit or not-for-profit – which comes with no such guarantee.

    Interestingly, several coding bootcamps do make this promise. A 48-week online program at Bloc will run you $24,000, for example. But if you don’t find a job that pays $60,000 after four months, your tuition will be refunded, the startup has pledged.


    According to a recent survey of coding bootcamp alumni, 66% of graduates do say they’ve found employment (63% of them full-time) in a job that requires the skills they learned in the program. 89% of respondents say they found a job within 120 days of completing the bootcamp. Yet 21% say they’re unemployed – a number that seems quite high, particularly in light of that supposed shortage of programming talent.

    For-Profit Higher Ed: Who’s Being Served?

    The gulf between for-profit higher ed’s promise of improved job prospects and the realities of graduates’ employment, along with the price tag on its tuition rates, is one of the reasons that the Obama Administration has advocated for “gainful employment” rules. These would measure and monitor the debt-to-earnings ratio of graduates from career colleges and in turn penalize those schools whose graduates had annual loan payments of more than 8% of their wages or 20% of their discretionary earnings. (The gainful employment rules only apply to those schools that are eligible for Title IV federal financial aid.)
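    As a rough illustration of how such a test works (the actual regulation has more moving parts, including an intermediate “zone” and a definition of discretionary income pegged to the federal poverty guideline, so the numbers below are assumptions rather than the official formula):

        # Hypothetical back-of-the-envelope check of a debt-to-earnings test.
        POVERTY_GUIDELINE = 11_770  # assumed single-person federal poverty guideline

        def debt_to_earnings(annual_loan_payment, annual_earnings):
            """Return the two ratios this kind of test looks at, plus threshold checks."""
            discretionary = max(annual_earnings - 1.5 * POVERTY_GUIDELINE, 0)
            return {
                "share_of_total_earnings": annual_loan_payment / annual_earnings,
                "share_of_discretionary": (annual_loan_payment / discretionary
                                           if discretionary else float("inf")),
                "over_8_pct_of_earnings": annual_loan_payment > 0.08 * annual_earnings,
                "over_20_pct_of_discretionary": annual_loan_payment > 0.20 * discretionary,
            }

        # Example: $3,600 a year in loan payments on $35,000 of earnings.
        print(debt_to_earnings(3_600, 35_000))

    In this made-up example the payments exceed both thresholds, which is the kind of outcome the rules are meant to flag.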

    The data is still murky about how much debt attendees at coding bootcamps accrue and how “worth it” these programs really might be. According to the aforementioned survey, the average tuition at these programs is $11,852. This figure might be a bit deceptive, as the price tag and the length of bootcamps vary greatly. Moreover, many programs, such as App Academy, offer their program for free (well, plus a $5000 deposit) but then require that graduates repay up to 20% of their first year’s salary to the school. So while the tuition might appear to be low in some cases, the indebtedness might actually be quite high.
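    To put rough numbers on that deferred-tuition model (a purely hypothetical illustration, since terms vary by school and cohort): a graduate who lands a $70,000 job and owes 20% of her first year’s salary would pay $14,000, on top of any deposit, which is already more than the $11,852 average up-front tuition reported above.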

    According to Course Report’s survey, 49% of graduates say that they paid tuition out of their own pockets, 21% say they received help from family, and just 1.7% say that their employer paid (or helped with) the tuition bill. Almost 25% took out a loan.

    That percentage – those going into debt for a coding bootcamp program – has increased quite dramatically over the last few years. (Less than 4% of graduates in the 2013 survey said that they had taken out a loan). In part, that’s due to the rapid expansion of the private loan industry geared towards serving this particular student population. (Incidentally, the two ed-tech companies which have raised the most money in 2015 are both loan providers: SoFi and Earnest. The former has raised $1.2 billion in venture capital this year; the latter $245 million.)


    The Obama Administration’s newly proposed “EQUIP” experiment will open up federal financial aid to some coding bootcamps and other ed-tech providers (like MOOC platforms), but it’s important to underscore some of the key differences here between federal loans and private-sector loans: federal student loans don’t have to be repaid until you graduate or leave school; federal student loans offer forbearance and deferment if you’re struggling to make payments; federal student loans have a fixed interest rate, often lower than private loans; federal student loans can be forgiven if you work in public service; federal student loans (with the exception of PLUS loans) do not require a credit check. The last of these in particular might help to explain the demographics of those who are currently attending coding bootcamps: students who have to pay out of pocket or take private loans are much less likely to be low-income. Indeed, according to Course Report’s survey, the cost of the bootcamps and whether or not they offered a scholarship were among the least important factors when students chose a program.

    Here’s a look at some coding bootcamp graduates’ demographic data (as self-reported):

    Age: mean 30.95
    Gender: female 36.3%; male 63.1%
    Ethnicity: American Indian 1.0%; Asian American 14.0%; Black 5.0%; Other 17.2%; White 62.8%
    Hispanic origin: yes 20.3%; no 79.7%
    US citizen: born in the US 78.2%; naturalized 9.7%; no 12.2%
    Education: high school dropout 0.2%; high school graduate 2.6%; some college 14.2%; Associate’s degree 4.1%; Bachelor’s degree 62.1%; Master’s degree 14.2%; professional degree 1.5%; doctorate degree 1.1%

    (According to several surveys of MOOC enrollees, these students also tend to be overwhelmingly male, from more affluent neighborhoods, and already in possession of Bachelor’s degrees. The median age of MITx registrants is 27.)

    It’s worth considering how the demographics of students in MOOCs and coding bootcamps may (or may not) be similar to those enrolled at other for-profit post-secondary institutions, particularly since all of these programs tend to invoke the rhetoric about “democratizing education” and “expanding access.” Access for whom?

    Some two million students were enrolled in for-profit colleges in 2010, up from 400,000 a decade earlier. These students are disproportionately older, African American, and female when compared to the entire higher ed student population. While one in 20 of all students is enrolled in a for-profit college, one in 10 African American students, one in 14 Latino students, and one in 14 first-generation college students are enrolled at a for-profit. Students at for-profits are more likely to be single parents. They’re less likely to enter with a high school diploma. Dependent students in for-profits have about half as much family income as students in not-for-profit schools. (This demographic data is drawn from the NCES and from Harvard University researchers David Deming, Claudia Goldin, and Lawrence Katz in their 2013 study on for-profit colleges.)

    Deming, Goldin, and Katz argue that

    The snippets of available evidence suggest that the economic returns to students who attend for-profit colleges are lower than those for public and nonprofit colleges. Moreover, default rates on student loans for proprietary schools far exceed those of other higher-education institutions.


    According to one 2010 report, just 22% of first-time, full-time students pursuing Bachelor’s degrees at for-profit colleges in 2008 graduated, compared to 55% and 65% of students at public and private non-profit universities respectively. Of the more than 5000 career programs that the Department of Education tracks, 72% of those offered by for-profit institutions produce graduates who earn less than high school dropouts.

    For their part, today’s MOOCs and coding bootcamps also boast that their students will find great success on the job market. Coursera, for example, recently surveyed students who’d completed one of its online courses, and 72% of those who responded said they had experienced “career benefits.” But without the mandated reporting that comes with federal financial aid, a lot of what we know about their student population and student outcomes remains pretty speculative.

    What kind of students benefit from coding bootcamps and MOOC programs, the new for-profit education? We don’t really know… although based on the history of higher education and employment, we can guess.

    EQUIP and the New For-Profit Higher Ed

    On October 14, the Obama Administration announced a new initiative, the Educational Quality through Innovative Partnerships (EQUIP) program, which will provide a pathway for unaccredited education programs like coding bootcamps and MOOCs to become eligible for federal financial aid. According to the Department of Education, EQUIP is meant to open up “new models of education and training” to low income students. In a press release, it argues that “Some of these new models may provide more flexible and more affordable credentials and educational options than those offered by traditional higher institutions, and are showing promise in preparing students with the training and education needed for better, in-demand jobs.”

    The EQUIP initiative will partner accredited institutions with third-party providers, loosening the “50% rule” that prohibits accredited schools from outsourcing more than 50% of an accredited program. Since bootcamps and MOOC providers “are not within the purview of traditional accrediting agencies,” the Department of Education says, “we have no generally accepted means of gauging their quality.” So those organizations that apply for the experiment will have to provide an outside “quality assurance entity,” which will help assess “student outcomes” like learning and employment.

    One does have to wonder whether, by making financial aid available for bootcamps and MOOCs, the Obama Administration is simply opening the doors for more of precisely the sort of practices that the for-profit education industry has long been accused of: expanding rapidly, lowering the quality of instruction, focusing on marketing to certain populations (such as veterans), and profiting off of taxpayer dollars.

    Who benefits from the availability of aid? And who benefits from its absence? (“Who” here refers to students and to schools.)

    Shawna Scott argues in “The Code School-Industrial Complex” that without oversight, coding bootcamps re-inscribe the dominant beliefs and practices of the tech industry. Despite all the talk of “democratization,” this is a new form of gatekeeping.

    Before students are even accepted, school admission officers often select for easily marketable students, which often translates to students with the most privileged characteristics. Whether through intentionally targeting those traits because it’s easier to ensure graduates will be hired, or because of unconscious bias, is difficult to discern. Because schools’ graduation and employment rates are their main marketing tool, they have a financial stake in only admitting students who are at low risk of long-term unemployment. In addition, many schools take cues from their professional developer founders and run admissions like they hire for their startups. Students may be subjected to long and intensive questionnaires, phone or in-person interviews, or be required to submit a ‘creative’ application, such as a video. These requirements are often onerous for anyone working at a paid job or as a caretaker for others. Rarely do schools proactively provide information on alternative application processes for people of disparate ability. The stereotypical programmer is once again the assumed default.

    And so, despite the recent moves to sanction certain ed-tech experiments, some in the tech sector have been quite vocal in their opposition to more regulations governing coding schools. It’s not just EQUIP either; there was much outcry last year after several states, including California, “cracked down” on bootcamps. Many others have framed the entire accreditation system as a “cabal” that stifles innovation. “Innovation” in this case implies alternate certificate programs – not simply Associate’s or Bachelor’s degrees – in timely, technical topics demanded by local/industry employers.


    The Forgotten Tech Ed: Community Colleges

    Of course, there is an institution that’s long offered alternate certificate programs in timely, technical topics demanded by local/industry employers, and that’s the community college system.

    Vox’s Libby Nelson observed that “The NYT wrote more about Harvard last year than all community colleges combined,” and certainly the conversations in the media (and elsewhere) often ignore that community colleges exist at all, even though these schools educate almost half of all undergraduates in the US.

    Like much of public higher education, community colleges have seen their funding shrink in recent decades and have been tasked to do more with less. For community colleges, it’s a lot more with a lot less. Open enrollment, for example, means that these schools educate students who require more remediation. Yet despite many community college students being “high need,” community colleges spend far less per pupil than do four-year institutions. Deep budget cuts have also meant that even with their open enrollment policies, community colleges are having to restrict admissions. In 2012, some 470,000 students in California were on waiting lists, unable to get into the courses they need.

    This is what we know from history: as the funding for public higher ed decreased – for two- and four-year schools alike – for-profit higher ed expanded, promising precisely what today’s MOOCs and coding bootcamps now insist they’re the first and the only schools to do: to offer innovative programs, training students in the kinds of skills that will lead to good jobs. History tells us otherwise…
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and on Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and is working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, on which an earlier version of this essay first appeared, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.

    Back to the essay

  • "Still Ahead Somehow:" Paul Amar’s The Security Archipelago

    "Still Ahead Somehow:" Paul Amar’s The Security Archipelago

    A Review of Paul Amar’s The Security Archipelago: Human-Security States, Sexuality Politics, and the End of Neoliberalism (Durham and London: Duke University Press, 2013).

    By Neel Ahuja

    One of the most widely reported news stories of the 2011 revolution in Egypt involved sexual assaults and other physical attacks on women in Cairo’s Tahrir Square, where mass protests led to the ouster of former President Hosni Mubarak. Paul Amar’s singular book The Security Archipelago explores, among other topics, the Egyptian military council’s attempt to burnish its own authority to “rescue the nation” and its “dignity” by constructing the Arab Spring uprising as a destructive site of violence and moral degradation (3). Mirroring the racialized discourse of international news media who invoked animal metaphors to represent dissent at Tahrir as an articulation of pathological urban violence and frenzy (203), the counter-revolutionary campaign allowed the military to arrest and incarcerate protesters by associating them with demeaned markers of class status and sexuality.

    For Amar, this conjunction of moralizing statism and the militarization of social life is indicative of a particular governmental form he calls “human security,” a set of transnational juridical, political, economic, and police practices and discourses that become especially legible in sites of urban crisis and struggle. Amar names four interlocking logics that constitute human security: evangelical humanitarianism, police paramilitarism, juridical personalism, and workerist empowerment (7). He unveils these logics by constructing a dense analysis of security politics linking the megacities of Cairo and Rio de Janeiro.

    The chapters explore crisis moments that reveal connections between the militarization of police, the development of urban planning and development policy, tourism, the management of labor processes, and racialized and gendered struggles over rights and citizenship. Such connections arise in crises around public protest, attempts by municipal and national authorities to market heritage (in the form of Islamic heritage architecture or samba music) to tourists, coalitions between labor and evangelical Christian groups to combat trafficking and corruption, the attempts of 9/11 plotter Muhammad Atta to develop a theory of Islamic urban planning, and the policing of city space during major international development meetings. These wide-ranging case studies ground the book’s critical security analysis in sites of struggle, making important contributions to the understanding of the spread of urban violence and progressive social policy in Brazil and the rise of left-right coalitions in Islamic urban planning and revolutionary uprisings in Egypt.

    Throughout the book, public contestation over the permissible limits of urban sexuality emerges as a key factor inciting securitization. It serves as a marker of cultural tradition, a policed indicator of urban space and capital networking, and a marker of political dissent. For Amar, the new subjects of security “are portrayed as victimized by trafficking, prostituted by ‘cultures of globalization,’ sexually harassed by ‘street’ forms of predatory masculinity, or ‘debauched’ by liberal values” (15). In this way, the “human” at the heart of “human security” is a figure rendered precarious by the public articulation of sexuality with processes of economic and social change.

    If this method of transnational scholarship showcases the unique strengths of Amar’s interdisciplinary training, Portuguese and Arabic language skills, and past work as a development specialist, it brilliantly articulates a set of connections between the cities of Rio and Cairo evident in their parallel experiences of neoliberal economic policies, redevelopment, militarization of policing, NGO intervention, and rise as significant “semiperipheral” or “first-third-world” metropoles. In contrast to racialized international relations and conflict studies scholarship that fails continually to break from the mythologies of the clash of civilizations, Amar’s book offers a fascinating analysis of how religious politics, policing, and workerist humanisms interface in the urban crises of two megacities whose representation is often overwritten by stereotyped descriptions of either oriental despotism (Cairo) or tropicalist transgression (Rio).

    These cities, in fact, share geographic, economic, and political connections that justify what Amar describes as an archipelagic method: “The practices, norms, and institutional products of [human security] struggles have… traveled across an archipelago, a metaphorical island chain, of what the private security industry calls ‘hotspots’–enclaves of panic and laboratories of control–the most hypervisible of which have emerged in Global South megacities” (15-16). The security archipelago is also a formation that includes but transcends the state; it is “parastatal” and reflects the ways in which states in the Global South, NGO activists, and state attempts to humanize security interventions have produced a set of governmentalities that attempt to incorporate and govern public challenges to austerity politics and militarism.

    As such, Amar’s book offers a two-pronged challenge to dominant theories of neoliberalism. First, it clarifies that although many of the wealthy countries still battle over a politics of austerity, the so-called Washington Consensus combining financial deregulation, privatization, and reduction of trade barriers no longer holds sway internationally or even in its spaces of origin. Indeed, Amar claims that even the Beijing Consensus — the turn since the 1990s to a strong state hand in development investment combined with the controlled growth of highly regulated markets — is being supplanted by the parastatal form of the human security regime. Second, this line of thought requires for Amar a methodological shift. Amar claims, “we can envision an end to the term neoliberalism as an overburdened and overextended interpretive lens for scholars” given “the demise, in certain locations and circuits, of a hegemonic set of market-identified subjects, locations, and ideologies of politics” (236). The Security Archipelago offers an alternative to theories of globalization that privilege imperial states as the primary forces governing the production of transnational power dynamics. Without making the common move of romanticizing a static vision of either locality or indigeneity in the conceptualization of resistance to globalization, Amar locates in the semiperiphery a crossroads between the forces of national development and transnational capital. It is in this crossroads where resistances to the violence of austerity are parlayed into new security regimes in the name of the very human endangered by capitalism’s market authoritarianism.

    It is notable that the analysis of sexuality, with its attendant moral incitements to security, largely drops out of Amar’s concluding analysis of the debates on the end of neoliberalism. He does mention sexuality when proclaiming a shift from a consuming subject to a worker in the postneoliberal transition: “postneoliberal work centers more on the fashioning of moralization, care, humanization, viable sexualities, and territories that can be occupied. And the worker can see production as the collective work of vigilance and purification, which all too often is embedded through paramilitarization and enforcement practices” (243). While the book expertly reveals the emphasis on emergent forms of moral labor and securitizing care in the public regulation of sexuality, it also documents that moral crises and policing around the sexuality of samba, for example, are layered by the nexus of gentrification, private redevelopment, and transnational tourism that commonly attract the label neoliberalism. This point does not directly undermine Amar’s argument but suggests that further discussion of sexuality’s relation to human security regimes might engender an analytic revision of the notion of postneoliberal transition. The public articulation of sexuality as the site of urban securitization might rather reveal the regeneration of intersecting consumption forms and affective labors of logics of marketization and securitization that are divided geographically but dynamically interrelated.

    The fact that Amar’s book raises this problem reveals the significance of the study for moving forward scholarship on sexuality, security, and globality — as individual objects of study and intertwined ones. As scholars focusing, for example, on homonationalist marriage practices in the global north continue to use the analytic frame of neoliberalism, Amar’s study might press for how the moral articulation of the marriage imperative exerts a securitizing force that transcends market logics. Similarly, Amar’s focus on both sexuality and the semiperiphery offers significant geographic and methodological disruptions to the literatures on neoliberalism, the rise of East Asian financial capital, and crisis theory. His unique method challenges interdisciplinary social theorizing to grapple with the archipelagic nature of contemporary forces of social precarity and securitization.

    Neel Ahuja is associate professor of postcolonial studies in the Department of English and Comparative Literature at UNC. He is the author of the forthcoming Bioinsecurities: Disease Interventions, Empire, and the Government of Species (Duke UP).

  • How Ex Machina Abuses Women of Color and Nobody Cares Cause It's Smart

    How Ex Machina Abuses Women of Color and Nobody Cares Cause It's Smart

    a review of Alex Garland, dir. & writer, Ex Machina (A24/Universal Films, 2015)
    by Sharon Chang
    ~

    In April of this year British science fiction thriller Ex Machina opened in the US to almost unanimous rave reviews. The film was written and directed by Alex Garland, author of bestselling 1996 novel The Beach (also made into a movie) and screenwriter of 28 Days Later (2002) and Never Let Me Go (2010). Ex Machina is Garland’s directorial debut. It’s about a young white coder named Caleb who gets the opportunity to visit the secluded mountain home of his employer Nathan, pioneering programmer of the world’s most powerful search engine (Nathan’s appearance is ambiguous but he reads non-white and the actor who plays him is Guatemalan). Caleb believes the trip innocuous but quickly learns that Nathan’s home is actually a secret research facility in which the brilliant but egocentric and obnoxious genius has been developing sophisticated artificial intelligence. Caleb is immediately introduced to Nathan’s most upgraded construct–a gorgeous white fembot named Ava. And the mind games ensue.

    As the week unfolds the only things we know for sure are (a) imprisoned Ava wants to be free, and, (b) Caleb becomes completely enamored and wants to “rescue” her. Other than that, nothing is clear. What are Ava’s true intentions? Does she like Caleb back or is she just using him to get out? Is Nathan really as much an asshole as he seems or is he putting on a show to manipulate everyone? Who should we feel sorry for? Who should we empathize with? Who should we hate? Who’s the hero? Reviewers and viewers alike are melting in intellectual ecstasy over this brain-twisty movie. The Guardian calls it “accomplished, cerebral film-making”; Wired calls it “one of the year’s most intelligent and thought-provoking films”; Indiewire calls it “gripping, brilliant and sensational”. Alex Garland apparently is the smartest, coolest new director on the block. “Garland understands what he’s talking about,” says RogerEbert.com, and goes “to the trouble to explain more abstract concepts in plain language.”

    Right.

    I like sci-fi and am a fan of Garland’s previous work so I was excited to see his new flick. But let me tell you, my experience was FAR from “brilliant” and “heady” like the multitudes of moonstruck reviewers claimed it would be. Actually, I was livid. And weeks later–I’m STILL pissed. Here’s why…

    *** Spoiler Alert ***

    You wouldn’t know it from the plethora of glowing reviews out there cause she’s hardly mentioned (telling in and of itself) but there’s another prominent fembot in the film. Maybe fifteen minutes into the story we’re introduced to Kyoko, an Asian servant sex slave played by mixed-race Japanese/British actress Sonoya Mizuno. Though bound by abusive servitude, Kyoko isn’t physically imprisoned in a room like Ava because she’s compliant, obedient, willing.


    Kyoko first appears on screen demure and silent, bringing a surprised Caleb breakfast in his room. Of course I recognized the trope of servile Asian woman right away and, as I wrote in February, how quickly Asian/whites are treated as non-white when they look ethnic in any way. I was instantly uncomfortable. Maybe there’s a point, I thought to myself. But soon after we see Kyoko serving sushi to the men. She accidentally spills food on Caleb. Nathan loses his temper, yells at her, and then explains to Caleb that she can’t understand, which makes her incompetence even more infuriating. This is how we learn Kyoko is mute and can’t speak. Yep. Nathan didn’t give her a voice. He further programmed her, purportedly, not to understand English.

    Sex slave “Kyoko” played by Japanese/British actress Sonoya Mizuno (image source: i09.com)

    I started to get upset. If there was a point, Garland had better get to it fast.

    Unfortunately the treatment of Kyoko’s character just keeps spiraling. We continue to learn more and more about her horrible existence in a way that feels gross only for shock value rather than for any sort of deconstruction, empowerment, or liberation of Asian women. She is always at Nathan’s side, ready and available, for anything he wants. Eventually Nathan shows Caleb something else special about her. He’s coded Kyoko to love dancing (“I told you you’re wasting your time talking to her. However you would not be wasting your time–if you were dancing with her”). When Nathan flips a wall switch that washes the room in red lights and music then joins a scantily-clad gyrating Kyoko on the dance floor, I was overcome by disgust:

    [Video: https://www.youtube.com/watch?v=hGY44DIQb-A]

    I recently also wrote about Western exploitation of women’s bodies in Asia (incidentally also in February), in particular noting it was US imperialistic conquest that jump-started Thailand’s sex industry. By the 1990s several million tourists from Europe and the U.S. were visiting Thailand annually, many specifically for sex and entertainment. Writer Deena Guzder points out in “The Economics of Commercial Sexual Exploitation” for the Pulitzer Center on Crisis Reporting that Thailand’s sex tourism industry is driven by acute poverty. Women and girls from poor rural families make up the majority of sex workers. “Once lost in Thailand’s seedy underbelly, these women are further robbed of their individual agency, economic independence, and bargaining power.” Guzder gloomily predicts, “If history repeats itself, the situation for poor Southeast Asian women will only further deteriorate with the global economic downturn.”

    Red Light District, Phuket (image source: phuket.com)

    You know who wouldn’t be a stranger to any of this? Alex Garland. His first novel, The Beach, is set in Thailand and his second novel, The Tesseract, is set in the Philippines, both developing nations where Asian women continue to be used and abused for Western gain. In a 1999 interview with journalist Ron Gluckman, Garland said he made his first trip to Asia as a teenager in high school and had been back at least once or twice almost every year since. He also lived in the Philippines for 9 months. In a perhaps telling choice of words, Gluckman wrote that Garland had “been bitten by the Asian bug, early and deep.” At the time many Asian critics were criticizing The Beach as a shallow look at the region by an uninformed outsider, but Garland protested in his interview:

    A lot of the criticism of The Beach is that it presents Thais as two dimensional, as part of the scenery. That’s because these people I’m writing about–backpackers–really only see them as part of the scenery. They don’t see them or the Thai culture. To them, it’s all part of a huge theme park, the scenery for their trip. That’s the point.

    I disagree severely with Garland. In insisting on his right to portray people of color one way while dismissing how those people see themselves, he not only centers his privileged perspective (i.e. white, male) but shows determined disinterest in representing oppressed people transformatively. It leads me to wonder how much he really knows or cares about inequity and uplifting marginalized voices. Indeed in Ex Machina the only point that Garland ever seems to make is that racist/sexist tropes exist, not that we’re going to do anything about them. And that kind of non-critical non-resistant attitude does more to reify and reinforce than anything else. Take for instance a recent interview with Cinematic Essential (one of the few where the interviewer asked about race), in which Garland had this to say about stereotypes in his new film:

    Sometimes you do things unconsciously, unwittingly, or stupidly, I guess, and the only embedded point that I knew I was making in regards to race centered around the tropes of Kyoko [Sonoya Mizuno], a mute, very complicit Asian robot, or Asian-appearing robot, because of course, she, as a robot, isn’t Asian. But, when Nathan treats the robot in the discriminatory way that he treats it, I think it should be ambivalent as to whether he actually behaves this way, or if it’s a very good opportunity to make him seem unpleasant to Caleb for his own advantage.

    First, approaching race “unconsciously” or “unwittingly” is never a good idea and moreover a classic symptom of white willful ignorance. Second, Kyoko isn’t Asian because she’s a robot? Race isn’t biological or written into human DNA. It’s socio-politically constructed and assigned usually by those in power. Kyoko is Asian because she has been made that way not only by her oppressor, Nathan, but by Garland himself, the omniscient creator of all. Third, Kyoko represents the only embedded race point in the movie? False. There are two other women of color who play enslaved fembots in Ex Machina and their characters are abused just as badly. “Jasmine” is one of Nathan’s early fembots. She’s Black. We see her body twice. Once being instructed how to write and once being dragged lifeless across the floor. You will never recognize real-life Black model and actress Symara A. Templeman in the role, however. Why? Because her always naked body is inexplicably headless when it appears. That’s right. One of the sole Black bodies/persons in the entire film does not have (per Garland’s writing and direction) a face, head, or brain.

    Symara A. Templeman, who played “Jasmine” in Ex Machina (image source: Templeman on Google+)

    “Jade,” played by Asian model and actress Gana Bayarsaikhan, is presumably also a less successful fembot predating Kyoko but perhaps succeeding Jasmine. She too is always shown naked but, unlike Jasmine, she has a head, and, unlike Kyoko, she speaks. We see her being questioned repeatedly by Nathan while trapped behind glass. Jade is resistant and angry. She doesn’t understand why Nathan won’t let her out and escalates to the point that we are led to believe she is decommissioned for her defiance.

    It’s significant that Kyoko, a mixed-race Asian/white woman, later becomes the “upgraded” Asian model. It’s also significant that at the movie’s end white Ava finds Jade’s decommissioned body in a closet in Nathan’s room and skins it to cover her own body. (Remember when Katy Perry joked in 2012 she was obsessed with Japanese people and wanted to skin one?). Ava has the option of white bodies but after examining them meticulously she deliberately chooses Jade. Despite having met Jasmine previously, her Black body is conspicuously missing from the closets full of bodies Nathan has stored for his pleasure and use. And though Kyoko does help Ava kill Nathan in the end, she herself is “killed” in the process (i.e. never free) and Ava doesn’t care at all. What does all this show? A very blatant standard of beauty/desire that is not only male-designed but clearly a light, white, and violently assimilative one.

    Gana Bayarsaikhan, who played “Jade” in Ex Machina (image source: profile-models.com)

    I can’t even begin to tell you how offended and disturbed I was by the treatment of women of color in this movie. I slept restlessly the night after I saw Ex Machina, woke up muddled at 2:45 AM and–still clinging to the hope that there must have been a reason for treating women of color this way (Garland’s brilliant, right?)–furiously went to work reading interviews and critiques. Aside from a few brief mentions of race/gender, I found barely anything addressing the film’s obvious deployment of racialized gender stereotypes for its own benefit. For me this movie will be joining the long list of many so-called film classics I will never be able to admire. Movies where supposed artistry and brilliance are acceptable excuses for “unconscious” “unwitting” racism and sexism. Ex Machina may be smart in some ways, but it damn sure isn’t in others.

    Correction (8/1/2015): An earlier version of this post incorrectly stated that actress Symara A. Templeman was the only Black person in the film. The post has been updated to indicate that the movie also featured at least one other Black actress, Deborah Rosan, in an uncredited role as Office Manager.

    _____

    Sharon H. Chang is an author, scholar, sociologist and activist. She writes primarily on racism, social justice and the Asian American diaspora with a feminist lens. Her pieces have appeared in Hyphen Magazine, ParentMap Magazine, The Seattle Globalist, on AAPI Voices and Racism Review. Her debut book, Raising Mixed Race: Multiracial Asian Children in a Post-Racial World, is forthcoming through Paradigm Publishers as part of Joe R. Feagin’s series “New Critical Viewpoints on Society.” She also sits on the board for Families of Color Seattle and is on the planning committee for the biennial Critical Mixed Race Studies Conference. She blogs regularly at Multiracial Asian Families, where an earlier version of this post first appeared.

    The editors thank Dorothy Kim for referring us to this essay.

    Back to the essay

  • Good Wives: Algorithmic Architectures as Metabolization

    Good Wives: Algorithmic Architectures as Metabolization

    by Karen Gregory

    ~

    Text of a talk delivered at Digital Labor: Sweatshops, Picket Lines, and Barricades, New York, November 14th-16th, 2014.

    This talk has a few different starting points, which include a forum I held last March on Angela Mitropoulos’ work Contract and Contagion that explored the expansions and reconfigurations of capital, time, and work through the language of Oikonomics or the “properly productive household”, as well as the work that I was doing with Patricia Clough, Josh Scannell, and Benjamin Haber on a paper called “The Datalogical Turn”, which explores how the coupling of large scale databases and adaptive algorithms “are calling forth a new onto-logic of sociality or the social itself” as well as, I confess, no small share of binge-watching the TV show The Good Wife. So, please bear with me as I take you through my thinking here. What I am trying to do in my work of late is a form of feminist thinking that can take quite seriously not only the onto-sociality of data and the ways in which bodily practices are made to extend far and wide beyond the body, but a form of thinking that can also understand the paradox of our times: How and why has digital abundance been ushered in on the heels of massive income inequality and political dispossession? In some ways, the last part of that sentence (why inequality and political dispossession) is actually easier to account for than understanding the role that such “abundance” has played in the reconfiguration or transfers of wealth and power.

    So, let me back up here for a minute… Already in 1992, Deleuze wrote that a disciplinary society had given way to a control society. Writing, “we are in a generalized crisis in relation to all the environments of enclosure—prison, hospital, factory, school, family” and that “everyone knows that these institutions are finished, whatever the length of their expiration periods. It’s only a matter of administering their last rites and of keeping people employed until the installation of the new forces knocking at the door. These are the societies of control, which are in the process of replacing the disciplinary societies.” For Deleuze, whereas the disciplinary man was a “discontinuous producer of energy, the man of control is undulatory, in orbit, in a continuous network.” For such a human, Deleuze wrote, “surfing” has “replaced older sports.”

    We know, despite Marx’s theorization of “dead labor”, that digital, networked infrastructures have been active, even “vital”, agents of this shift from discipline to control or the shift from a capitalism of production and property to a capitalism of dispersion, a capitalism fit for circulation, relay, response, and feedback. As Deleuze writes, this is a capitalism fit for a “higher order” of production. I want to intentionally play on the words “higher word”, with their invocations of religiosity, faith, and hierarchy, because much of our theoretical work of late has been specifically developed to help us understand the ways in which such a “higher order” has been very successful in affectively reconfiguring and reformatting bodies and environments for its own purposes. We talk often of the modulation, pre-emption, extraction, and subsumption of elements once thought to be “immaterial” or spiritual, if you will, the some-“things” that lacked a full instantiation in the material world. I do understand that I am twisting Deleuze’s words here a bit (what he meant in the Postscript was a form of production that we now think of as flexible production, production on demand, or JIT production), but my thinking here is that the very notion of a higher order, a form of production considered progress in itself, has been very good at making us pray toward the light and at replacing the audial sensations of the church bell/factory clock with the blinding temporality of the speed of light itself. This blinding speed of light is related to what Marx called “circulation time,” or the annihilation of space through time, and it is this black hole of capital, this higher order of production and the ways in which we have theorized its metaphysics, which, I want to argue, have become the Via Negativa to a Capital that transcends thought. What I mean here is that this form of theorizing has really left us with a capital beyond reproach, a capital reinstated in and through the effects of what it is not—it is not a wage, it is not found in commodities, it is not ultimately a substance humans have access or rights to…

    In such a rapture of the higher order of the light, there has been a tendency to look away from concepts such as “foundations” or “limits” or quaint theories of units such as the “household”, but in Angela Mitropoulos’ work Contract and Contagion we find those concepts at the heart of her reading of the collapse of the time of work into that of life. For Mitropoulos, it is through the performativity and probabilistic terms of “the contract” (and not simply the contract of liberal sociality, but a contract as terms of agreement to the “right” genealogical transfer of wealth) that we should visualize the flights of capital. This broadened notion of the contract is a necessary term for fully grasping what is being brought into being on the heels of “the datalogical turn.”

    For Mitropoulos, it is the contract, which she links to the oath, the promise, the covenant, the bargain, and even faith in general, that “transforms contingency into necessity.” Contracts’ “ensuing contractualism” has been “amplified as an ontological precept.” Here, contract is fundamentally a precept that transforms life into a game (and I don’t mean simply gamified, but obviously we could talk about what gamification means for our sense of what is implied in contractual relations. Liberal contracts have tended to evoke their authority from the notion of autonomous and rational subjects—this is not exactly the same subject being invoked when you’re prompted to like every picture of a cat on the internet or have your attention directed to tiny little numbers in the corner of the screen to see who faved your post, although those Facebook numbers are micro-contracts. Ones you haven’t signed up for exactly.) For Mitropoulos, it is not just that contracts transform life into contingency; it is that they transform life into a game that must be played out of necessity. Taking up Pascal’s wager, Mitropoulos writes:

    the materiality of contractualism is that of a performativity installed by its presumption of the inexorable necessity of contingency; a presumption established by what I refer to here as the Pascalian premise that one must ‘play the game’ necessarily, that this is the only game available. This invalidates all idealist explanations of contract, including those which echo contractualism’s voluntarism in their understanding of (revolutionary) subjectivity. Performativity is the temporality of contract, and the temporal continuity of capitalism is uncertain.

    In other words, one has no choice but to gamble. God either exists or God does not exist. Both may be possible/virtual, but only one will be real/actual and it is via the wager that one must, out of necessity, come to understand God with and through contingency. It is through such wagering that the contract—as a form of measurable risk—comes into being. Measurable risk—measure and risk as entangled in speculation—became, we might say, the Via Affirmativa of early and industrializing capital.

    This transmutation of contingency into measure sits not only at the heart of the contract, but is, as Mitropoulos writes, “crucial to the legitimatized forms of subjectivity and relation that have accompanied the rise and expansion of capitalism across the world.” Yet, in addition to the historical project of situating an authorial, egalitarian, liberal, willful, and autonomous subject as a universal subject, contract is also interested in something that looks much more geometric, matrixial, spatializing, and impersonal. Contract does not solely care about “subject formation”, but also about the development of positions that compose a matrix—so that the matrix is made to be an engine of production and circulation. It is interested in the creation of an infrastructure of contracts, or points of contact that reconfigure a “divine” order in the face of contingency.

    The production of such a divine order is what Mitropoulos will link back to Oikonomia or the economics of the household, whereby bodies are parsed both spatially and socially into those who may enter into contract and those who may not. While contract becomes increasingly a narrow domain of human relations, Oikonomia is the intentional distribution and classification of bodies—human, animal, mineral—to ensure the “proper” (i.e. moral, economic, and political) functioning of the household, which functions like a molar node within the larger matrix. Given that contingency has been installed as the game that must be played, contract then comes to enforce a chain of being predicated on forms of naturalized servitude and obligation to the game. These are forms of naturalized servitude that are simultaneously built into the architecture of the household, as well as made invisible. As Anne Boyer has written in regard to the Greek household, it probably looked like this:

    In the front of the household were the women’s rooms—the gynaikonitis. Behind these were the common areas and the living quarters for the men—the andronitis. It was there one could find the libraries. The men’s area, along with the household, was also wherever was outside of the household—that is, the free man’s area was the oikos and the polis and was the world. The oikos was always at least a double space, and doubly perceived, just as what is outside of it was always a singular territory on which slaves and women trespassed. The singular nature of the outside was enforced by violence or the threat of it. The free men’s home was the women’s factory; also—for women and slaves—their factory was a home on its knees.

    This is not simply a division of labor, but as Boyer writes, “God made of women an indoor body, and made of men an outdoor one. And this scheme—what becomes, in future iterations, public and private, of production and reproduction, of waged work and unpaid servitude—is the order agreed upon to attend to the risk posed by those who make the oikos.”

    This is the order that we believe has given way as Fordism morphed into Post-Fordism and as the walls of these architectures have been smoothed by the flows of endlessly circulated, derivative, financialized capital. Yet, what Mitropoulos’ work points us toward is the persistence of the contract. Walls may crumble, but the foundations of contract re-instantiate, if not proliferate, in the wake of capital’s discovery of new terrains. The gynaikonitis, with its function to parse and delineate the labor of the household into a hierarchy of care work—from the wifely householding of management to the slave-like labor of “being ready to hand”—does not simply evaporate, but rather finds new instantiations among the flights of capital and new instantiations within its very infrastructure. Following Mitropoulos, we can argue that while certain forms of discipline seemingly come to an end, there is no shift to control without a proliferating matrix of contract whose function is to re-impose the very meaning—or rather, the very ontological necessity—of measure. It is through the persistent re-imposition of measure that a logic of the Oikos is never lost, ensuring—despite new configurations of capital—the genealogical transfer of wealth and the fundamentally dispossessing relations of servitude.

    Let me shift gears here ever so slightly and enter Alicia Florrick. Alicia is “The Good Wife”, whom many of you know from the TV show of the same name. She is the white fantasy super-hero and upper middle class working mother and ruthless lawyer who has successfully exploded onto the job market after years of raising her children and who is not only capable of leaning in after all those years, but of taking command of her own law firm and running for political office. Alicia is a “good wife” not solely because she has stood beside her philandering politician husband, but because as a white, upper-class mother and lawyer, she is nonetheless responsible for the utmost of feminized and invisible labor—that of (re)producing the very conditions of sociality. Her “womanly” or “wife-ish” goodness is predicated on her ability to transform what are essentially, in the show, a series of shitty experiences and shitty conditions, into conditions of possibility and potential. Alicia works endlessly, tirelessly (Does she ever sleep?) to find new avenues of possibility and configurations of the law in order to create a very specific form of “liberal” order and organization, believing as she does in the “power of rules” (in distinction to her religious daughter, a necessary trope used to highlight the fundamentally “moral” underpinning of secular order.)

    While the show is incredibly popular, no doubt because viewers desire to identify with Alicia’s capacity for labor and domination, to me the show is less about a real or even possible human figure than it is about a “good wife” and the social function that such a wife plays. In Oikonomic logic, a good wife is essential to the maintenance of contract because she is what metabolizes the worlds of inner and outer, simultaneously managing the inner domestic world of care within while parsing or keeping distinct its contagion from the outer world of contract. That Alicia is white, heteronormative, upper middle class, as well as upwardly mobile and legally powerful is essential to aligning her with the power of contract, yet her work is fundamentally that of parsing contagions to the system. Prison bodies and prison as a site of the “general population” haunt the show as though we are meant to forget that Alicia’s labor and its value are predicated on the existence of space beyond contract—a space of being removed from visibility. The figure of the good wife therefore not only operates as a shared boundary, but reproduces the distinctions between contractable relations and invisible, obligated labor or what I will call metabolization. Our increasingly digitized, datafied, networked, and surveilled world is fully populated by such good wives. We call them interfaces. But they should also be seen as a proliferation of contracts, which are rewriting the nature of who and what may participate.

    I would like to argue that good wives—or interfaces—and their necessary shadow world of obligated labor are useful frameworks for understanding the paradox I mentioned when I first began: how and why has digital abundance been ushered in on the heels of massive income inequality and political dispossession? In the logic of the Oikos, the good wife of the interface stands in both contradistinction and harmony with the metabolizing labor of the system she manages, which is composed of those specifically removed from “the labor” relation—domestic workers, care workers, prisoner laborers—those who must be “present” yet without recognition. The interface stands in both contradistinction and harmony with the algorithm that is made to be present and made to adapt. I want to argue that the “marriage” of the proliferation of interfaces with the ubiquitous and adaptive computation of digital algorithms is an Oikonomic infrastructure. It is a proliferation of contracts meant to ensure that the “contagion” of the algorithm, which I explore in a moment, remain “black boxed” or removed from visibility, while nonetheless ensuring that such contagious invisible work shore up the power of contract and its ability to redirect capital along genealogical lines. While Piketty doesn’t use the language of the Oikos, we might read the arrival of his work as a confirmation that we are in a moment re-establishing such a “household logic”—an expansion of capital that comes with quite a new foundation for the transfer of wealth.

    While the good wife or interface is a boundary which, borrowing from Celia Lury, marks a frame for the simultaneous capture and redeployment of data, it is the digital algorithm that undergirds or makes possible the interface’s ontological authority to “measure.” However, algorithms, if we follow Luciana Parisi, are not simply executing a string of code, not simply providing the interface with a “measure” of an existing world. Rather, algorithms are, as Luciana Parisi writes in her work on contagious architecture, performing entities that are “not simply representations of data, but are occasions of experience insofar as they prehend information in their own way.” Here Parisi is ascribing to the algorithm a Whiteheadian ontology of process, which sees the algorithm as its own spatio-temporal entity capable of grasping, including, or excluding data. Prehension implies not so much a choice as a relation of allure by which all entities (not only algorithms) call one another into being, or come into being as events or what Whitehead calls “occasions of experience.” For Parisi, via Whitehead, the algorithm is no longer simply a tool to accomplish a task, but an “actuality, defined by an automated prehension of data in the computational processing of probability.”

    Wedding in Ancient Greece.

    Much like the good wife of the Greek household, who must manage and organize—but is nonetheless dependent on—the contagious (and therefore made to be invisible) domestic labor of servants and slaves, the good wife of the interface manages and organizes the prehensive capacities of the algorithm, which are then misrecognized as simply “doing their job” or executing their code in a divine order of being. However, if we follow Parisi, prehension does not simply imply the direct “reproduction of that which is prehended”; rather, prehension should itself be understood as a “contagion.” Writing, “infinite amounts of data irreversibly enter and determine the function of algorithmic procedures. It follows that contagion describes the immanence of randomness in programming.” This contagion, for Parisi, means that “algorithmic prehensions are quantifications of infinite qualities that produce new qualities.” Rather than simply “doing their job”, as it were, algorithms are fundamentally generative. They are, for Parisi, producing not only new digital spaces, but also programmed architectural forms and urban infrastructures that “expose us to new mode of living, but new modes of thinking.” Algorithms are metabolizing a world of infinite and incomputable data that is then mistaken by the interfaces as a “measure” of that world—a measure that can not only stand in for contract, but can give rise to a proliferation of micro-contracts that populate the circulations of sociality.

    Control then, if we can return to that idea, has not simply come about as an undulation or a demise of discipline, but through an architecture of metabolization and measure that has never disavowed the function of contract. It is, in fact, an architecture quite successful at re-writing the very terms of contract arrangements. Algorithmic architectures may no longer seek to maintain the walls of the household, but they are nonetheless engaged in the rapid production of an Oikos all the same.


    _____

    Karen Gregory (@claudiakincaid) is the Title V Lecturer in Sociology in the Department of Interdisciplinary Arts and Sciences/Center for Worker Education at the City College of New York, where she is also the faculty head of City Lab. Her work explores the intersection of digital labor, affect, and contemporary spirituality, with an emphasis on the role of the laboring body. Karen is a founding member of CUNY Graduate Center’s Digital Labor Working Group and her writings have appeared in Women’s Studies Quarterly, Women and Performance, Visual Studies, Contexts, The New Inquiry, and Dis Magazine.

    Back to the essay