
  • Richard Hill – Too Big to Be (Review of Wu, The Curse of Bigness: Antitrust in the New Gilded Age)

    a review of Timothy Wu, The Curse of Bigness: Antitrust in the New Gilded Age (Random House/Columbia Global Reports, 2018)

    by Richard Hill

    ~

    Tim Wu’s brilliant new book analyses in detail one specific aspect and cause of the dominance of big companies in general and big tech companies in particular: the current unwillingness to modernize antitrust law to deal with concentration in the provision of key Internet services. Wu is a professor at Columbia Law School, and a contributing opinion writer for the New York Times. He is best known for his work on Net Neutrality theory. He is the author of the books The Master Switch and The Attention Merchants, as well as “Network Neutrality, Broadband Discrimination” and other works. In 2013 he was named one of America’s 100 Most Influential Lawyers, and in 2017 he was named to the American Academy of Arts and Sciences.

    What are the consequences of allowing unrestricted growth of concentrated private power, and abandoning most curbs on anticompetitive conduct? As Wu masterfully reminds us:

    We have managed to recreate both the economics and politics of a century ago – the first Gilded Age – and remain in grave danger of repeating more of the signature errors of the twentieth century. As that era has taught us, extreme economic concentration yields gross inequality and material suffering, feeding an appetite for nationalistic and extremist leadership. Yet, as if blind to the greatest lessons of the last century, we are going down the same path. If we learned one thing from the Gilded Age, it should have been this: The road to fascism and dictatorship is paved with failures of economic policy to serve the needs of the general public. (14)

    While increasing concentration, and its negative effects on social equity, is a general phenomenon, it is particularly concerning with regard to the Internet: “Most visible in our daily lives is the great power of the tech platforms, especially Google, Facebook, and Amazon, who have gained extraordinary power over our lives. With this centralization of private power has come a renewed concentration of wealth, and a wide gap between the rich and poor” (15). These trends have very real political effects: “The concentration of wealth and power has helped transform and radicalize electoral politics. As in the Gilded Age, a disaffected and declining middle class has come to support radically anti-corporate and nationalist candidates, catering to a discontent that transcends party lines” (15). “What we must realize is that, once again, we face what Louis Brandeis called the ‘Curse of Bigness,’ which, as he warned, represents a profound threat to democracy itself. What else can one say about a time when we simply accept that industry will have far greater influence over elections and lawmaking than mere citizens?” (15). And, I would add, what have we come to when some advocate that corporations should have veto power over public policies that affect all of us?

    Surely it is, or should be, obvious that current extreme levels of concentration are not compatible with the premises of social and economic equity, free competition, or democracy. And that “the classic antidote to bigness – the antitrust and other antimonopoly laws – might be recovered and updated to face the challenges of our times” (16). Those who doubt these propositions should read Wu’s book carefully, because he shows that they are true. My only suggestion for improvement would be to add a more detailed explanation of how network effects interact with economies of scale to favour concentration in the ICT industry in general, and in telecommunications and the Internet in particular. But this topic is well explained in other works.

    As Wu points out, antitrust law must not be restricted (as it is at present in the USA) “to deal with one very narrow type of harm: higher prices to consumers” (17). On the contrary, “It needs better tools to assess new forms of market power, to assess macroeconomic arguments, and to take seriously the link between industrial concentration and political influence” (18). The same has been said by other scholars (e.g. here, here, here and here), by a newspaper, an advocacy group, a commission of the European Parliament, a group of European industries, a well-known academic, and even by a plutocrat who benefitted from the current regime.

    Do we have a choice? Can we continue to pretend that we don’t need to adapt antitrust law to rein in the excessive power of the Internet giants? No: “The alternative is not appealing. Over the twentieth century, nations that failed to control private power and attend to the economic needs of their citizens faced the rise of strongmen who promised their citizens a more immediate deliverance from economic woes” (18). (I would argue that any resemblance to the election of US President Trump, to the British vote to leave the European Union, and to the rise of so-called populist parties in several European countries [e.g. Hungary, Italy, Poland, Sweden] is not coincidental).

    Chapter One of Wu’s book, “The Monopolization Movement,” provides historical background, reminding us that from the late nineteenth through the early twentieth century, dominant, sector-specific monopolies emerged and were thought to be an appropriate way to structure economic activity. In the USA, in the early decades of the twentieth century, under the Trust Movement, essentially every area of major industrial activity was controlled or influenced by a single man (but not the same man for each area), e.g. Rockefeller and Morgan. “In the same way that Silicon Valley’s Peter Thiel today argues that monopoly ‘drives progress’ and that ‘competition is for losers,’ adherents to the Trust Movement thought Adam Smith’s fierce competition had no place in a modern, industrialized economy” (26). This system rapidly proved to be dysfunctional: “There was a new divide between the giant corporation and its workers, leading to strikes, violence, and a constant threat of class warfare” (30). Popular resistance mobilized in both Europe and the USA, and it led to the adoption of the first antitrust laws.

    Chapter Two, “The Right to Live, and Not Merely to Exist,” reminds us that US Supreme Court Justice Louis Brandeis “really cared about … the economic conditions under which life is lived, and the effects of the economy on one’s character and on the nation’s soul” (33). The chapter outlines Brandeis’ career and what motivated him to combat monopolies.

    In Chapter Three, “The Trustbuster,” Wu explains how the 1901 assassination of US President McKinley, a devout supporter of unrestricted laissez-faire capitalism (“let well enough alone”, reminiscent of today’s calls for government to “do no harm” through regulation, and to “not fix it if it isn’t broken”), resulted in a fundamental change in US economic policy, when Theodore Roosevelt succeeded him. Roosevelt’s “determination that the public was ruler over the corporation, and not vice versa, would make him the single most important advocate of a political antitrust law” (47). He took on the great US monopolists of the time by enforcing the antitrust laws. “To Roosevelt, economic policy did not form an exception to popular rule, and he viewed the seizure of economic policy by Wall Street and trust management as a serious corruption of the democratic system. He also understood, as we should today, that ignoring economic misery and refusing to give the public what they wanted would drive a demand for more extreme solutions, like Marxist or anarchist revolution” (49). Subsequent US presidents and authorities continued to be “trust busters”, through the 1990s. At the time, it was understood that antitrust was not just an economic issue, but also a political issue: “power that controls the economy should be in the hands of elected representatives of the people, not in the hands of an industrial oligarchy” (54, citing Justice William Douglas). As we all know, “Increased industrial concentration predictably yields increased influence over political outcomes for corporations and business interests, as opposed to citizens or the public” (55). Wu goes on to explain why and how concentration exacerbates the influence of private companies on public policies and undermines democracy (that is, the rule of the people, by the people, for the people). And he outlines why and how Standard Oil was broken up (as opposed to becoming a government-regulated monopoly).

    The chapter then explains why very large companies might experience diseconomies of scale, that is, reduced efficiency. So very large companies compensate for their inefficiency by developing and exploiting “a different kind of advantages having less to do with efficiencies of operation, and more to do with its ability to wield economic and political power, by itself or conjunction with others. In other words, a firm may not actually become more efficient as it gets larger, but may become better at raising prices or keeping out competitors” (71). Wu explains how this is done in practice. The rest of this chapter summarizes the impact of the US presidential election of 1912 on US antitrust actions.

    Chapter Four, “Peak Antitrust and the Chicago School,” explains how, during the decades after World War II, strong antitrust laws were viewed as an essential component of democracy; and how the European Community (which later became the European Union) adopted antitrust laws modelled on those of the USA. However, in the mid-1960s, scholars at the University of Chicago (in particular Robert Bork) developed the theory that antitrust measures were meant only to protect consumer welfare, and thus no antitrust actions could be taken unless there was evidence that consumers were being harmed, that is, that a dominant company was raising prices. Harm to competitors or suppliers was no longer sufficient for antitrust enforcement. As Wu shows, this “was really laissez-faire reincarnated.”

    Chapter Five, “The Last of the Big Cases,” discusses two of the last really large US antitrust cases. The first was the breakup of the regulated de facto telephone monopoly, AT&T, which was initiated in 1974. The second was the case against Microsoft, which started in 1998 and ended in 2001 with a settlement that many consider to be a negative turning point in US antitrust enforcement. (A third big case, the 1969-1982 case against IBM, is discussed in Chapter Six.)

    Chapter Six, “Chicago Triumphant,” documents how the US Supreme Court adopted Bork’s “consumer welfare” theory of antitrust, leading to weak enforcement. As a consequence, “In the United States, there have been no trustbusting or ‘big cases’ for nearly twenty years: no cases targeting an industry-spanning monopolist or super-monopolist, seeking the goal of breakup” (110). Thus, “In a run that lasted some two decades, American industry reached levels of industry concentration arguably unseen since the original Trust era. A full 75 percent of industries witnessed increased concentration from the years 1997 to 2012” (115). Wu gives concrete examples: the old AT&T monopoly, which had been broken up, has reconstituted itself; there are only three large US airlines; there are three regional monopolies for cable TV; etc. But the greatest failure “was surely that which allowed the almost entirely uninhibited consolidation of the tech industry into a new class of monopolists” (118).

    Chapter Seven, “The Rise of the Tech Trusts,” explains how the Internet morphed from a very competitive environment into one dominated by large companies that buy up any threatening competitor. “When a dominant firm buys a nascent challenger, alarm bells are supposed to ring. Yet both American and European regulators found themselves unable to find anything wrong with the takeover [of Instagram by Facebook]” (122).

    The Conclusion, “A Neo-Brandeisian Agenda,” outlines Wu’s thoughts on how to address current issues regarding dominant market power. These include renewing the well-known practice of reviewing mergers; opening up the merger review process to public comment; renewing the practice of bringing major antitrust actions against the biggest companies; breaking up the biggest monopolies; adopting the market investigation law and practices of the United Kingdom; and recognizing that the goal of antitrust is not just to protect consumers against high prices, but also to protect competition per se, that is, to protect competitors, suppliers, and democracy itself. “By providing checks on monopoly and limiting private concentration of economic power, the antitrust law can maintain and support a different economic structure than the one we have now. It can give humans a fighting chance against corporations, and free the political process from invisible government. But to turn the ship, as the leaders of the Progressive era did, will require an acute sensitivity to the dangers of the current path, the growing threats to the Constitutional order, and the potential of rebuilding a nation that actually lives up to its greatest ideals” (139).

    In other words, something is rotten in the state of the Internet: it has “collection and exploitation of personal data”; it has “recently been used to erode privacy and to increase the concentration of economic power, leading to increasing income inequalities”; it has led to “erosion of the press, leading to erosion of democracy.” These developments are due to the fact that “US policies that ostensibly promote the free flow of information around the world, the right of all people to connect to the Internet, and free speech, are in reality policies that have, by design, furthered the geo-economic and geo-political goals of the US, including its military goals, its imperialist tendencies, and the interests of large private companies”; and to the fact that “vibrant government institutions deliberately transferred power to US corporations in order to further US geo-economical and geo-political goals.”

    Wu’s call for action is not just opportune, but necessary and important; at the same time, it is not sufficient.

    _____

    Richard Hill is President of the Association for Proper Internet Governance, and was formerly a senior official at the International Telecommunication Union (ITU). He has been involved in internet governance issues since the inception of the internet and is now an activist in that area, speaking, publishing, and contributing to discussions in various forums. Among other works he is the author of The New International Telecommunication Regulations and the Internet: A Commentary and Legislative History (Springer, 2014). He writes frequently about internet governance issues for The b2o Review Digital Studies magazine.

  • Zachary Loeb — Who Moderates the Moderators? On the Facebook Files

    by Zachary Loeb

    ~

    Speculative fiction is littered with fantastical tales warning of the dangers that arise when things get, to put it amusingly, too big. A researcher loses control of their experiment! A giant lizard menaces a city! Massive computer networks decide to wipe out humanity! A horrifying blob metastasizes as it incorporates all that it touches into its gelatinous self!

    Such stories generally contain at least a faint hint of the absurd. Nevertheless, silly stories can still contain important lessons, and some of the morals that one can pull from such tales are: that big things keep getting bigger, that big things can be very dangerous, and that sometimes things that get very big very fast wind up doing a fair amount of damage as what appears to be controlled growth is revealed to actually be far from managed. It may not necessarily always be a case of too big, as in size, but speculative fiction features no shortage of tragic characters who accidentally unleash some form of horror upon an unsuspecting populace because things were too big for that sorry individual to control. The mad scientist has a sad corollary in the figure of the earnest scientist who wails “I meant well” while watching their creation slip free from their grasp.

    Granted, if you want to see such a tale of the dangers of things getting too big and the desperate attempts to maintain some sort of control you don’t need to go looking for speculative fiction.

    You can just look at Facebook.

    With its publication of The Facebook Files, The Guardian has pried back the smiling façade of Zuckerberg’s monster to reveal a creature that an overwhelmed staff is desperately trying to contain with less than clear insight into how best to keep things under control. Parsing through a host of presentations and guidelines that are given to Facebook’s mysterious legion of content moderators, The Facebook Files provides insight into how the company determines what is and is not permitted on the website. It’s a tale that is littered with details about the desperate attempt to screen things that are being uploaded at a furious rate, with moderators often only having a matter of seconds in which they can make a decision as to whether or not something is permitted. It is a set of leaks that are definitely worth considering, as they provide an exposé of the guidelines Facebook moderators use when considering whether things truly qualify as revenge porn, child abuse, animal abuse, self-harm, unacceptable violence, and more. At the very least, the Facebook Files are yet another reminder of the continuing validity of Erich Fromm’s wise observation:

    What we use is not ours simply because we use it. (Fromm 2001, 225)

    In considering the Facebook Files it is worthwhile to recognize that the moderators are special figures in this story – they are not really the villains. The people working as actual Facebook moderators are likely not the same people who truly developed these guidelines. In truth, they likely weren’t even consulted. Furthermore, the moderators are almost certainly not the high-profile Facebook executives espousing techno-utopian ideologies in front of packed auditoriums. To put it plainly, Mark Zuckerberg is not checking to see if the thousands of photos being uploaded every second fit within the guidelines. In other words, having a measure of sympathy for the Facebook moderators who spend their days judging a mountain of (often disturbing) content is not the same thing as having any sympathy for Facebook (the company) or for its figureheads. Furthermore, Facebook has already automated a fair amount of the moderating process, and it is more than likely that Facebook would love to be able to ditch all of its human moderators in favor of an algorithm. Given the rate at which it expects them to work it seems that Facebook already thinks of its moderators as being little more than cogs in its vast apparatus.

    That last part helps point to one of the reasons why the Facebook Files are so interesting – because they provide a very revealing glimpse of the type of morality that a machine might be programmed to follow. The Facebook Files – indeed the very idea of Facebook moderators – is a massive hammer that smashes to bits the idea that technological systems are somehow neutral, for it puts into clear relief the ways in which people are involved in shaping the moral guidelines to which the technological system adheres. The case of what is and is not allowed on Facebook is a story playing out in real time of a company (staffed by real live humans) trying to structure the morality of a technological space. Even once all of this moderating is turned over to an algorithm, these Files will serve as a reminder that the system is acting in accordance with a set of values and views that were programmed into it by people. And this whole tale of Facebook’s attempts to moderate sensitive/disturbing content points to the fact that morality can often be quite tricky. And the truth of the matter, as many a trained ethicist will attest, is that moral matters are often rather complex – which is a challenge for Facebook as algorithms tend to do better with “yes” and “no” than they do with matters that devolve into a lot of complex philosophical argumentation.

    Thus, while a blanket “zero nudity” policy might be crude, prudish, and simplistic – it still represents a fairly easy way to separate allowed content from forbidden content. Similarly, a “zero violence” policy runs the risk of hiding the consequences of violence, masking the gruesome realities of war, and covering up a lot of important history – but it makes it easy to say “no videos of killings or self-harm are allowed at all.” Likewise, a strong “absolutely no threats of any sort” policy would make it so that “someone shoot [specific political figure]” and “let’s beat up people with fedoras” would both be banned. By trying to parse these things Facebook has placed its moderators in tricky territory – and the guidelines it provides them with are not necessarily the clearest. Had Facebook maintained a strict “black and white” version of what’s permitted and not permitted it could have avoided the swamp through which it is now trudging with mixed results. Again, it is fair to have some measure of sympathy for the moderators here – they did not set the rules, but they will certainly be blamed, shamed, and likely fired for any failures to adhere to the letter of Facebook’s confusing law.

    Part of the problem that Facebook has to contend with is clearly the matter of free speech. There are certainly some who will cry foul at any attempt by Facebook to moderate content – crying out that such things are censorship. Still others will scoff at the idea of free speech as applied to Facebook, seeing as it is a corporate platform and therefore all speech that takes place on the site already exists in a controlled space. A person may live in a country where they have a government protected right to free speech – but Facebook has no such obligation to its users. There is nothing preventing Facebook from radically changing its policies about what is permissible. If Facebook decided tomorrow that no content related to, for example, cookies was to be permitted, it could make and enforce that decision. And the company could make that decision regarding things much less absurd than cookies – if Facebook wanted to ban any content related to a given protest movement it would be within its rights to do so (which is not to say that would be good, but to say that it would be possible). In short, if you use Facebook you use it in accordance with its rules; the company does not particularly care what you think. And if you run afoul of one of its moderators you may well find your account suspended – you can cry “free speech” but Facebook will retort with “you agreed to our terms of use, Facebook is a private online space.” Here, though, a person may try to fire back at Facebook that in the 21st century, to a large extent, social media platforms like Facebook have become a sort of new public square.

    And, yet again, that is part of the reason why this is all so tricky.

    Facebook clearly wants to be the new “public square” – it wants to be the space where people debate politics, where candidates have forums, and where activists organize. Yet it wants all of these “public” affairs to take place within its own enclosed “private” space. There is no real democratic control of Facebook; the company may try to train its moderators to respect various local norms, but the people from those localities don’t get to have a voice in determining what is and isn’t acceptable. Facebook is trying desperately to have it all ways – it wants to be the key space of the public sphere while simultaneously pushing back against any attempts to regulate it or subject it to increased public oversight. As lackluster and problematic as the guidelines revealed by the Facebook Files are, they still demonstrate that Facebook is trying (with mixed results) to regulate itself so that it can avoid being subject to further regulation. Thus, free speech is both a sword and a shield for Facebook – it allows the company to hide from accusations that the site is filled with misogyny and xenophobia behind the shield of “free speech” even as the site can pull out its massive terms of service agreement (updated frequently) to slash users with the blade that on the social network there is no free speech, only Facebook speech. The speech that Facebook is most concerned with is its own, and it will say and do what it needs to say and do to protect itself from constraints.

    Yet, to bring it back to the points with which this piece began, many of the issues that the Facebook Files reveal have a lot to do with scale. Sorting out the nuance of an image or a video can take longer than the paltry few seconds most moderators are able to allot to each image/video. And it further seems that some of the judgments that Facebook is asking its moderators to make have less to do with morality or policies than they have to do with huge questions regarding how the moderator can possibly know if something is in accordance with the policies or not. How does a moderator not based in a community really know if something is up to a community’s standard? Facebook is hardly some niche site with a small user base and devoted cadre of moderators committed to keeping the peace – its moderators are overworked members of the cybertariat (a term borrowed from Ursula Huws), and the community they serve is Facebook, not the communities from which its users hail. Furthermore, some of the more permissive policies – such as allowing images of animal abuse – couched under the premise that they may help to alert the authorities seem more like an excuse than an admission of responsibility. Facebook has grown quite large, and it continues to grow. What it is experiencing is not so much a case of “growing pains” as it is a case of the pains that are inflicted on a society when something is allowed to grow out of control. Every week it seems that Facebook becomes more and more of a monopoly – but there seems to be little chance that it will be broken up (and it is unclear what that would mean or look like).

    Facebook is the science project of the researcher which is always about to get too big and slip out of control, and the Facebook Files reveal the company’s frantic attempt to keep the beast from throwing off its shackles altogether. And the danger there, from Facebook’s stance, is that – as in all works where something gets too big and gets out of control – the point when it loses control is the point where governments step in to try to restore order. What that would look like in this case is quite unclear, and while the point is not to romanticize regulation the Facebook Files help raise the question of who is currently doing the regulating and how are they doing it? That Facebook is having such a hard time moderating content on the site is actually a pretty convincing argument that when a site gets too big, the task of carefully moderating things becomes nearly impossible.

    To deny that Facebook has significant power and influence is to deny reality. While it’s true that Facebook can only set the policy for the fiefdoms it controls, it is worth recognizing that many people spend a heck of a lot of time ensconced within those fiefdoms. The Facebook Files are not exactly a shocking revelation showing a company that desperately needs some serious societal oversight – rather what is shocking about them is that they reveal that Facebook has been allowed to become so big and so powerful without any serious societal oversight. The Guardian’s article leading into the Facebook Files quotes Monika Bickert, Facebook’s head of global policy management, as saying that Facebook is:

    “not a traditional technology company. It’s not a traditional media company. We build technology, and we feel responsible for how it’s used.”

    But a question lingers as to whether or not these policies are really reflective of responsibility in any meaningful sense. Facebook may not be a “traditional” company in many respects, but one area in which it remains quite hitched to tradition is in holding to a value system where what matters most is the preservation of the corporate brand. To put it slightly differently, there are few things more “traditional” than the monopolistic vision of total technological control reified in Facebook’s every move. In his classic work on the politics of technology, The Whale and the Reactor, Langdon Winner emphasized the need to seriously consider the type of world that technological systems were helping to construct. As he put it:

    We should try to imagine and seek to build technical regimes compatible with freedom, social justice, and other key political ends…If it is clear that the social contract implicitly created by implementing a particular generic variety of technology is incompatible with the kind of society we deliberately choose—that is, if we are confronted with an inherently political technology of an unfriendly sort—then that kind of device or system ought to be excluded from society altogether. (Winner 1989, 55)

    The Facebook Files reveal the type of world that Facebook is working tirelessly to build. It is a world where Facebook is even larger and even more powerful – a world in which Facebook sets the rules and regulations. In which Facebook says “trust us” and people are expected to obediently go along.

    Yes, Facebook needs content moderators, but it also seems that it is long-past due for there to be people who moderate Facebook. And those people should not be cogs in the Facebook machine.

    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, an MA from the Media, Culture, and Communications department at NYU, and is currently working towards a PhD in the History and Sociology of Science department at the University of Pennsylvania. His research areas include media refusal and resistance to technology, ideologies that develop in response to technological change, and the ways in which technology factors into ethical philosophy – particularly with regard to the ways in which Jewish philosophers have written about ethics and technology. Using the moniker “The Luddbrarian,” Loeb writes at the blog Librarian Shipwreck, where an earlier version of this post first appeared, and is a frequent contributor to The b2 Review Digital Studies section.

    _____

    Works Cited

    • Fromm, Erich. 2001. The Fear of Freedom. London: Routledge Classics.
    • Winner, Langdon. 1989. The Whale and the Reactor. Chicago: The University of Chicago Press.

  • Audrey Watters – Public Education Is Not Responsible for Tech’s Diversity Problem

    By Audrey Watters

    ~

    On July 14, Facebook released its latest “diversity report,” claiming that it has “shown progress” in hiring a more diverse staff. Roughly 90% of its US employees are white or Asian; 83% of those in technical positions at the company are men. (That’s about a 1% improvement from last year’s stats.) Black people still make up just 2% of the workforce at Facebook, and 1% of the technical staff. Those are the same percentages as 2015, when Facebook boasted that it had hired 7 Black people. “Progress.”

    In this year’s report, Facebook blamed the public education system for its inability to hire more people of color. I mean, whose fault could it be?! Surely not Facebook’s! To address its diversity problems, Facebook said it would give $15 million to Code.org in order to expand CS education, news that was dutifully reported by the ed-tech press without any skepticism about Facebook’s claims about its hiring practices or about the availability of diverse tech talent.

    The “pipeline” problem, writes Dare Obasanjo, is a “big lie.” “The reality is that tech companies shape the ethnic make up of their employees based on what schools & cities they choose to hire from and where they locate engineering offices.” There is diverse technical talent, ready to be hired; the tech sector, blinded by white, male privilege, does not recognize it, does not see it. See the hashtag #FBNoExcuses which features more smart POC in tech than work at Facebook and Twitter combined, I bet.

    Facebook’s decision to “blame schools” is pretty familiar schtick by now, I suppose, but it’s still fairly noteworthy coming from a company whose founder and CEO is increasingly active in ed-tech investing. More broadly, Silicon Valley continues to try to shape the future of education – mostly by defining that future as an “engineering” or “platform” problem and then selling schools and parents and students a product in return. As the tech industry utterly fails to address diversity within its own ranks, what can we expect from its vision for ed-tech?!

    My fear: ed-tech will ignore inequalities. Ed-tech will expand inequalities. Ed-tech will, as Edsurge demonstrated this week, simply co-opt the words of people of color in order to continue to sell its products to schools. (José Vilson has more to say about this particular appropriation in this week’s #educolor newsletter.)

    And/or: ed-tech will, as I argued this week in the keynote I delivered at the Digital Pedagogy Institute in PEI, confuse consumption with “innovation.” “Gotta catch ’em all” may be the perfect slogan for consumer capitalism; but it’s hardly a mantra I’m comfortable chanting to push for education transformation. You cannot buy your way to progress.

    All of the “Pokémon GO will revolutionize education” claims have made me incredibly angry, even though it’s a claim that’s made about every single new product that ed-tech’s early adopters find exciting (and clickbait-worthy). I realize there are many folks who seem to find a great deal of enjoyment in the mobile game. Hoorah. But there are some significant issues with the game’s security, privacy, its Terms of Service, its business model, and its crowd-sourced data model – a data model that reflects the demographics of those who played an early version of the game and one that means that there are far fewer “pokestops” in Black neighborhoods. All this matters for Pokémon GO; all this matters for ed-tech.

    Pokémon GO is just the latest example of digital redlining, re-inscribing racist material policies and practices into new, digital spaces. So when ed-tech leaders suggest that we shouldn’t criticize Pokémon GO, I despair. I really do. Who is served by being silent!? Who is served by enforced enthusiasm? How does ed-tech, which has its own problems with diversity, serve to re-inscribe racist policies and practices because its loudest proponents have little interest in examining their own privileges, unless, as José points out, it gets them clicks?

    Sigh.
    _____

    Audrey Watters is a writer who focuses on education technology – the relationship between politics, pedagogy, business, culture, and ed-tech. She has worked in the education field for over 15 years: teaching, researching, organizing, and project-managing. Although she was two chapters into her dissertation (on a topic completely unrelated to ed-tech), she decided to abandon academia, and she now happily fulfills the one job recommended to her by a junior high aptitude test: freelance writer. Her stories have appeared on NPR/KQED’s education technology blog MindShift, in the data section of O’Reilly Radar, on Inside Higher Ed, in The School Library Journal, in The Atlantic, on ReadWriteWeb, and Edutopia. She is the author of the recent book The Monsters of Education Technology (Smashwords, 2014) and working on a book called Teaching Machines. She maintains the widely-read Hack Education blog, and writes frequently for The b2 Review Digital Studies magazine on digital technology and education.

    Back to the essay

  • The Man Who Loved His Laptop

    The Man Who Loved His Laptop

    a review of Spike Jonze (dir.), Her (2013)
    by Mike Bulajewski
    ~
    I’m told by my sister, who is married to a French man, that the French don’t say “I love you”—or at least they don’t say it often. Perhaps they think the words are superfluous and it’s the behavior of the person you are in a relationship with that tells you everything. Americans, on the other hand, say it to everyone—lovers, spouses, friends, parents, grandparents, children, pets—and as often as possible, as if quantity matters most. The declaration is also an event. For two people beginning a relationship, it marks a turning point and a new stage in the relationship.

    If you aren’t American, you may not have realized that relationships have stages. In America, they do. It’s complicated. First there are the three main thresholds of commitment: Dating, Exclusive Dating, then of course Marriage. There are three lesser pre-Dating stages: Just Talking, Hooking Up and Friends with Benefits; and one minor stage between Dating and Exclusive called Pretty Much Exclusive. Within Dating, there are several minor substages: number of dates (often counted up to the third date) and increments of physical intimacy denoted according to the well-known baseball metaphor of first, second, third and home base.

    There are also a number of rituals that indicate progress: updating of Facebook relationship statuses; leaving a toothbrush at each other’s houses; the aforementioned exchange of I-love-you’s; taking a vacation together; meeting the parents; exchange of house keys; and so on. When people, especially unmarried people, talk about relationships, often the first questions are about these stages and rituals. In France the system is apparently much less codified. One convention not present in the United States is that romantic interest is signaled when a man invites a woman to go for a walk with him.

    The point is two-fold: first, although Americans admire and often think of French culture as holding up a standard for what romance ought to be, Americans act nothing like the French in relationships and in fact know very little about how they work in France. Second and more importantly, in American culture love is widely understood as spontaneous and unpredictable, and yet there is also an opposite and often unacknowledged expectation that relationships follow well-defined rules and rituals.

    This contradiction might explain the great public clamor over romance apps like Romantimatic and BroApp that automatically send your significant other romantic messages, either predefined or your own creation, at regular intervals—what philosopher of technology Evan Selinger calls (and not without justification) apps that outsource our humanity.

    Reviewers of these apps were unanimous in their disapproval, disagreeing only on where to locate them on a spectrum between pretty bad and sociopathic. Among all the labor-saving apps and devices, why should this one in particular be singled out for opprobrium?

    Perhaps one reason for the outcry is that they expose an uncomfortable truth about how easily romance can be automated. Something we believe is so intimate is revealed as routine and predictable. What does it say about our relationship needs that the right time to send a loving message to your significant other can be reduced to an algorithm?

    The routinization of American relationships first struck me in the context of this little-known fact about how seldom French people say “I love you.” If you had to launch one of these romance apps in France, it wouldn’t be enough to just translate the prewritten phrases into French. You’d have to research French romantic relationships and discover what the most common phrases are—if there are any—and how frequently text messages are used for this purpose. It’s possible that French people are too unpredictable, or never use text messages for romantic purposes, so the app is just not feasible in France.

    Romance is culturally determined. That American romance can be so easily automated reveals how standardized and even scheduled relationships already are. Selinger’s argument that automated romance undermines our humanity has some merit, but why stop with apps? Why not address the problem at a more fundamental level and critique the standardized courtship system that regulates romance? Doesn’t this also outsource our humanity?

    The best-selling relationship advice book The 5 Love Languages claims that everyone understands one of five love “languages” and that the key to a happy relationship is for each partner to learn to express love in the correct language. Should we be surprised if the more technically minded among us conclude that the problem of love can be solved with technology? Why not try to determine the precise syntax and semantics of these love languages, and attempt to express them rigorously and unambiguously in the same way that computer languages and communications protocols are? Can love be reduced to grammar?

    Spike Jonze’s Her (2013) tells the story of Theodore Twombly, a soon-to-be divorced writer who falls in love with Samantha, an AI operating system who far exceeds the abilities of today’s natural language assistants like Apple’s Siri or Microsoft’s Cortana. Samantha is not only hyper-intelligent, she’s also capable of laughter, telling jokes, picking up on subtle unspoken interpersonal cues, feeling and communicating her own emotions, and so on. Theodore falls in love with her, but there is no sense that their relationship is deficient because she’s not human. She is as emotionally expressive as any human partner, at least on film.

    Theodore works for a company called BeautifulHandwrittenLetters.com as a professional Cyrano de Bergerac (or perhaps a human Romantimatic), ghostwriting heartfelt “handwritten” letters on behalf of his clients. It’s an ironic twist: Samantha is his simulated girlfriend, a role which he himself adopts at work by simulating the feelings of his clients. The film opens with Theodore at his desk at work, narrating a letter from a wife to her husband on the occasion of their 50th wedding anniversary. He is a master of the conventions of the love letter. Later in the film, his work is discovered by a literary agent, and he gets an offer to have a book of his best work published.


    But for all his (alleged) expertise as a romantic writer, Theodore is lonely, emotionally stunted, ambivalent towards the women in his life, and—at least before meeting Samantha—apparently incapable of maintaining relationships since he separated from his ex-wife Catherine. Highly sensitive, he is disturbed by encounters with women that go off the script: a phone sex encounter goes awry when the woman demands that he enact her bizarre fantasy of being choked with a dead cat; and on a date with a woman one night, she exposes a little too much vulnerability and drunkenly expresses her fear that he won’t call her. He abruptly and awkwardly ends the date.

    Theodore wanders aimlessly through the high tech city as if it is empty. With headphones always on, he’s withdrawn, cocooned in a private sonic bubble. He interacts with his device through voice, asking it to play melancholy songs and skipping angry messages from his attorney demanding that he sign the divorce papers already. At times, he daydreams of happier times when he and his ex-wife were together and tells Samantha how much he liked being married. At first it seems that Catherine left him. We wonder if he withdrew from the pain of his heartbreak. But soon a different picture emerges. When they finally meet to sign the divorce papers over lunch, Catherine accuses him of not being able to handle her emotions and reveals that he tried to get her on Prozac. She says to him “I always felt like you wished I could just be a happy, light, everything’s great, bouncy L.A. wife. But that’s not me.”

    So Theodore’s avoidance of real challenges and emotions in relationships turns out to be an ongoing problem—the cause, not the consequence, of his divorce. Starting a relationship with his operating system Samantha is his latest retreat from reality—not from physical reality, but from the virtual reality of authentic intersubjective contact.

    Unlike his other relationships, Samantha is perfectly customized to his needs. She speaks his “love language.” Today we personalize our operating systems and fill out online dating profiles specifying exactly what kind of person we’re looking for. When Theodore installs Samantha on his computer for the first time, the two operations are combined with a single question. The system asks him how he would describe his relationship with his mother. He begins to reply with psychological banalities about how she is insufficiently attuned to his needs, and it quickly stops him, already knowing what he’s about. And so do we.

    That Theodore is selfish doesn’t mean that he is unfeeling, unkind, insensitive, conceited or uninterested in his new partner’s thoughts, feelings and goals. His selfishness is the kind that’s approved and even encouraged today, the ethically consistent selfishness that respects the right of others to be equally selfish. What he wants most of all is to be comfortable, to feel good, and that requires a partner who speaks his love language and nothing else, someone who says nothing that would veer off-script and reveal too many disturbing details. More precisely, Theodore wants someone who speaks what Lacan called empty speech: speech that obstructs the revelation of the subject’s traumatic desire.

    Objectification is a traditional problem between men and women. Men reduce women to mere bodies or body parts that exist only for sexual gratification, treating them as sex objects rather than people. The dichotomy is between the physical as the domain of materiality, animality and sex on one hand, and the spiritual realm of subjectivity, personality, agency and the soul on the other. If objectification eliminates the soul, then Theodore engages in something like the opposite, a subjectification which eradicates the body. Samantha is just a personality.

    Technology writer Nicholas Carr’s new book The Glass Cage: Automation and Us (Norton, 2014) investigates the ways that automation and artificial intelligence dull our cognitive capacities. Her can be read as a speculative treatment of the same idea as it relates to emotion. What if the difficulty of relationships could be automated away? The film’s brilliant provocation is that it shows us a lonely, hollow world mediated through technology but nonetheless awash in sentimentality. It thwarts our expectations that algorithmically-generated emotion would be as stilted and artificial as today’s speech synthesizers. Samantha’s voice is warm, soulful, relatable and expressive. She’s real, and the feelings she triggers in Theodore are real.

    But real feelings with real sensations can also be shallow. As Maria Bustillos notes, Theodore is an awful writer, at least by today’s standards. Here’s the kind of prose that wins him accolades from everyone around him:

    I remember when I first started to fall in love with you like it was last night. Lying naked beside you in that tiny apartment, it suddenly hit me that I was part of this whole larger thing, just like our parents, and our parents’ parents. Before that I was just living my life like I knew everything, and suddenly this bright light hit me and woke me up. That light was you.

    In spite of this, we’re led to believe that Theodore is some kind of literary genius. Various people in his life compliment him on his skill and the editor of the publishing company who wants to publish his work emails to tell him how moved he and his wife were when they read them. What kind of society would treat such pedestrian writing as unusual, profound or impressive? And what is the average person’s writing like if Theodore’s services are worth paying for?

    Recall the cult favorite Idiocracy (2006) directed by Mike Judge, a science fiction satire set in a futuristic dystopia where anti-intellectualism is rampant and society has descended into stupidity. We can’t help but conclude that Her offers a glimpse into a society that has undergone a similar devolution into both emotional and literary idiocy.

    _____

    Mike Bulajewski (@mrteacup) is a user experience designer with a Master’s degree from University of Washington’s Human Centered Design and Engineering program. He writes about technology, psychoanalysis, philosophy, design, ideology & Slavoj Žižek at MrTeacup.org, where an earlier version of this review first appeared.

    Back to the essay

  • All Hitherto Existing Social Media

    All Hitherto Existing Social Media

    a review of Christian Fuchs, Social Media: A Critical Introduction (Sage, 2013)
    by Zachary Loeb
    ~
    Legion are the books and articles describing the social media that has come before. Yet the tracts focusing on Friendster, LiveJournal, or MySpace now appear as throwbacks, nostalgically immortalizing the internet that was and is now gone. On the cusp of the next great amoeba-like expansion of the internet (wearable technology and the “internet of things”) it is a challenging task to analyze social media as a concept while recognizing that the platforms being focused upon—regardless of how permanent they seem—may go the way of Friendster by the end of the month. Granted, social media (and the companies whose monikers act as convenient shorthand for it) is an important topic today. Those living in highly digitized societies can hardly avoid the tendrils of social media (even if a person does not use a particular platform it may still be tracking them), but this does not mean that any of us fully understand these platforms, let alone have a critical conception of them. It is into this confused and confusing territory that Christian Fuchs steps with his Social Media: A Critical Introduction.

    It is a book ostensibly targeted at students. Though when it comes to social media—as Fuchs makes clear—everybody has quite a bit to learn.

    By deploying an analysis couched in Marxist and Critical Theory, Fuchs aims not simply to describe social media as it appears today, but to consider its hidden functions and biases, and along the way to describe what social media could become. The goal of Fuchs’s book is to provide readers—the target audience is students, after all—with the critical tools and proper questions with which to approach social media. While Fuchs devotes much of the book to discussing specific platforms (Google, Facebook, Twitter, WikiLeaks, Wikipedia), these case studies are used to establish a larger theoretical framework which can be applied to social media beyond these examples. Affirming the continued usefulness of Marxist and Frankfurt School critiques, Fuchs defines the aim of his text as being “to engage with the different forms of sociality on the internet in the context of society” (6) and emphasizes that the “critical” questions to be asked are those that “are concerned with questions of power” (7).

    Thus a critical analysis of social media demands a careful accounting of the power structures involved not just in specific platforms, but in the larger society as a whole. So though Fuchs regularly returns to the examples of the Arab Spring and the Occupy Movement, he emphasizes that the narratives that dub these “Twitter revolutions” often come from a rather non-critical and generally pro-capitalist perspective that fails to adequately embed uses of digital technology in their larger contexts.

    Social media is portrayed as an example, like other media, of “techno-social systems” (37) wherein the online platforms may receive the most attention but where the, oft-ignored, layer of material technologies is equally important. Social media, in Fuchs’s estimation, developed and expanded with the growth of “Web 2.0” and functions as part of the rebranding effort that revitalized (made safe for investments) the internet after the initial dot.com bubble. As Fuchs puts it, “the talk about novelty was aimed at attracting novel capital investments” (33). What makes social media a topic of such interest—and invested with so much hope and dread—is the degree to which social media users are considered as active creators instead of simply consumers of this content (Fuchs follows much recent scholarship and industry marketing in using the term “prosumers” to describe this phenomenon; the term originates from the 1970s business-friendly futurology of Alvin Toffler’s The Third Wave). Social media, in Fuchs’s description, represents a shift in the way that value is generated through labor, and as a result an alteration in the way that large capitalist firms appropriate surplus value from workers. The social media user is not laboring in a factory, but with every tap of the button they are performing work from which value (and profit) is skimmed.

    Without disavowing the hope that social media (and by extension the internet) has liberating potential, Fuchs emphasizes that such hopes often function as a way of hiding profit motives and capitalist ideologies. It is not that social media cannot potentially lead to “participatory democracy” but that “participatory culture” does not necessarily have much to do with democracy. Indeed, as Fuchs humorously notes: “participatory culture is a rather harmless concept mainly created by white boys with toys who love their toys” (58). This “love their toys” sentiment is part of the ideology that undergirds much of the optimism around social media—which allows for complex political occurrences (such as the Arab Spring) to be reduced to events that can be credited to software platforms.

    What Fuchs demonstrates at multiple junctures is the importance of recognizing that the usage of a given communication tool by a social movement does not mean that this tool brought about the movement: intersecting social, political and economic factors are the causes of social movements. In seeking to provide a “critical introduction” to social media, Fuchs rejects arguments that he sees as not suitably critical (including those of Henry Jenkins and Manuel Castells), arguments that at best have been insufficient and at worst have been advertisements masquerading as scholarship.

    Though the time people spend on social media is often portrayed as “fun” or “creative,” Fuchs recasts these tasks as work in order to demonstrate how that time is exploited by the owners of social media platforms. By clicking on links, writing comments, performing web searches, sending tweets, uploading videos, and posting on Facebook, social media users are performing unpaid labor that generates a product (in the form of information about users) that can then be sold to advertisers and data aggregators; this sale generates profits for the platform owner which do not accrue back to the original user. Though social media users are granted “free” access to a service, it is their labor on that platform that makes the platform have any value—Facebook and Twitter would not have a commodity to sell to advertisers if they did not have millions of users working for them for free. As Fuchs describes it, “the outsourcing of work to consumers is a general tendency of contemporary capitalism” (111).

    screen shot of a Karl Marx Community Page on Facebook

    While miners of raw materials and workers in assembly plants are still brutally exploited—and this unseen exploitation forms a critical part of the economic base of computer technology—the exploitation of social media users is given a gloss of “fun” and “creativity.” Fuchs does not suggest that social media use is fully akin to working in a factory, but that users carry the factory with them at all times (a smart phone, for example) and are creating surplus value as long as they are interacting with social media. Instead of being a post-work utopia, Fuchs emphasizes that “the existence of the internet in its current dominant capitalist form is based on various forms of labour” (121) and the enrichment of internet firms is reliant upon the exploitation of those various forms of labor—central amongst these being the social media user.

    Fuchs considers five specific platforms in detail so as to illustrate not simply the current state of affairs but also to point towards possible alternatives. Fuchs analyzes Google, Facebook, Twitter, WikiLeaks and Wikipedia as case studies of trends to encourage and trends of which to take wary notice. In his analysis of the three corporate platforms (Google, Facebook and Twitter) Fuchs emphasizes the ways in which these social media companies (and the moguls who run them) have become wealthy and powerful by extracting value from the labor of users and by subjecting users to constant surveillance. The corporate platforms give Fuchs the opportunity to consider various social media issues in sharper relief: labor and monopolization in terms of Google, surveillance and privacy issues with Facebook, and the potential for an online public sphere with Twitter. Despite his criticisms, Fuchs does not dismiss the value and utility of what these platforms offer, as is captured in his claim that “Google is at the same time the best and the worst thing that has ever happened on the internet” (147). The corporate platforms’ successes are owed at least partly to their delivering desirable functions to users. The corrective for which Fuchs argues is increased democratic control of these platforms—for the labor to be compensated and for privacy to pertain to individual humans instead of to businesses’ proprietary methods of control. Indeed, one cannot get far with a “participatory culture” unless there is a similarly robust “participatory democracy,” and part of Fuchs’s goal is to show that these are not at all the same.

    WikiLeaks and Wikipedia both serve as real examples that demonstrate the potential of an “alternative” internet for Fuchs. Though these Wiki platforms are not ideal, they contain within themselves the seeds for their own adaptive development (“WikiLeaks is its own alternative”—232), and serve for Fuchs as proof that the internet can move in a direction akin to a “commons.” As Fuchs puts it, “the primary political task for concerned citizens should therefore be to resist the commodification of everything and to strive for democratizing the economy and the internet” (248), a goal he sees as at least partly realized in Wikipedia.

    While the outlines of the internet’s future may seem to have been written already, Fuchs’s book is an argument in favor of the view that the code can still be altered. A different future relies upon confronting the reality of the online world as it currently is and recognizing that the battles waged for control of the internet are proxy battles in the conflict between capitalism and an alternative approach. In the conclusion of the book Fuchs eloquently condenses his view and the argument that follows from it in two simple sentences: “A just society is a classless society. A just internet is a classless internet” (257). It is a sentiment likely to spark an invigorating discussion, be it in a classroom, at a kitchen table, or in a café.

    * * *

    While Social Media: A Critical Introduction is clearly intended as a textbook (each chapter ends with a “recommended readings and exercises” section), it is written in an impassioned and engaging style that will appeal to anyone who would like to see a critical gaze turned towards social media. Fuchs structures his book so that his arguments will remain relevant even if some of the platforms about which he writes vanish. Even the chapters in which Fuchs focuses on a specific platform are filled with larger arguments that transcend that platform. Indeed one of the primary strengths of Social Media is that Fuchs skillfully uses the familiar examples of social media platforms as a way of introducing the reader to complex theories and thinkers (from Marx to Habermas).

    Whereas Fuchs accuses some other scholars of subtly hiding their ideological agendas, no such argument can be made regarding Fuchs himself. Social Media is a Marxist critique of the major online platforms—not simply because Fuchs deploys Marx (and other Marxist theorists) to construct his arguments, but because of his assumption that the desirable alternative for the internet is part and parcel of a desirable alternative to capitalism. Such a sentiment can be found at several points throughout the book, but is made particularly evident by lines such as these from the book’s conclusion: “There seem to be only two options today: (a) continuance and intensification of the 200-year-old barbarity of capitalism or (b) socialism” (259)—it is a rather stark choice. It is precisely due to Fuchs’s willingness to stake out, and stick to, such political positions that this text is so effective.

    And yet, it is the very allegiance to such positions that also presents something of a problem. While much has been written of late—in the popular press in addition to by scholars—regarding issues of privacy and surveillance, Fuchs’s arguments about the need to consider users as exploited workers will likely strike many readers as new, and thus worthwhile in their novelty if nothing else. Granted, to fully go along with Fuchs’s critique requires readers to already be in agreement or at least relatively sympathetic with Fuchs’s political and ethical positions. This is particularly true as Fuchs excels at making an argument about media and technology, but devotes significantly fewer pages to ethical argumentation.

    The lines (quoted earlier) “A just society is a classless society. A just internet is a classless internet” (257) serve as much as a provocation as a conclusion. For those who subscribe to a similar notion of “a just society,” Fuchs’s book will likely function as an important guide to thinking about the internet; however, to those whose vision of “a just society” is fundamentally different from his, Fuchs’s book may be less than convincing. Social Media does not present a complete argument about how one defines a “just society.” Indeed, the danger may be that Fuchs’s statements in praise of a “classless society” may lead to some dismissing his arguments regarding the way in which the internet has replicated a “class society.” Likewise, it is easy to imagine a retort being offered that the new platforms of “the sharing economy” represent the birth of this “classless society” (though it is easy to imagine Fuchs pointing out, as have other critics from the left, that the “sharing economy” is simply more advertising lingo being used to hide the same old capitalist relations). This represents something of a peculiar challenge when it comes to Social Media, as the political commitment of the book is simultaneously what makes it so effective and that which threatens the book’s potential political efficacy.

    Thus Social Media presents something of a conundrum: how effective is a critical introduction if its conclusion offers a heads-and-tails choice between “barbarity of capitalism or…socialism”? Such a choice feels slightly as though Fuchs is begging the question. While it is curious that Fuchs does not draw upon critical theorists’ writings about the culture industry, the main issues with Social Media seem to be reflections of this black-and-white choice. Thus it is something of a missed chance that Fuchs does not draw upon some of the more serious critics of technology (such as Ellul or Mumford)—whose hard-edged skepticism would nevertheless likely not accept Fuchs’s Marxist orientation. Such thinkers might provide a very different perspective on the choice between “capitalism” and “socialism”—arguing that “technique” or “the megamachine” can function quite effectively in either. Though Fuchs draws heavily upon thinkers in the Marxist tradition, it may be that another set of insights and critiques might have been gained by bringing in other critics of technology (Hans Jonas, Peter Kropotkin, Albert Borgmann)—especially as some of these thinkers had warned that Marxism may overvalue the technological as much as capitalism does. This is not to argue in favor of any of these particular theorists, but to suggest that Fuchs’s claims would have been strengthened by devoting more time to considering the views of those who were critical of technology, capitalism and of Marxism. Social Media does an excellent job of confronting the ideological forces on its right flank; it could have benefited from at least acknowledging the critics to its left.

    Two other areas remain somewhat troubling: Fuchs’s treatment of Wiki platforms and of the materiality of technology. The optimism with which Fuchs approaches WikiLeaks and Wikipedia is understandable given the dourness with which he approaches the corporate platforms, and yet his hopes for them seem somewhat exaggerated. Fuchs claims that “Wikipedians are prototypical contemporary communists” (243), partly to suggest that many people are already engaged in commons-based online activities, yet it is an argument that he simultaneously undermines by admitting (importantly) that Wikipedia’s editor base is hardly representative of all of the platform’s users (it’s back to the “white boys with toys who love their toys”); moreover, some have alleged that putatively structureless models of organization like Wikipedia’s actually encourage oligarchical forms of order. This is to say nothing of the role that editing “bots” play on the platform, or of the degree to which Wikipedia is reliant upon corporate platforms (like Google) for promotion. Similarly, without ignoring its value, the example of WikiLeaks seems odd at a moment when the organization seems primarily engaged in rearguard self-defense, while the leaks that have generated the most interest of late have been made to journalists at traditional news sources (Edward Snowden’s leaks to Glenn Greenwald, who was writing for The Guardian when the leaks began).

    The further challenge—and this is one that Fuchs is not alone in contending with—is the trouble posed by the materiality of technology. An important aspect of Social Media is that Fuchs considers the often-unseen exploitation and repression upon which the internet relies: miners, laborers who build devices, those who recycle or live among toxic e-waste. Yet these workers seem to disappear from the arguments in the later part of the book, which in turn raises the following question: even if every social media platform were to be transformed into a non-profit commons-based platform that resists surveillance, manipulation, and the exploitation of its users, is such a platform genuinely just if, to use it, one must rely on devices whose minerals were mined in warzones, which were assembled in sweatshops, and which will eventually go to an early grave in a toxic dump? What good is a “classless (digital) society” without a “classless world”? Perhaps the question of a “capitalist internet” is itself a distraction from the fact that the “capitalist internet” is what one gets from capitalist technology. Granted, given Fuchs’s larger argument it may be fair to infer that he would portray “capitalist technology” as part of the problem. Yet, if the statement “a just society is a classless society” is to be genuinely meaningful, then it must extend not just to those who use a social media platform but to all of those involved, from the miner to the manufacturer to the programmer to the user to the recycler. To pose the matter as a question: can there be participatory (digital) democracy that relies on serious exploitation of labor and resources?

    Social Media: A Critical Introduction provides exactly what its title promises—a critical introduction. Fuchs has constructed an engaging and interesting text that shows the continuing validity of older theories and skillfully demonstrates the way in which the seeming newness of the internet is simply a new face on an old system. While Fuchs has constructed an argument that resolutely holds its position, it does so from a stance that one does not encounter often enough in debates around social media, and it will provide readers with a range of new questions with which to wrestle.

    It remains unclear in what ways social media will develop in the future, but Christian Fuchs’s book will be an important tool for interpreting these changes—even if what is in store is more “barbarity.”
    _____

    Zachary Loeb is a writer, activist, librarian, and terrible accordion player. He earned his MSIS from the University of Texas at Austin, and is currently working towards an MA in the Media, Culture, and Communications department at NYU. His research areas include media refusal and resistance to technology, ethical implications of technology, alternative forms of technology, and libraries as models of resistance. Using the moniker “The Luddbrarian” Loeb writes at the blog librarianshipwreck. He previously reviewed The People’s Platform by Astra Taylor for boundary2.org.