This text is published as part of a special b2o issue titled “Critique as Care”, edited by Norberto Gomez and Jonathan Nichols, and published in honor of our b2o and b2 colleague and friend, the late David Golumbia.

Toll Roads and Gated Communities:
How Private Commerce Took Over the Public Internet

Adam Dean

“The kind of environment that we developed Google in, the reason that we were able to develop a search engine, is the web was so open. Once you get too many rules, that will stifle innovation” (Katz 2012).

When the Internet went private with the High Performance Computing Act of 1991, it was the metaphorical Wild West—the land-grab decade, when unknown companies popped up to claim untapped real estate in the form of domain names, and prospecting users invented pathways to share unlimited, unlegislated copyrighted material. It was in this period that long-established telecommunications companies grew their existing infrastructure to deliver faster Internet to our homes, and a new generation of hosting companies was born. Those companies that got in early parceled out the Internet into what it is today—high-speed toll roads leading to gated communities. Decades later, when the problems with this model became noticeable to the everyday user, the FCC began to assert itself as regulator by establishing Net Neutrality, which limited an Internet Service Provider’s (ISP’s) legal right to control the direction and speed of Internet traffic for the everyday user. President Barack Obama called it “a victory for the millions of Americans who made their voices heard in support of a free and fair Internet” (Obama 2016).

That victory was short-lived. When those regulations were rolled back by the FCC in 2017, it was done under the banner of freedom as well. The title of the press release that introduced the rollbacks read: “CHAIRMAN PAI CIRCULATES DRAFT ORDER TO RESTORE INTERNET FREEDOM AND ELIMINATE HEAVY-HANDED INTERNET REGULATIONS” (Pelkey 2017). Then, in 2022, the FCC announced plans to support legislation to make Net Neutrality regulations law. Chairwoman Jessica Rosenworcel said, “…everyone should be able to go where they want and do what they want online without their broadband provider making choices for them. I support Net Neutrality because it fosters this openness and accountability” (Perez 2022). That legislation, introduced that year in the House (H.R. 8573) and Senate (S. 4676), which “expressly classifies broadband internet as a telecommunications service rather than an information service for purposes of regulation by the Federal Communications Commission,” never advanced in either chamber (Markey 2022; Matsui 2022). As a result of this stall, the FCC under Rosenworcel voted to regulate the ISPs in three specific ways: 1) by prohibiting ISPs from blocking, throttling, or engaging in paid prioritization of lawful content; 2) by empowering the FCC to revoke the authorizations of foreign-owned broadband operators; and 3) by empowering the FCC to monitor and intervene in service outages (FCC 2024). Of course, the battle for a free Internet didn’t end there. In January 2025, the FCC’s 2024 order was struck down in federal court (Bowman 2025).

Through all of this, Net Neutrality has been defended and opposed under the banner of freedom—on one side, the freedom banner protects the everyday consumer/end user, who should be able to visit any law-abiding website without restrictions or throttling; on the other, it protects the ISP’s interest in competing in an open marketplace, where, as traffic controller, the ISP can direct end users and restrict or promote sites and content in its business interest. But the Net Neutrality battle isn’t actually limited to these two sides—the FCC v. the ISPs—and at the risk of seeming to defend the telecommunications companies that deliver the Internet to our homes, this simplified two-sided battle distracts from another group of traffic controllers who have been given a pass. In caring for our public Internet, this essay directs its deeper critique at the true winners of the Internet land grab. Content curators like Google (Alphabet) and Facebook (Meta) are not, by the strict definition, ISPs, nor are they telecommunications companies, and so they would not have answered to the FCC even if the 2024 order had been upheld in federal court. Instead, these commerce companies abide by the regulations set by the FTC, despite their role as the replacement for traditional radio and television broadcast stations, charged with curating the daily news and entertainment options for 5.5 billion users each day while restricting access for those who do not pay with their personal data (Statista 2022).

In caring for the public Internet, this essay critiques precisely how the Internet was built with tax dollars and then given away to private content curators from whom the public now must rent. References to the Internet throughout this essay denote the network itself rather than the content published on it, but the two are coupled: freedom of access to the Internet and freedom to access the information therein go together. While it is intuitive to assume access applies equally to both, different public and private partnerships hold stakes in one or the other, and sometimes both, so their access challenges often differ. For example, Google does not have the ability to restrict access to a URL on the World Wide Web. A user can navigate directly to a website; however, most users search, even using the address bar to do so, and are subject to omissions made by the search engine in the results. On the other hand, the Internet Corporation for Assigned Names and Numbers (ICANN) is the international governing body for domain names and has the technical power to revoke a website name from public access through its Uniform Domain-Name Dispute-Resolution Policy. However, its regulatory power is restricted to enforcing the basic rules of the registry, such as those against cybersquatting and trademark violations (ICANN 2016). As an international governing body, ICANN is one example of an existing structure already in place to enforce Internet regulation, but apart from the FCC’s push for Net Neutrality, there is no U.S. regulation for how companies such as Alphabet or the ISPs direct, divert, dissuade and restrict users as they navigate.

The Public’s Internet

“The ARPANET was not started to create a Command and Control System that would survive a nuclear attack, as many now claim. … Rather, the ARPANET came out of our frustration that there were only a limited number of large, powerful research computers in the country, and that many research investigators, who should have access to them, were geographically separated from them” (Hertzfield 2019).

Much has been published on the Internet’s roots, from amusing intra-government memos on over-the-network etiquette to its truncated first transmission message “LO”[i] and its ties to S.A.G.E., the U.S. military’s first computerized air defense system. Despite its establishment by the U.S. Department of Defense, the original Internet, known as ARPANET (Advanced Research Projects Agency Network), was a means to tie together the nation’s most powerful computers at various research institutions. In short, in its origin, the Internet had no commercial appeal:

It is considered illegal to use the ARPANet for anything which is not in direct support of Government business … Sending electronic mail over the ARPANet for commercial profit or political purposes is both anti-social and illegal. By sending such messages, you can offend many people, and it is possible to get MIT in serious trouble with the Government agencies which manage the ARPANet (Stacy 1982).

While ARPA held oversight over its own network, that oversight did not deter private companies from copying the technology, and copy they did:

A number of U.S. companies have also procured or are procuring private corporate networks utilizing many of the techniques developed for ARPANET. For instance, it was recently announced that Citibank of New York City has constructed (by contract to BBN) a private network very similar to the ARPANET. …A number of companies have taken advantage of the fact that the ARPANET technology is in the public domain to obtain the listings of the ARPANET software. (Bolt, Beranek and Newman Inc. 1981).

By 1980, taxpayers had invested billions of dollars in the Internet’s infrastructure through research grants to public universities and the RAND Corporation from ARPA, the National Science Foundation and other government entities, but the handoff to private corporations was not formalized for another decade. In 1991 the High Performance Computing Act appropriated more than $1.5 billion to the National Science Foundation, which was to “serve as the primary source of information on access to and use of the Network” (Commerce, Science, and Transportation, and Gore 1991). With the research directive still prevalent, computer science programs at UCLA, MIT, Stanford, Wisconsin and others received substantial funding toward the collective goal of providing high-speed Internet to the public. ISPs emerged and remained attached to the regulatory structure that came with the funding. But there were pockets of research that fell beyond the scope of ARPA—namely, the organization and curation of the content published on the Internet, and the still-untapped profits in collecting and selling user data. While private ISPs worked closely with government partners to make the Internet accessible to the everyday user, another group developed websites to host the traffic that was coming.

Google: Popularity Is Not Accuracy

Google began its rise more than two decades ago with a now infamous public promise to itself: “Don’t be Evil”—a mantra that helped the company become the most trusted search engine in the world. But what on earth did it mean? The phrase floated around Google in its early days, when buzzwords like accuracy, transparency and democracy were thrown around in every meeting. It gained enough traction to be included in the young company’s code of conduct, where it remained through the 2015 restructuring under Alphabet, Inc. Eric Schmidt attributes the phrase to “Larry and Sergey” and talks about it often, including in his book How Google Works, co-authored with Jonathan Rosenberg. Schmidt makes a case that it was a legitimate foundation for a code of conduct still in place at Google, or maybe not. As a guest on NPR’s quiz show “Wait Wait Don’t Tell Me!” in 2013, Schmidt recalled a conversation with an engineer as an example of this sincerity:

Well, it was invented by Larry and Sergey. And the idea was that we don’t quite know what evil is, but if we have a rule that says don’t be evil, then employees can say, I think that’s evil. Now, when I showed up, I thought this was the stupidest rule ever, because there’s no book about evil except maybe, you know, the Bible or something. So what happens is, I’m sitting in this meeting, and we’re having this debate about an advertising product. And one of the engineers pounds his fists on the table and says, that’s evil. And then the whole conversation stops, everyone goes into conniptions, and eventually we stopped the project. So it did work (NPR.org 2013).

So it did work, says Schmidt. And perhaps it did in some way create a subtle check at the brainstorming sessions, or perhaps it could have even been internalized by programmers and designers, who may have resisted subtle changes pressed by their sales wing. Perhaps. We can only know anecdotally what Google chose not to do, yet we can take a careful look at what it has done. In an interview for Logic magazine, Fred Turner, a prolific critic of cyberlibertarianism and tech utopianism, said:

About ten years back, I spent a lot of time inside Google. What I saw there was an interesting loop. It started with, “Don’t be evil.” So then the question became, “Okay, what’s good?” Well, information is good. Information empowers people. So providing information is good. Okay, great. Who provides information? Oh, right: Google provides information. So you end up in this loop where what’s good for people is what’s good for Google, and vice versa (Turner and Weigel 2017).

At the heart of the mantra is not whether Google is good or evil in the abstract, but the fact that Google curates what is good and evil for its users. Looking back at the company’s foundation, Brin and Page wrote in their famous paper, “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” that PageRank would improve search quality, which they describe, really, as keyword accuracy:

“Junk results” often wash out any results that a user is interested in. In fact, as of November 1997, only one of the top four commercial search engines finds itself (returns its own search page in response to its name in the top ten results). … Indeed, we want our notion of “relevant” to only include the very best documents since there may be tens of thousands of slightly relevant documents. This very high precision is important even at the expense of recall (the total number of relevant documents the system is able to return) (Brin and Page 2012).

The notion of “quality” is the first hint in the original writings that PageRank could quickly get caught between two competing goals: most accurate and most popular. The word “accurate” appears only twice in the document (and “accuracy” does not appear at all): first in reference to anchor text as providing more accurate descriptions than the pages themselves, and second to criticize a web user’s lack of specificity in keyword searches (“some argue that on the web, users should specify more accurately what they want and add more words to their query”) (Brin and Page 2012). Neither of these uses addresses the question: is quality accuracy, and is accuracy quality? (The paper’s own precision-over-recall example shows the tension: a query with tens of thousands of relevant documents answered by ten results, all of them relevant, has perfect precision but vanishingly small recall.) But Brin and Page did not seek to answer this question in the original document, perhaps relying on a public trust—whatever their answer, it won’t be evil.

At the heart of Brin and Page’s famous paper is the argument that PageRank will bring order to the Web, and certainly it has done that, if public consensus is the indicator. Google’s search engine is the most popular in the world, with 80% of the desktop market share (Statista Research Department 2022). Credit is due to Internet pioneers like Tim Berners-Lee and Jon Postel, who built the underlying system upon which Brin and Page could impose order, and credit is due to Brin and Page for discovering that hyperlinks are the lexicon of the web and can be used not just as a map of the entire globe but to create a hierarchy for all pages. What PageRank did that had not been accomplished previously was to determine the value of pages on the web by how they relate to one another. In essence, this is the voting system that determines which webpages are “the best.” As Brin and Page explain it in their paper,

These maps allow rapid calculation of a web page’s “PageRank”, an objective measure of its citation importance that corresponds well with people’s subjective idea of importance. Because of this correspondence, PageRank is an excellent way to prioritize the results of web keyword searches. For most popular subjects, a simple text matching search that is restricted to web page titles performs admirably when PageRank prioritizes the results (demo available at google.stanford.edu). For the type of full text searches in the main Google system, PageRank also helps a great deal (Brin and Page 2012).

They go on to explain the algorithm, which weighs and thus ranks pages according not only to the number of links to a page but, again, to their quality. This is the root of the democracy, as each page is a voter and is also in the pool to be voted for, and this is objective, somehow. An earlier paper Brin and Page wrote with Rajeev Motwani and their Stanford advisor Terry Winograd states this idea of PageRank’s inherent objectivity in its abstract, placing all subjective interpretation on the reader alone. It reads:

The importance of a Web page is an inherently subjective matter, which depends on the reader’s interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them (Brin et al. 1998).
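
The “Anatomy” paper gives the calculation itself in a single line. Assuming page A has pages T1 through Tn linking to it, that C(T) counts a page’s outbound links, and that d is a damping factor the authors suggest setting to 0.85, the score is

$$PR(A) = (1-d) + d\left(\frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)}\right)$$

Each page’s single “vote” is split evenly among every page it links to, and because the definition is recursive, votes arriving from already-important pages weigh more. The entire hierarchy is in that one line.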

Their very idea of a popularity ranking metric, measured in a way that “corresponds well with people’s subjective idea of importance,” really means that each page has been cited by another page in the form of a hyperlink. And if we take a step back and consider that a person created the hyperlink, we are now in the loop that Fred Turner described. Who made the link? Someone who knows how to make links. Who decided how important that link was? Google. But the question this essay seeks to answer is not whether Google’s own judgment of evil is a proper measure, or even whether there is a notion of good and bad on which we’d settle. Making an argument for Google as good or evil would have to include its lesser-known undertakings, like DeepMind, or the mobile Wi-Fi surveillance probe built into the Google Maps car, or the lawsuits that charge the company with giving “unfair preference” to its own services and subsidiaries over those of its rivals. And we would have to talk about censorship and surveillance, both at the company’s own discretion and in cooperation with its regulators and partners. All that, including the ethics of simultaneously controlling the information hierarchy and the ad revenue—AdWords and AdSense, which work hand-in-hand with PageRank—would be a long discussion of what is good, and what is evil, indeed. There is already a great deal of opinion on “Don’t be Evil” in popular media as well, so it’s safe to say that testing the morality of the mantra has been covered. What has hardly been touched, though, is the public trust in which Google is deeply embedded. Despite the widespread exposure of “Don’t be Evil,” there is some agreement, as indicated by its number of users, that Google is trustworthy. It may be trust in accuracy, speed, convenience, or something else, but trust is the right word. The democracy it is founded on, according to the original workings of PageRank, might even be a symbol of U.S. citizens’ trust in democracy itself. Many users may not know or consider what PageRank’s rankings depend on, but there is significant implicit trust in its democracy, if market share is the indicator.

But it’s important to note here that, at best, it’s a misunderstanding that Google is democratic, and it is not entirely clear that this was ever its purpose. The voting metaphor was certainly key to the success of PageRank, but it was only the foundation; targeted search results now weigh heavily on top of PageRank, yet the myth of the “objective” voting system has been trusted for two decades. Only in the wake of the 2016 election did the public really begin to take seriously the question: is the democracy rigged? Carole Cadwalladr reports on an ad hoc test in The Guardian, allowing Google’s autocomplete function to guide her toward the most popular/accurate/quality results. She started with a simple keyword and allowed autocomplete to choose for her, and the results were shocking. She writes:

Google is knowledge. It’s where you go to find things out. And evil Jews are just the start of it. There are also evil women. I didn’t go looking for them either. This is what I type: “a-r-e w-o-m-e-n”. And Google offers me just two choices, the first of which is: “Are women evil?” I press return. Yes, they are. Every one of the 10 results “confirms” that they are, including the top one, from a site called sheddingoftheego.com, which is boxed out and highlighted: “Every woman has some degree of prostitute in her. Every woman has a little evil in her… Women don’t love men, they love what they can do for them. It is within reason to say women feel attraction but they cannot love men” (Cadwalladr 2016).

Cadwalladr hoped these were not the most popular/accurate/quality results, so she contacted Google and received the following response:

Our search results are a reflection of the content across the web. This means that sometimes unpleasant portrayals of sensitive subject matter online can affect what search results appear for a given query. These results don’t reflect Google’s own opinions or beliefs – as a company, we strongly value a diversity of perspectives, ideas and cultures (Cadwalladr 2016).

Jonathan Albright, Director of the Digital Forensics Initiative at the Tow Center for Digital Journalism, studied this too. He created a list of 306 widely circulated fake news sites and followed their lexica of hyperlinks, just as PageRank was designed to do (Albright et al. 2017). Essentially, Albright revealed through the hyperlinks that there had been a vast movement to manipulate PageRank’s popularity-based results to favor this subset of pages. Understanding how this is done is the key to breaking any illusion that the PageRank democracy is representative of popular opinion. Pages vote for one another by the number and quality of hyperlinks, which means, in oversimplified terms, that the creator of the link submits the vote. Albright’s experiment shows clearly that the democracy can be rigged, or even automated: his subset contained 23,000 pages and 1.3 million hyperlinks. It is very unlikely these represent the popular vote of the people making pages, and even more unlikely that they resemble the popular opinion of the searching public. Add to this the more recent and increasing deployment of Artificial Intelligence both to aid in content curation and to mass-produce webpages that are included in search results, and it is clear that Brin and Page’s original ideas about organizing the Internet by either popularity or democracy are dead.
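
To see the mechanics, consider a minimal sketch in Python, using the networkx library. It computes classic PageRank over a toy web, then adds an automated “farm” of cross-linked pages that all cite one target; the page names and farm size are invented for illustration, and the model is the original algorithm only, not Google’s production system.

```python
import networkx as nx

# A tiny "organic" web: a handful of pages linking to one another.
web = nx.DiGraph()
web.add_edges_from([
    ("blog", "news-site"),
    ("blog", "encyclopedia"),
    ("encyclopedia", "news-site"),
    ("news-site", "target"),
])
print("before:", nx.pagerank(web, alpha=0.85))

# Automate the "vote": fifty machine-generated pages that all cite the
# target and one another, mimicking the dense cross-linking Albright mapped.
for i in range(50):
    web.add_edge(f"farm-{i}", "target")
    web.add_edge(f"farm-{i}", f"farm-{(i + 1) % 50}")
print("after: ", nx.pagerank(web, alpha=0.85))
# The target's score jumps even though no human reader "voted" for it.
```

At Albright’s documented scale of 23,000 pages and 1.3 million hyperlinks, the same arithmetic moves whole subjects up the rankings.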

Facebook: Move Fast and Break Things

Hierarchies of information are big business, as Google has proven. Like traveling, the business of digital information is not in the destination; it’s the journey. Page visits are the blue ribbon of the web, and the almighty click has stripped away any attention to content itself. A famous sign hung in Facebook’s office, in big red lettering: “Move Fast and Break Things.” It is, indeed, the symbol of the company’s take on “Don’t be Evil.” Facebook was, for at least the four years following its launch, a community closed to advertisers. This meant that content circulated within the community by account holders, posting on their own behalf more or less. Sharing information this way, whether it was news articles, cat pictures, or political opinions, could be traced back to a user. When Facebook launched its initial public offering on May 18, 2012, the community-curated content dynamic broke. Facebook transformed its entire platform and mission from “making the world more open and connected” into an advertising company that gives “people the power to build community and bring the world closer together.” Under Meta, Facebook logs more than 2 billion daily active users (Dixon 2022). Facebook is popular for reasons that should be obvious by now: a cult of personality that so effectively brings like minds together, with individualized pseudo-authority to “friend,” “like,” “unfollow,” “block.” This may be the source of the widespread success of content curation that has seated Meta among the top 10 most valuable companies in the world, but the content is no longer managed by the community of active users any more than the search results over at Google. The newsfeed now contains ads from outside the friend circle, and the ever-changing “Trending” section consists of popular news selected by a concoction of user likes and shares, and Meta’s magic dust. What was once an exclusive friends network, whose origin story was the “.edu” email address requirement, is now an advertiser-consumer matchmaking app. It is spelled out plainly in Meta’s Transparency Center:

Facebook’s goal is to make sure you see posts from the people, interests, and ideas that you find valuable, whether that content comes from people you’re already connected to or from those you may not yet know. When you open Facebook and see Feed in your Home tab, you experience a mix of “connected content” (e.g., content from the people you’re friends with or are following, Groups you’ve joined, and Pages you’ve liked) as well as “recommended content” (e.g., content we think you’ll be interested in from those you may want to know). We also show you ads that are tailored to you (Meta Platforms 2024).
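
Read as engineering, that description is a mixing function. The sketch below models it in miniature; the Post type, the scores, and the ad cadence are hypothetical illustrations of the “connected content,” “recommended content,” and ads that Meta names above, not Meta’s actual ranking system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    source: str   # "connected", "recommended", or "ad"
    score: float  # platform-assigned relevance score

def build_feed(connected, recommended, ads, ad_every=3):
    """Rank organic posts by platform score, then slot in ads at a fixed cadence."""
    organic = sorted(connected + recommended, key=lambda p: p.score, reverse=True)
    feed, ad_queue = [], list(ads)
    for i, post in enumerate(organic, start=1):
        feed.append(post)
        if i % ad_every == 0 and ad_queue:  # an ad after every third organic post
            feed.append(ad_queue.pop(0))
    return feed

feed = build_feed(
    connected=[Post("connected", 0.9), Post("connected", 0.4)],
    recommended=[Post("recommended", 0.7)],
    ads=[Post("ad", 1.0)],
)
print([p.source for p in feed])
# ['connected', 'recommended', 'connected', 'ad'] under these toy scores
```

The point of the sketch is who holds the knobs: the scores, the mix, and the cadence belong to the platform, not to the friends whose posts are being ranked.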

As an aside, when auto-generating a citation for the Transparency Center page quoted above, the result is on point: “‘Log in or Sign up to View.’ n.d. Transparency.meta.com.”‌ In short, the introduction of advertisers into the closed community of Facebook sparked the downward spiral that we are struggling to reverse. Advertisers inside the social circle mean an exchange of data, but it is not a free exchange. The data flows overwhelmingly in one direction. As we converse, like and share, the advertiser listens.

The Gated Community

As the two leading aggregators of unprecedented amounts of market research, these companies effectively direct and manage what is accessible on the World Wide Web without having to take part in the ongoing battle for a neutral net. And while both gained credibility and user loyalty through long-held, outspoken advocacy of free and accessible information, their business models are now based almost exclusively on restriction. Users are contractually restricted to accessing only curated, monetized content through their services, in exchange for opting in to a vast digital infrastructure of behavioral analytics. Most of the world accesses the Internet through Alphabet and Meta, having opted in to participate as subjects of for-profit behavioral analytics, lured through their gateways on foundational promises of democracy and free information.

Despite these false promises, younger generations on social media may never have experienced a free Internet, where their clicks were not tracked, and the window for such freedom is shrinking further. ISPs have always been privy to our data but have not been allowed to monetize it as the curators have. With the rollbacks of Net Neutrality protections, the ISPs could join the data free-for-all, but their entry is late in the game. Alphabet and Meta continue to expand the transactional design of their Internet, tightening the terms under which we all surf and locking the information they curate behind a login screen—the gated community.

Putting this together: everyday users must pay an ISP to access the Internet, then exchange personal data to search it, then log in to view and interact with one another on the most popular social media websites. There are still niche social and search companies that allow users to interact without the paywall, but the majority of users choose to take the toll road into the gated community. Users can still choose to rent or purchase a domain in order to share their own intellectual property without having to grant the hosting party permission to monetize it, but many users instead choose to post original content through their social media accounts, where the owners of those servers are within their user agreements to harvest and sell it.

The question of free space is just part of it. There is also the question of free information, which has been the main subject of this essay. When posting on social media, for example, we are led to believe that what we write is sent out to our friends, but we know that the property owner will decide that for us. Having a personal account on a social site is comparable to renting a house because it lets you be with your friends, but the landlord prohibits curtains, enters without asking, and sometimes takes your stuff. The restriction of information is at the discretion of the landlord, and there is no obligation, implied or written, that we have a role in this decision process. All information carries inherent restrictions, too. We know that if we click on a news essay, for instance, that publisher is under similar constraints: they must pay to let us enter their gate, come into their home and eat their food. They recover that cost through a paywall, or through the cookies statement that pops up, or by asking us to whitelist their ads on our screens. The question emerges: if the information wasn’t free in newspapers, why do we expect it to become free on the Internet? In some ways, that’s a fair question, but what is unfair is the introduction of new gates to the community, each with its own tolls and taxes. It is not only the access point at which we arrive at the information; it is the service-oriented process of finding it. At one of the gates to the Internet community, you are asked, “What are you looking for?” The answer we give is the search query. We are not only given directions to the content (that would be the URL); we are given the door itself, the link directly to the information. The search engines do not promise to help us find the information. They promise that they have already found it for us. This has the makings of a utopia indeed, the world at our fingertips! But the search transaction is really the exact opposite: the queries deliver us to the information, not the information to us. The choice is narrowed down, organically we are told, so that we can choose from the very best sources related to our search. This is the trust we have in search. It is not that we used their service to find what we’re looking for within a myriad of information; it is that we were delivered to their preferred information based on the words we typed. That’s what targeting is.

It is safe to say that these companies have made the rules that suit their needs, and our choice is between their way and the highway. Opt-in culture is comparable to an entry fee at a movie theater, except that you have to keep paying while you watch the movie. Given the backward progress of Net Neutrality, which may be an idea of the past, it is difficult to see a path that puts the freedoms of the everyday user over those of the ISPs and content curators. Though a solution that protects the everyday user’s freedom to use their public Internet feels quite out of reach at this time, it is at least worth declaring that there is one, and that it is attainable, if only on the technical side. The solution to the problem is two-fold. First, the FCC can restore and extend the privacy protections approved in October 2016 but repealed under then-chairman Ajit Pai in March 2017; however, the cycle could continue if future chairpersons choose to roll them back again, so instead of limiting oversight power to the FCC, Congress can revisit the Net Neutrality and Broadband Justice Act of 2022, which has the potential to solidify Net Neutrality into law. Second, Net Neutrality should go beyond the ISP, into the domains, where the content curators monetize the clicks. ICANN already acts as a licenser in assigning domains, and so it can enforce, as licenser, in accordance with Net Neutrality law, which should include a standard for “basic service” for companies in the business of mass information dissemination.

For the first solution, it’s important to make a distinction between end-user license agreements (EULAs) and privacy policies. EULAs are not mandated by law, and their terms of service are fairly broad and unregulated. The most common types are explicit (clickwrap) and implicit (browsewrap). EULAs are common on the web and ubiquitous in OS software and mobile apps. While both kinds of documents are equally hostile and unreadable to the end user, it is more pressing that a solution be found within the privacy policies, as they must by law directly address data collection and use. Privacy policies are already subject to federal regulation; however, there is almost no regulation in effect at this time. Further, a user has no real option to opt out, since that option would mean choosing not to use the service in any capacity. Considering employers’ requirements for e-mail, as the most obvious example, the forceful nature of opting in to keep your job or do your homework reveals the high costs of the information access hierarchy. As mentioned, a gradual step toward protecting individual privacy was made by the FCC in 2016, and it should be expanded. The Broadband Consumer Privacy Rules (approved October 2016, repealed 2017) separated the standard all-or-nothing opt-in agreement into the following tiers, sketched in code after the list:

  • Opt-in: ISPs are required to obtain affirmative “opt-in” consent from consumers to use and share sensitive information. The rules specify categories of information that are considered sensitive, which include precise geo-location, financial information, health information, children’s information, social security numbers, web browsing history, app usage history and the content of communications.
  • Opt-out: ISPs would be allowed to use and share non-sensitive information unless a customer “opts-out.” All other individually identifiable customer information – for example, email address or service tier information – would be considered non-sensitive and the use and sharing of that information would be subject to opt-out consent, consistent with consumer expectations.
  • Exceptions to consent requirements: Customer consent is inferred for certain purposes specified in the statute, including the provision of broadband service or billing and collection. For the use of this information, no additional customer consent is required beyond the creation of the customer-ISP relationship.
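
To make the structure of that tiering concrete, here is a minimal sketch of how the three consent defaults might be encoded in software. The category names and the may_share helper are hypothetical, drawn loosely from the rule text above rather than from any real ISP system.

```python
from enum import Enum, auto

class Tier(Enum):
    OPT_IN = auto()    # sensitive data: affirmative consent required
    OPT_OUT = auto()   # non-sensitive data: usable unless the customer objects
    INFERRED = auto()  # consent inferred: billing, provision of service

# Hypothetical mapping of data categories to the tier the rules assign them.
CATEGORIES = {
    "geo_location": Tier.OPT_IN,
    "browsing_history": Tier.OPT_IN,
    "health_information": Tier.OPT_IN,
    "email_address": Tier.OPT_OUT,
    "service_tier": Tier.OPT_OUT,
    "billing_records": Tier.INFERRED,
}

def may_share(category: str, opted_in: set, opted_out: set) -> bool:
    """Return True if the ISP may use or share this category of customer data."""
    tier = CATEGORIES[category]
    if tier is Tier.OPT_IN:
        return category in opted_in       # silence means no
    if tier is Tier.OPT_OUT:
        return category not in opted_out  # silence means yes
    return True                           # consent inferred from the relationship

# With no customer action at all, the defaults diverge by sensitivity:
assert not may_share("browsing_history", opted_in=set(), opted_out=set())
assert may_share("email_address", opted_in=set(), opted_out=set())
```

The asymmetry in those defaults is the rule’s entire point: for sensitive categories silence means no, for non-sensitive categories silence means yes, and for billing no consent is asked at all.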

Obviously, these rules aren’t broad enough to shake us free from Alphabet’s and Meta’s information headlock, but they are the place to start. The ability to select from tiers of service might sound problematic, since it actually adds another hierarchy on top of the hyperlink hierarchy Google has put in place for us. However, tiered service offers an immediate and measurable improvement over all-or-nothing opt-in privacy agreements because, like initialing each page of a legal document to indicate it has been read, incremental agreement options mean more opportunities to stop and think. Alphabet and Meta have made improvements to their privacy policies, at least in transparency, but both companies retain full power to restrict content from those who will not allow their personal data to be sold. It is still, in essence, a strong-arm agreement. Regulation could ensure that access to any site is independent of that site’s own policy document. In other words, a universal agreement would grant access to all sites on the World Wide Web.

For the second solution, we must create and enforce a standard of “basic service” for companies in the business of mass information dissemination. Given that the underlying infrastructure of the World Wide Web has always depended on public funding, it would be consistent with that investment to regulate mass information disseminators that utilize the infrastructure for private profit. There is precedent for this: Newton Minow proposed that networks have an obligation to serve the public interest in his famous “Vast Wasteland” speech to the National Association of Broadcasters in 1961. Specifically, a universal set of “basic services” must be publicly accessible with a default “opt-out” privacy agreement. For example, a person not logged in to any account could utilize Google’s search engine under the inherent agreement that their search habits may not be shared or sold. Some may argue further that under a default opt-out agreement the data should not even be logged.

The enforcement model, too, is in place, if indirectly: the FCC must establish “basic services” for mass information disseminators and work directly with ICANN to enforce them. In simplest terms, the information contained behind the gates of these companies must be accessible without entering. Facebook’s login page is the most striking example: it is a moat around a castle, and the only drawbridge is your login. ICANN already has the authority to restrict domains for registration violations, and that authority can be expanded beyond trademark disputes and general uniformity of domain names. The organization’s internal governance structure needs modification, though, if it is to become an enforcing body. It currently consists of “governments and international treaty organizations, root server operators, groups concerned with the Internet’s security and the ‘at large’ community, meaning average Internet users” (ICANN 2014). When ICANN was established in partnership with the U.S. government, its mandate included operating in a bottom-up and democratic manner. However, ICANN has stated repeatedly in public meetings that the Board is not amenable to input from the global community. In addition, ICANN has not conducted its “Conflicts of Interest and Ethics Practices Review” since 2012 and gives no indication that it intends to schedule another (ICANN 2023).

Because of these governance concerns, under the proposed solution ICANN would remain limited to the management of the domain registry, but it could be directed by the FCC to restrict domains when for-profit mass information disseminators violate the “basic services” mandate. It should be said that this proposal does not take lightly the FCC’s historical charge of oversight, or its future one. It is crucial that the FCC return to regulating the venues of public information, a charge from which it has strayed far.

Dr. Adam Dean is the Program Director for Communication and Media at LMU. He holds a BA in Media Studies from Penn State University, an MA in Radio, Television and Film from the University of North Texas, and a PhD in Media, Art and Text from Virginia Commonwealth University. Before joining the faculty at LMU Dr. Dean taught Digital Media Arts at Barry University in Miami while working professionally as an Adobe Certified Expert for CBS and Univision. His research and professional work focus on digital democracy and include creative projects that bring students and community partners together to produce documentaries, podcasts and other educational media.

Bibliography

“Accountability Mechanisms – ICANN.” 2014. www.icann.org. https://www.icann.org/resources/pages/mechanisms-2014-03-20-en.

Albright, Jonathan, Janna Anderson and Lee Rainie. 2017. “The Future of Free Speech, Trolls, Anonymity and Fake News Online.” Pew Research Center: Internet, Science & Tech. March 29, 2017. https://www.pewresearch.org/internet/2017/03/29/the-future-of-free-speech-trolls-anonymity-and-fake-news-online/.

“Board of Directors’ Code of Conduct.” 2023. www.icann.org. January 21, 2023. https://www.icann.org/en/governance/code-of-conduct.

Bolt, Beranek and Newman Inc. 1981. A History of the ARPANET: The First Decade. Report prepared for DARPA. April 1, 1981. https://ia600108.us.archive.org/15/items/DTIC_ADA115440/DTIC_ADA115440.pdf.

Bowman, Emma. 2025. “Net Neutrality Is Struck, Ending a Long Battle to Regulate ISPs like Public Utilities.” NPR. January 3, 2025. https://www.npr.org/2025/01/03/nx-s1-5247840/net-neutrality-fcc-struck.

Brin, Sergey, and Lawrence Page. 2012. “Reprint Of: The Anatomy of a Large-Scale Hypertextual Web Search Engine.” Computer Networks 56 (18): 3825–33. https://doi.org/10.1016/j.comnet.2012.10.007.

Brin, Sergey, Lawrence Page, Rajeev Motwani and Terry Winograd. 1998. “The PageRank Citation Ranking: Bringing Order to the Web.” Technical Report, Stanford InfoLab. January 29, 1998.

Cadwalladr, Carole. 2016. “Google, Democracy and the Truth about Internet Search.” The Guardian. December 4, 2016. https://www.theguardian.com/technology/2016/dec/04/google-democracy-truth-internet-search-facebook.

Commerce, Science, and Transportation, and Albert Gore. 1991. High-Performance Computing Act of 1991, S. 272, 102nd Congress.

Dixon, S. 2022. “Number of daily active Facebook users worldwide as of 2nd quarter 2022.” Statista. https://www.statista.com/statistics/346167/facebook-global-dau/.

“FCC Restores Net Neutrality.” 2024. Fcc.gov. April 25, 2024. https://www.fcc.gov/document/fcc-restores-net-neutrality.

Gallagher, S. 2019. “50 Years Ago Today, the Internet Was Born. Sort Of.” Ars Technica, October 29, 2019. https://arstechnica.com/information-technology/2019/10/50-years-ago-today-the-internet-was-born-sort-of/.

Internet Live Stats. 2022. “Google Search Statistics.” Internetlivestats.com. https://www.internetlivestats.com/google-search-statistics/.

Katz, Ian. 2012. “Web Freedom Faces Greatest Threat Ever, Warns Google’s Sergey Brin.” The Guardian. April 15, 2012. https://www.theguardian.com/technology/2012/apr/15/web-freedom-threat-google-brin.

Markey, E. 2022. “S.4676 – 117th Congress (2021-2022): Net Neutrality and Broadband Justice Act of 2022.” Congress.gov. https://www.congress.gov/bill/117th-congress/senate-bill/4676.

Matsui, D. 2022. “H.R.8573 – 117th Congress (2021-2022): Net Neutrality and Broadband Justice Act of 2022.” Congress.gov. https://www.congress.gov/bill/117th-congress/house-bill/8573.

Meta Platforms. 2024. “Our Approach to Facebook Feed Ranking.” Transparency.meta.com. December 19, 2024. https://transparency.meta.com/features/ranking-and-content/.

NPR.org. 2013. “Google Chairman Eric Schmidt Plays Not My Job.” May 11, 2013. https://www.npr.org/2013/05/11/182873683/google-chairman-eric-schmidt-plays-not-my-job.

Obama, Barack. 2016. “Net Neutrality: A Free and Open Internet.” The White House. June 14, 2016. https://obamawhitehouse.archives.gov/net-neutrality.

Pelkey, Tina. 2017. “CHAIRMAN PAI CIRCULATES DRAFT ORDER to RESTORE INTERNET FREEDOM and ELIMINATE HEAVY-HANDED INTERNET REGULATIONS.” Federal Communications Commission. November 21, 2017. https://www.fcc.gov/document/chairman-pai-proposes-restore-internet-freedom.

Perez, Paloma. 2022. “CHAIRWOMAN ROSENWORCEL STATEMENT ON NET NEUTRALITY LEGISLATION.” Federal Communications Commission. July 28, 2022. https://www.fcc.gov/document/chairwoman-rosenworcel-statement-net-neutrality-legislation.

Stacy, Christopher C. 1982. “Getting Started Computing at the AI Lab.” MIT Artificial Intelligence Laboratory, September 7, 1982. https://dspace.mit.edu/bitstream/handle/1721.1/41180/AI_WP_235.

Statista Research Department. 2022. “Worldwide Desktop Market Share of Leading Search Engines from January 2010 to July 2022.” https://www.statista.com/statistics/216573/worldwide-market-share-of-search-engines/.

“Uniform Domain-Name Dispute-Resolution Policy – ICANN.” 2016. Icann.org. 2016. https://www.icann.org/resources/pages/help/dndr/udrp-en.

[i] On October 29, 1969, Leonard Kleinrock successfully transmitted the first message over the ARPANET from UCLA to the Stanford Research Institute. The message received was “LO”; the intended message, “LOGIN”, was cut short by a computer crash.