Speech Or Silence: Big Tech And Free Speech Go Before The Supreme Court
How Should The Courts Rule On Section 230?
With perhaps less fanfare than the case deserves, the Supreme Court yesterday heard arguments for and against social media’s historic liability shield regarding comments and content made by users.
The Supreme Court agreed on Monday to decide whether social media platforms may be sued despite a law that shields the companies from legal responsibility for what users post on their sites. The case, brought by the family of a woman killed in a terrorist attack, argues that YouTube’s algorithm recommended videos inciting violence.
The case, Gonzalez v. Google, No. 21-1333, concerns Section 230 of the Communications Decency Act, a 1996 law intended to nurture what was then a strange and nascent thing called the internet. Written in the era of online message boards, the law said that online companies are not liable for transmitting materials supplied by others.
The case represents the first time the Supreme Court has agreed to weigh in on a statute many consider foundational to the growth of the Internet as we know it today, one that helped power the rise of social media giants such as Google, Facebook, and Twitter.
Enacted when Facebook founder Mark Zuckerberg was just 11 years old and Google’s creation still two years off, Section 230 is seen as a fundamental law of the internet and considered inviolable by its staunch defenders.
Section 230 was part of the Communications Decency Act, an anti-pornography law signed in 1996 that helped set the rules of the road for the internet, which was still in its infancy as an online playground for all.
The idea was to protect the then embryonic internet sector from cascading lawsuits and to allow it to flourish, while encouraging tech companies to moderate their content.
As the first substantial review of Section 230 by the Court, the case has the potential to redefine both the scope of free speech protections for online content and the degree to which Big Tech censorship is permissible. For that reason alone, this case is of immediate concern to everybody.
To understand the case being put before the Court, we should begin with the text of the statute itself, Section 230 of Title 47 of the US Code.
The particular section the Court is being asked to review is subsection (c)(1)1:
(c) Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
This provision forms an important liability shield for internet service providers and social media platforms, as it precludes them from ever being held liable for the social media postings made by users of those platforms. While a social media user or content provider can be found liable for defamation or other objectionable speech, the social media platform itself is placed beyond the reach of such liability by this statute2.
Section 230(c)(1) of the CDA states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Section 230(f) defines “interactive computer service” as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server;” and defines “information content provider” as “any person or entity that is responsible, in whole or in part, for the creation or development of information....” Thus, for instance, while one could hold a YouTube video uploader (an information content provider) liable for defamation, one could not hold YouTube (an interactive computer service) liable as a “publisher or speaker” of that video, because, under CDA § 230(c)(1) the video is “information provided by another information content provider”—i.e. the video uploader.
This principle seems simple enough, and arguably is common sense. Neither Google nor Facebook nor Twitter has any direct connection to any user or provider of content on their respective platforms, and the Supreme Court has already ruled, in Packingham v. North Carolina3, that such platforms, aka “cyberspace”, form a sort of “digital commons” for all to access freely.
A fundamental principle of the First Amendment is that all persons have access to places where they can speak and listen, and then, after reflection, speak and listen once more. The Court has sought to protect the right to speak in this spatial context. A basic rule, for example, is that a street or a park is a quintessential forum for the exercise of First Amendment rights. See Ward v. Rock Against Racism, 491 U. S. 781, 796 (1989) . Even in the modern era, these places are still essential venues for public gatherings to celebrate some views, to protest others, or simply to learn and inquire.
While in the past there may have been difficulty in identifying the most important places (in a spatial sense) for the exchange of views, today the answer is clear. It is cyberspace—the “vast democratic forums of the Internet” in general, Reno v. American Civil Liberties Union, 521 U. S. 844, 868 (1997) , and social media in particular. Seven in ten American adults use at least one Internet social networking service. Brief for Electronic Frontier Foundation et al. as Amici Curiae 5–6. One of the most popular of these sites is Facebook, the site used by petitioner leading to his conviction in this case. According to sources cited to the Court in this case, Facebook has 1.79 billion active users. Id., at 6. This is about three times the population of North America.
Distinguishing platforms from publishers is, in this doctrine, nothing more than statutory recognition of the value of this technological reality4.
Section 230 embodies the principle that we should all be responsible for our own actions and statements online, but generally not those of others. The law prevents most civil suits against users or services that are based on what others say.
Congress passed this bipartisan legislation because it recognized that promoting more user speech online outweighed potential harms. When harmful speech takes place, it’s the speaker that should be held responsible, not the service that hosts the speech.
Section 230’s protections are not absolute. It does not protect companies that violate federal criminal law. It does not protect companies that create illegal or harmful content. Nor does Section 230 protect companies from intellectual property claims.
There is, however, a wrinkle to the case: Google/YouTube uses various algorithms to “recommend”, or highlight, certain content based upon a number of parameters of a social media content consumer (as do Facebook and Twitter). These recommendations form the basis of the litigation against Google5, which is being sued by the family of Nohemi Gonzalez, a college student murdered during an ISIS terror attack in Paris in November 2015.
The case was brought by the family of Nohemi Gonzalez, a 23-year-old college student who was killed in a restaurant in Paris during the November 2015 terrorist attacks, which also targeted the Bataclan concert hall. The family’s lawyers argued that YouTube, a subsidiary of Google, had used algorithms to push Islamic State videos to interested viewers, using the information that the company had collected about them.
Google’s algorithms are at the core of the case being brought before the Court, as laid out in the plaintiffs’ petition for a writ of certiorari.
In the decades since the enactment of section 230, the types of services being provided on the internet have changed dramatically. The issue in this case, as in Force and Dyroff, arose out of one of those fundamental changes. Today the income of many large interactive computer services is based on advertising, not on the subscriptions that in years past were the basis of the income of firms such as Prodigy and CompuServe. Internet firms that rely on advertising have a compelling interest in increasing the amount of time that individual users spend at their websites. The longer a user is on a website, the more advertising the user will be exposed to; that in turn will increase the revenue of the website operator. That financial structure has given rise to the now widespread practice of recommending (for want of any agreed upon better term) material to website users, in the hope of inducing them to look at yet more material and thus to remain ever longer on that website. Many of those recommendations are based on algorithms, which review all the information an interactive service provider has about each particular user, and selects for recommendation the material in which that user is most likely to be interested. “[A]lgorithms [are] devised by these companies to keep eyes focused on their websites.... ‘[T]hey have been designed to keep you online’....” App. 97a n.3 (Gould, J., dissenting) (quoting Anne Applebaum, Twilight of Democracy—The Seductive Lure of Authoritarianism (1st ed. 2020)). Algorithm-based recommendations have been enormously successful, and thus lucrative. As Judge Katzmann noted, one analysis concluded that 70% of the time that users spend on YouTube is the result of YouTube’s algorithm-based recommendations. 934 F.3d at 87.

The complaint alleged that that assistance and aid took several forms. Plaintiffs asserted that Google had knowingly permitted ISIS to post on YouTube hundreds of radicalizing videos inciting violence and recruiting potential supporters to join the ISIS forces then terrorizing a large area of the Middle East, and to conduct terrorist attacks in their home countries. Additionally, and central to the issue in this appeal, the complaint alleged that Google affirmatively “recommended ISIS videos to users.” Third Amended Complaint, ¶ 535. Those recommendations were one of the services that Google provided to ISIS. Google selected the users to whom it would recommend ISIS videos based on what Google knew about each of the millions of YouTube viewers, targeting users whose characteristics indicated that they would be interested in ISIS videos. Id., ¶¶ 535, 549, 550. The selection of the users to whom ISIS videos were recommended was determined by computer algorithms created and implemented by Google. Because of those recommendations, users “[we]re able to locate other videos and accounts related to ISIS even if they did not know the correct identifier or if the original YouTube account had been replaced....” Id., ¶ 549.
The short version: the plaintiffs argue that Google did (and does) more than just provide a platform for content. In their argument, Google takes concrete steps that help provide the audience as well, and thus should not receive the protection afforded by Section 230(c)(1).
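To make the mechanism at the heart of the dispute more concrete, here is a minimal, purely illustrative sketch of how an engagement-driven recommender of the kind the petition describes might work. It does not represent YouTube’s actual systems; the data fields, the interest-overlap scoring, and the engagement weighting are all hypothetical assumptions chosen for illustration.

```python
# Purely illustrative sketch of an engagement-driven recommender.
# This does NOT reflect YouTube's actual algorithm; the fields, scoring
# scheme, and weights are hypothetical assumptions for illustration only.

from dataclasses import dataclass, field


@dataclass
class Video:
    video_id: str
    topics: set[str]            # topics the video is associated with
    watch_time_minutes: float   # aggregate engagement the video has generated


@dataclass
class UserProfile:
    user_id: str
    inferred_interests: set[str]              # inferred from prior viewing, searches, etc.
    watched: set[str] = field(default_factory=set)


def score(video: Video, user: UserProfile) -> float:
    """Score a video for a user: topical overlap weighted by past engagement."""
    overlap = len(video.topics & user.inferred_interests)
    return overlap * video.watch_time_minutes


def recommend(candidates: list[Video], user: UserProfile, k: int = 5) -> list[Video]:
    """Return the k unwatched videos the user is predicted to engage with most."""
    unseen = [v for v in candidates if v.video_id not in user.watched]
    return sorted(unseen, key=lambda v: score(v, user), reverse=True)[:k]
```

The legal question is whether that last sorting step, in which the service’s own code decides which third-party videos to surface to which viewers, is still merely the publication of “information provided by another information content provider”, or whether it is conduct attributable to the platform itself.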
Unsurprisingly, Google rejects this interpretation both of the law and of the function of their algorithms.
The question presented asks whether section 230 applies to interactive computer service providers’ “targeted recommendations” of third-party content. Pet. i. As petitioners (at 5) acknowledge, the circuits are not divided on that issue. This Court recently denied petitions for certiorari by the same petitioners’ counsel in the two other cases to raise this issue. Force, 140 S. Ct. 2761; Dyroff, 140 S. Ct. 2761. And this Court has denied certiorari in at least 20 cases, from most circuits, raising broader section 230 issues. The Court should deny this petition as well. Nothing has strengthened the case for certiorari since this Court last denied review. Indeed, this case would be a worse vehicle to address section 230 than those two previous petitions. The only development petitioners (at 6) identify is the decision below, which reached the same result as the earlier Ninth Circuit decision in Dyroff, 934 F.3d 1093, and the Second Circuit decision in Force, 934 F.3d 53. The count thus remains 2–0. Further, the decision below is correct. Section 230 bars claims that treat websites as publishers of third-party content. Publishers’ central function is curating and displaying content of interest to users. Petitioners’ contrary reading contravenes section 230’s text, lacks a limiting principle, and risks gutting this important statute.
Fundamentally, Google’s position is that recommending and arranging third-party content is simply part of what any publisher of others’ material does, and that Section 230 therefore continues to shield it as merely an interactive computer service as defined within Section 230(f)(2):
The term “interactive computer service” means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.
This is not the first time social media algorithms have come under legal scrutiny. In 2021, Congress contemplated several bills that would limit or even eliminate many of the legal protections afforded by Section 230, based on the nature and function of a platform’s algorithms.
Lawmakers have spent years investigating how hate speech, misinformation and bullying on social media sites can lead to real-world harm. Increasingly, they have pointed a finger at the algorithms powering sites like Facebook and Twitter, the software that decides what content users will see and when they see it.
Some lawmakers from both parties argue that when social media sites boost the performance of hateful or violent posts, the sites become accomplices. And they have proposed bills to strip the companies of a legal shield that allows them to fend off lawsuits over most content posted by their users, in cases when the platform amplified a harmful post’s reach.
However, even within the legislation proposed in 2021, achieving a coherent and legally supportable definition of social media and search algorithms proved to be a complex undertaking.
Algorithms are everywhere. At its most basic, an algorithm is a set of instructions telling a computer how to do something. If a platform could be sued anytime an algorithm did anything to a post, products that lawmakers aren’t trying to regulate might be ensnared.
Some of the proposed laws define the behavior they want to regulate in general terms. A bill sponsored by Senator Amy Klobuchar, Democrat of Minnesota, would expose a platform to lawsuits if it “promotes” the reach of public health misinformation.
Ms. Klobuchar’s bill on health misinformation would give platforms a pass if their algorithm promoted content in a “neutral” way. That could mean, for example, that a platform that ranked posts in chronological order wouldn’t have to worry about the law.
Other legislation is more specific. A bill from Representatives Anna G. Eshoo of California and Tom Malinowski of New Jersey, both Democrats, defines dangerous amplification as doing anything to “rank, order, promote, recommend, amplify or similarly alter the delivery or display of information.”
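The distinction these bills are reaching for can be seen in a deliberately simplified sketch. The code below is hypothetical and reflects no platform’s actual ranking logic; it merely contrasts the “neutral” chronological ordering contemplated by the Klobuchar bill with the kind of ranking and amplification swept in by the Eshoo-Malinowski definition.

```python
# Toy illustration of the legal distinction between "neutral" chronological
# ordering and algorithmic amplification. Hypothetical; no platform's real
# ranking code is represented here.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    created_at: float            # Unix timestamp of when the post was made
    predicted_engagement: float  # the platform's own estimate of how "sticky" the post is


def chronological_feed(posts: list[Post]) -> list[Post]:
    """Newest posts first; the ordering uses no judgment about the content."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)


def amplified_feed(posts: list[Post]) -> list[Post]:
    """Rank posts by the platform's own engagement prediction, i.e. the
    platform 'promotes' and 'amplifies' some third-party content over other content."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
```

Both functions are “algorithms” in the basic sense noted above; the proposed legislation attempts to draw the liability line at the second, where the platform’s own predictions decide whose speech reaches the larger audience.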
Free speech advocacy groups such as the Electronic Frontier Foundation filed amicus curiae briefs supporting Google’s position in the case. The EFF in particular argues in its amicus filing that a platform’s algorithms are little more than the cyberspace equivalent of a print newspaper’s discretionary decisions about article placement, fonts, and the like, decisions which have long been considered protected under the First Amendment.
In Gonzalez v. Google, the petitioning plaintiffs make a radical argument about Section 230. They have asked the Supreme Court to rule that Section 230 doesn’t protect recommendations we get online, or how certain content gets arranged and displayed. According to the plaintiffs, U.S. law allows website and app owners to be sued if they make the wrong recommendation.
In our brief, EFF explains that online recommendations and editorial arrangements are the digital version of what print newspapers have done for centuries: direct readers’ attention to whatever might be most interesting to them. Newspapers do this with article placement, font size, and use of photographs. Deciding where to direct readers is part of editorial discretion, which has long been protected under the First Amendment.
On the other side are arguments advanced by groups such as the National Police Foundation, which takes a much darker view of social media and thus argues, in its amicus curiae brief in support of the petitioners, for greater content regulation and less freedom.
Social media are known means to promote and enable radicalization and violence (Part I), and police are suffering an epidemic of social-media fueled attacks (Part II). Yet this and similar cases were rejected on § 230 grounds and for not establishing an “act of international terrorism” and proximate cause, all needing correction (Part III). Section 230(c)(1) should be construed to not protect social-media recommendations (III.A) and “act of international terrorism” and proximate-cause standards should be clarified and adjusted in light of the harm posed by terrorist and radical groups (III.B-C), all of which will help damp anti-LEO attitudes and attacks.
Suffice it to say, one or the other of these views will likely prevail when the Court issues its final ruling later this year.
Yet there is a perverse irony to this case: by focusing entirely on Section 230(c)(1), it completely overlooks the far more noxious subsequent passage, (c)(2)6, which explicitly gives social media platforms complete legal immunity for actively censoring and suppressing content.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
The omission is glaring on both sides. Not only do the petitioners not argue that Section 230(c)(2) creates an affirmative duty to restrict various content7—which duty Google would notionally have failed to perform—but the argument of the EFF in its amicus brief is predicated on the presumption that, should the petitioners prevail before the Court, the end result will be a “censored Internet”:
If Courts Narrow Section 230, We’ll See A Censored Internet
If the plaintiffs’ arguments are accepted, and Section 230 is narrowed, the internet as we know it could change dramatically.
First, online platforms would engage in severe censorship. As of April 2022, there were more than 5 billion people online, including 4.7 billion using social media platforms. Last year, YouTube users uploaded 500 hours of video each minute. Requiring pre-publication human review is not feasible for platforms of even moderate size. Automated tools, meanwhile, often result in censorship of legal and valuable content created by journalists, human rights activists, and artists. Many smaller platforms, unable to even access these flawed automated tools, would shut down.
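The arithmetic behind that feasibility claim is straightforward: the 500 hours of video uploaded each minute that EFF cites works out to 500 × 60 × 24 = 720,000 hours of new footage per day. Even on the generous, purely illustrative assumption that a human reviewer could screen eight hours of video in a working day, YouTube alone would need on the order of 90,000 full-time reviewers before a single upload could go live, and that is just one platform.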
The flaw in the EFF’s argument is simply this: we already have a censored Internet, as I can personally attest, having been banned from LinkedIn for posting CDC statistics regarding the COVID-19 pandemic.
Nor am I alone in this, as Internet commentators such as Steve Kirsch can attest.
And long before “health misinformation” was the boogeyman du jour, notable online personalities such as Stefan Molyneux were being deplatformed for having the “wrong” ideas or saying the “wrong” things.
The reality of a censored Internet was conclusively established when Elon Musk released the “Twitter files”, which Senator Ted Cruz has announced will be the basis of hearings before the Senate Commerce Committee into Big Tech censorship.
Cruz said, “[I]n my role as Ranking Member of the Commerce Committee, today, I launched a full investigation into big tech censorship, where we are going to take the Twitter files, we’re going to take what Elon Musk has made public and use that as a roadmap to go after Facebook, to go after Google, to go after YouTube, to go after TikTok, to go after all of big tech that is trying to silence conservatives. And we are going to bring accountability, we’re going to bring transparency, we’re going to shine a light and expose their collusion with Democrats and the deep state to try to silence conservatives.”
If the argument for preserving the protections of Section 230(c)(1) is to prevent a censored Internet, those protections have already failed us at every turn.
My argument has always been rhetorically simple: Free Speech is a moral imperative. Our duty to ourselves and to our fellow man is not to silence any speech, but to meet objectionable speech with alternative speech, to challenge bad ideas and toxic rhetoric with good ideas and positive rhetoric.
Accordingly, I reject even the existence of concepts such as “hate speech” and “misinformation”. These labels ultimately are merely a dodge to avoid engaging in debate with those whom we personally dislike.
Even the petitioners in this case acknowledge that Google enjoys Section 230 immunity for allowing ISIS-related content on its YouTube platform, yet they paradoxically argue that Google should not receive Section 230 protection for algorithmically recommending this same allowed content. This argument is simply irrational. Speech cannot be protected in one sense and unprotected in another; it is either protected or it is not protected—and under the First Amendment, all speech is protected.
We should also note that the petitioners, while arguing that the content in question facilitated ISIS recruitment and thus the spread of terrorism, are not arguing that it went so far as to incite imminent lawless action, which Brandenburg v. Ohio8 established as the necessary threshold for transforming notionally objectionable speech into potentially criminal acts.
Freedoms of speech and press do not permit a State to forbid advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.
Brandenburg is not even mentioned within their petition for a writ of certiorari. If the content is not claimed to incite imminent lawless action, there is no foundation for arguing that it crosses over from Constitutionally protected speech into potentially tortious and actionable conduct.
If the content in question was not so objectionable as to raise legitimate questions about its potential to incite imminent lawless action, it is difficult to see how even a traditional publisher could be held to account for making that content available, let alone a social media platform or interactive computer service.
However, we should not overlook the reality that the functional effect of Section 230 as a whole is not to facilitate free speech on the Internet, but to inhibit and restrict it. When (c)(1) and (c)(2) are taken together—and the plain construction of the statute demands that they be taken together—they are nothing less than an unrestricted license for Big Tech to arbitrarily censor however they will. And censor they have, repeatedly.
Regardless of the stated legislative intent of Section 230, that has been its practical consequence. Despite all assertions to the contrary, Section 230 is at its core the one thing the First Amendment explicitly disallows: a law abridging the freedom of speech.
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
By immunizing social media platforms from any liability for suppressing ostensibly objectionable speech (which the title of subsection (c) explicitly claims as its purpose), Congress effectively weaponized social media platforms to become Orwellian enforcers of not merely “correct” speech but also “correct” thought—which Big Tech gets to arbitrarily and unilaterally define. We have only to consider the innumerable examples of Big Tech censorship (which greatly exceed the capacity of a single Substack article to document), as well as the documented collusion on display within the Twitter Files between Big Tech and elements of the Federal Government to see that this is indeed the case.
Thus the greatest irony of all in this case is what the Constitutionally correct decision should be: The reason Google should not be held liable for the ISIS content is that Section 230 is itself unconstitutional. The ISIS content itself was and is speech, and thus should automatically receive the full gamut of First Amendment protections. As the content itself is protected by the First Amendment, so too must be any and all recommendations, algorithms, or any other form of content curation. No liability can rationally attach when the speech itself is categorically protected by the First Amendment.
At a minimum, (c)(2) is clearly unconstitutional and needs to be excised from US law—at which point the utility of (c)(1) becomes unclear.
The Internet would indeed benefit from a law immunizing social media platforms from any claims of legal liability arising from user-posted content, while debarring those same platforms from censoring and suppressing content they arbitrarily deem “objectionable”. Unfortunately, Section 230, as currently written, is not that law.
Sharp-Wasserman, J. “Section 230(c)(1) of the Communications Decency Act and the Common Law of Defamation: A Convergence Thesis”. Science and Technology Law Review, vol. 20, no. 1, Jan. 2019, doi:10.7916/stlr.v20i1.4770.
Electronic Frontier Foundation. Section 230. https://www.eff.org/issues/cda230.
That argument is actually part of the substance of another pending Supreme Court Case, Twitter, Inc., et al., v. Mehier Taamneh, et al (Docket 21-1496), which will be heard Wednesday, February 22, 2023.