Speech Or Silence: SCOTUS Declines To Review Section 230
Big Tech's Censorship License Is Left Intact...For Now.
In his poem “The Hollow Men”, T. S. Eliot concluded:
This is the way the world ends
This is the way the world ends
This is the way the world ends
Not with a bang but a whimper.
Similarly, it appears that seemingly momentous cases before the Supreme Court end not with a deft ruling and a moment of legal clarity, but rather with an anticlimactic kicking of the legal can down the jurisprudential road.
Such was the case yesterday in the twin rulings by the Supreme Court in Gonzalez v Google1 and Twitter v Taamneh2, both of which pointedly declined to even take up the questions of free speech and censorship raised in their respective petitions for certiorari.
But in a unanimous opinion by Justice Clarence Thomas, the court ruled that the connection between what the social media companies did and the attack was "far removed." The families, Thomas wrote, "failed to allege that defendants intentionally provided any substantial aid" or "otherwise consciously participated" in the Istanbul attack.
The questions of the validity and applicability of what is now generally known as “Section 230”3—an abbreviated reference to the Communications Decency Act, Section 230, a law passed in 1996 intended to shield platforms from liability for the noxious and potentially tortious speech of their users—were not taken up at all by the Court. The rulings leave Section 230 very much intact, but also leave potential Constitutional challenges to the law very much intact.
In sum, the rulings maintain the status quo. For this reason, advocates on both sides of the Section 230 debate should not read too much into them.
In an interesting bit of role reversal, while much of the initial discussion in the media centered on the Google case, the Court opted to present the substance of its legal reasoning in Twitter v Taamneh. However, as both cases cover the same legal terrain, the discussion of one arguably becomes the discussion of the other, which was ultimately the Court’s reasoning in issuing a short per curiam opinion vacating the judgment in the Google4 case and remanding it to the lower courts.
We need not resolve either the viability of plaintiffs’ claims as a whole or whether plaintiffs should receive further leave to amend. Rather, we think it sufficient to acknowledge that much (if not all) of plaintiffs’ complaint seems to fail under either our decision in Twitter or the Ninth Circuit’s unchallenged holdings below. We therefore decline to address the application of §230 to a complaint that appears to state little, if any, plausible claim for relief. Instead, we vacate the judgment below and remand the case for the Ninth Circuit to consider plaintiffs’ complaint in light of our decision in Twitter.
In both cases, however, a key contention was that, because their algorithms highlighted and putatively promoted pro-ISIS content, neither Google nor Twitter was entitled to the liability shield of Section 230.
The short version: the plaintiffs argue that Google did (and does) more than just provide a platform for content. In their argument, Google takes concrete steps that help provide the audience as well, and thus should not receive the protection conferred by Section 230(c)(1).
In Twitter v Taamneh, and thus in Gonzalez v Google by imputation, the Court declined to even take up the Section 230 question, choosing instead to focus on other aspects of the legal claims presented. Perhaps unsurprisingly, corporate media is spinning this as a “clear win” both for Big Tech and for the durability of Section 230 overall.
The court’s decision to sidestep Section 230 in the decisions is a victory for Google and other social media companies, who argued that any change to the provision could upend the Internet, leaving companies exposed to lawsuits over their efforts to police offensive posts, photos and videos on their service.
Corporate media does get this much correct: In both rulings the Supreme Court pointedly declined to examine any aspect of Section 230.
Depending on how the Supreme Court handled it, the Gonzalez v. Google case in particular had the potential to change legal interpretations of Section 230 of the Communications Decency Act, weakening the law shielding tech companies from liability for content uploaded by their users. Because the court decided that the tech companies were not liable for the Islamic State’s actions under existing anti-terrorism law, it did not delve further into reexamining Section 230’s protections around those issues.
Indeed, at the time of oral arguments, a number of observers, myself included, presumed that the Section 230 aspects of both cases would be the key issues addressed by the Court.
Suffice it to say, one or the other of these views will likely prevail when the Court issues its final ruling later this year.
Suffice it to say, we were all wrong.
It is worth noting that Alex Berenson takes a narrower view of the Court’s rulings, reading them as an assessment of the relatively weak legal arguments presented:
That’s because the Court - rightly - found the two lawsuits were too weak on their own merits to proceed. In both cases, the families of Islamist terror victims claimed the companies had “aided and abetted” the attacks by hosting videos from the Islamic State and other terrorists. But the families did not allege that the companies had done anything but show videos which some of the terrorists might have seen.
Somewhat predictably, Berenson then hints that his own litigation regarding Twitter will prove more impactful, and perhaps dispositive, for the future of Section 230.
So it remains to be seen what the Court will do in a case when the companies were neither “passive” nor “largely indifferent” to the way they treated users - or when they faced federal pressure to take action, at the apparent risk of losing their precious 230 protection.
Like Berenson v Biden.
Whether Berenson’s own case will prove to be the grand statement on Free Speech he hopes for remains to be seen. I have no comment on his case and will not take up that topic here.
However, we should also note that the Court’s ruling in Twitter v Taamneh, while declining to take up Section 230 explicitly, does offer some indication of how the Court might view future Section 230 challenges.
A key contention in Twitter was that Twitter was liable for damages under the Justice Against Sponsors of Terrorism Act (JASTA)5, which allows victims of terror attacks to pursue claims against anyone who “aids and abets” an act of terrorism6.
In an action under subsection (a) for an injury arising from an act of international terrorism committed, planned, or authorized by an organization that had been designated as a foreign terrorist organization under section 219 of the Immigration and Nationality Act (8 U.S.C. 1189), as of the date on which such act of international terrorism was committed, planned, or authorized, liability may be asserted as to any person who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed such an act of international terrorism.
In the Google case, the “aiding and abetting” was notionally the result of Google’s algorithmic “recommending” of ISIS-related content, whereas in Twitter it was the presumed failure of Twitter to properly police its platform to remove such content.
Yet while both cases specifically raised Section 230 in their petitions for certiorari, Justice Thomas’ ruling opted instead to address the substance of the core claim under 18 USC 2333(d)(2)7.
As always, we start with the text of §2333. See Bartenwerfer v. Buckley, 598 U.S. 69, 74 (2023). Here, that text immediately begs two questions: First, what exactly does it mean to “aid and abet”? Second, what precisely must the defendant have “aided and abetted”?
As JASTA’s own findings make specific reference to an earlier federal case involving secondary liability, Halberstam v. Welch8, Thomas gives a fairly lengthy discourse on that case to establish the framework for understanding the phrase “aid and abet”.
The key conclusion Thomas reaches9 in his discourse on Halberstam is that the proper framework for assessing the phrase “aid and abet” is the common-law understanding of those principles.
We therefore must ascertain the “basic thrust” of Halberstam’s elements and determine how to “adap[t]” its framework to the facts before us today. See 705 F. 2d, at 478, 489, and n. 8. To do so, we turn to the common law of aiding and abetting upon which Halberstam rested and to which JASTA’s common-law terminology points.
This becomes the fulcrum for Thomas’ reasoning, for the common-law understanding of “aid and abet” has always carried a presumption of limitation10.
Importantly, the concept of “helping” in the commission of a crime—or a tort—has never been boundless. That is because, if it were, aiding-and-abetting liability could sweep in innocent bystanders as well as those who gave only tangential assistance. For example, assume that any assistance of any kind were sufficient to create liability. If that were the case, then anyone who passively watched a robbery could be said to commit aiding and abetting by failing to call the police. Yet, our legal system generally does not impose liability for mere omissions, inactions, or nonfeasance; although inaction can be culpable in the face of some independent duty to act, the law does not impose a generalized duty to rescue. See 1 W. LaFave, Substantive Criminal Law §6.1 (3d ed. 2018) (LaFave); W. Keeton, D. Dobbs, R. Keeton, & D. Owen, Prosser and Keeton on Law of Torts 373–375 (5th ed. 1984) (Prosser & Keeton). Moreover, both criminal and tort law typically sanction only “wrongful conduct,” bad acts, and misfeasance. J. Goldberg, A. Sebok, & B. Zipursky, Tort Law: Responsibilities and Redress 31 (2004). Some level of blameworthiness is therefore ordinarily required. But, again, if aiding-and-abetting liability were taken too far, then ordinary merchants could become liable for any misuse of their goods and services, no matter how attenuated their relationship with the wrongdoer. And those who merely deliver mail or transmit emails could be liable for the tortious messages contained therein. See Restatement (Second) of Torts §876, Comment d, Illus. 9, p. 318 (1979) (cautioning against this result).
In order to be found to “aid and abet” a criminal act, one must have some definitive connection to the related criminal enterprise.11
To keep aiding-and-abetting liability grounded in culpable misconduct, criminal law thus requires “that a defendant ‘in some sort associate himself with the venture, that he participate in it as in something that he wishes to bring about, that he seek by his action to make it succeed’ ” before he could be held liable.
Consequently, in order for any action under JASTA to proceed, the litigant has to show some definitive connection between the alleged “sponsor” of terrorism and at least one specific terrorist act12.
To summarize the requirements of §2333(d)(2), the phrase “aids and abets, by knowingly providing substantial assistance,” points to the elements and factors articulated by Halberstam. But, those elements and factors should not be taken as inflexible codes; rather, they should be understood in light of the common law and applied as a framework designed to hold defendants liable when they consciously and culpably “participate[d] in” a tortious act in such a way as to help “make it succeed.” Nye & Nissen, 336 U. S., at 619 (internal quotation marks omitted). And the text requires that defendants have aided and abetted the act of international terrorism that injured the plaintiffs—though that requirement does not always demand a strict nexus between the alleged assistance and the terrorist act
Neither Gonzalez v Google nor Twitter v Taamneh established any such strict nexus13.
At bottom, then, the claim here rests less on affirmative misconduct and more on an alleged failure to stop ISIS from using these platforms. But, as noted above, both tort and criminal law have long been leery of imposing aiding-and-abetting liability for mere passive nonfeasance. To show that defendants’ failure to stop ISIS from using these platforms is somehow culpable with respect to the Reina attack, a strong showing of assistance and scienter would thus be required. Plaintiffs have not made that showing.
Because neither case showed a definitive linkage between the social media sharing of ISIS-generated content and specific ISIS acts of terrorism, both cases failed before any consideration of Section 230 even became necessary: liability under JASTA had not been established. With or without the liability protections of Section 230, neither Twitter nor Google could be held liable on the legal theories advanced in these cases, because in neither case had Google or Twitter been shown to have actually “aided and abetted” ISIS.
Yet this stance also hearkens back to another legal doctrine that can be applied to the speech at issue in such cases: the Brandenburg14 test.
This was a point I had made when the cases first went before the Supreme Court for oral argument—neither case argued that the content in question incited imminent lawless action.
We should also note that the petitioners, while arguing that the content in question facilitated ISIS recruitment and thus the spread of terrorism, are not arguing the content in question went so far as to incite an imminent lawless action, which Brandenburg v Ohio established as the necessary threshold for transforming notionally objectionable speech into potentially criminal acts.
Freedoms of speech and press do not permit a State to forbid advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.
Brandenburg is not even mentioned within their petition for certiorari. If the content is not claimed to incite imminent lawless action, there is no foundation for arguing that the content crosses over from Constitutionally protected speech into potentially tortious and actionable conduct.
Although Thomas frames the distinction in terms of JASTA and 18 USC 2333(d)(2), the same premise holds: because the content itself was only tangential to the acts of terror giving rise to the claims in both cases, there was no incitement to imminent lawless action, which again precludes imputing liability for the specific terrorist acts at issue to the noxious content shared on social media.
With the Brandenburg test not even at issue in either case, not only is the speech itself not actionable, but neither can there be any claim for secondary liability.
Justice Ketanji Brown Jackson, in her concurring opinion in Twitter, suggests this is indeed the case15.
I join the opinion of the Court with the understanding that today’s decisions are narrow in important respects. In this case and its companion, Gonzalez v. Google, 598 U. S. ___ (2023) (per curiam), the Court has applied 18 U. S. C. §2333(d)(2) to two closely related complaints, filed by the same counsel. Both cases came to this Court at the motion-to-dismiss stage, with no factual record. And the Court’s view of the facts—including its characterizations of the social-media platforms and algorithms at issue—properly rests on the particular allegations in those complaints. Other cases presenting different allegations and different records may lead to different conclusions.
The Court also draws on general principles of tort and criminal law to inform its understanding of §2333(d)(2). General principles are not, however, universal. The common-law propositions this Court identifies in interpreting §2333(d)(2) do not necessarily translate to other contexts.
While Thomas, in delivering the opinion of the Court, and Jackson, in her concurrence, both decline to speak to any aspect of Section 230, that declination deliberately leaves the door open to future consideration of the matter, presumably in a case better suited to the discussion.
As I have stated before, Free Speech is a moral imperative. This is the beginning of all my reasoning on this topic.
For this reason, my hope when these cases reached the Supreme Court was that the Court would correctly see that Section 230, far from being a defense of free speech and a bar to Internet censorship, is in fact nothing short of a license to proactively and intentionally censor any and all content, as it is an explicit indemnification for doing so.
Yet I am still hopeful that the logic of Thomas’ Twitter ruling, seconded by Jackson’s concurrence, points to an eventual rejection of Section 230, because Thomas’ basic assessment of the applicable law parallels the view I want to see prevail: where an act of speech is not itself a tortious or criminal act, nor an incitement to any form of lawless action, no liability can possibly attach. Thomas rejects the application of secondary liability to both Google and Twitter because in neither case did the litigants show that the noxious content in question was itself any manner of tortious or criminal act.
In neither case was a violation of the Brandenburg rule asserted. Similarly, in neither case was there an established connection between the noxious content and an identifiable terrorist act for which secondary liability could be imputed. In both logic chains, the required linkage between the noxious content and the relevant tortious or criminal act is simply not there.
As I have stated previously, the flaw of Section 230 is that it gives Big Tech legal immunity when it censors content. It provides no new incentive for social media platforms to refrain from catering to loud political minorities who want to see dissenting views silenced and eradicated.
For now, Big Tech’s immunity is left intact. Unfortunately, their censorship license will continue for at least a little while yet.
Fortunately, the likely foundations for a future successful challenge to Section 230 are left intact as well.
In the end, these cases are a victory for Big Tech, but not for Section 230.
Gonzalez v. Google LLC, 598 U.S. ___ (2023)
Twitter, Inc. v. Taamneh, 598 U.S. ___ (2023)
Gonzalez v. Google LLC, 598 U.S. ___ (2023)
Justice Against Sponsors of Terrorism Act, Pub. L. No. 114-222, 130 Stat. 852 (2016), Government Printing Office, https://www.govinfo.gov/content/pkg/STATUTE-130/pdf/STATUTE-130-Pg852.pdf.
Twitter, Inc. v. Taamneh, 598 U.S. ___ (2023)
Halberstam v. Welch, 705 F.2d 472 (D.C. Cir. 1983)
Twitter, Inc. v. Taamneh, 598 U.S. ___ (2023)
Ibid.
Ibid.
Ibid.
Ibid.
Brandenburg v. Ohio, 395 U.S. 444 (1969)
Twitter, Inc. v. Taamneh, 598 U.S. ___ (2023) (Jackson, J., concurring)