Private Platforms, Public Speech: The Aftermath of Moody v. NetChoice

Misinformation, privacy concerns, and freedom of speech are among the public's growing concerns about the reach of social media and, in particular, its future governance and regulation. In 2021, NetChoice, LLC, a trade association representing major tech companies such as Amazon, Google, Meta, X (formerly Twitter), and TikTok, filed suit challenging two state laws, Florida's SB 7072 and Texas's HB 20, which prohibited social media platforms from “deplatforming” political candidates, restricted moderation based on viewpoint, and imposed disclosure and reporting requirements. The consolidated challenges reached the Supreme Court as Moody v. NetChoice, named for Florida Attorney General Ashley Moody, who defended the Florida law.

These laws emerged in the wake of a divisive 2020 U.S. presidential election, during which platform moderation of political misinformation, the restriction and eventual banning of President Donald Trump from major platforms, and the perceived suppression of primarily conservative viewpoints became central to political debate over the legality of editorial discretion. Supporters frame such regulation as an important step toward protecting free speech, preventing viewpoint discrimination, and preserving democratic discourse in an ideological landscape that seems ever more malleable in the hands of powerful media corporations that frequently skirt regulation and accountability.

However, the Court's eventual decision clarified that the First Amendment protects entities engaged in expressive activity, including the curation of others' speech, from being forced to carry messages they would rather exclude. In short, the government cannot justify interfering with a private enterprise's editorial choices merely by claiming an interest in enriching the marketplace of ideas. Strong precedent agrees. Miami Herald Publishing Co. v. Tornillo established that the government cannot compel a newspaper to publish a political candidate's reply to the paper's “attacks” on his character or record. Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston (GLIB) made clear that private parade organizers cannot be forced to include a group whose message they do not wish to convey. As private enterprises, social media platforms enjoy the same rights recognized in these earlier cases involving private expression. Because the lower courts had not analyzed whether these principles apply uniformly across the full range of each law's applications, however, the case was remanded, leaving the legal battle over social media governance far from settled.

Now, state governments must grapple with the very real challenges of transparency and content moderation on social media platforms, as well as the legal limits on attempting to control and improve them.

States have begun to implement various initiatives with varying degrees of success. Programs in New York and California impose additional requirements on content removal, incentivizing online platforms to adopt supplemental strategies for moderating dangerous, illegal, false, and harmful information and levying heavy penalties when such content is not taken down. Measures in Georgia, Ohio, and California require platforms to submit regular reports detailing how they moderate content that violates their terms of service, revealing internal practices and increasing transparency. Kentucky, Oklahoma, and Iowa penalize businesses that violate new content regulations by restricting access to tax incentives and public contracts. Tennessee and Colorado have established regulatory bodies to oversee various activities of social media companies. Connecticut and Virginia have allocated legislative resources and appointed commissions to study and investigate digital service practices. To date, the state-level policy landscape for social media content moderation has been messy, ineffective, and inefficient, highlighting the turbulence of the legal space and the need for doctrinal clarification.

Additionally, AI and algorithmic amplification, central components of most social media platforms' recommendation and moderation systems, remain a largely unexplored legal frontier. The two tools work together: an algorithm boosts high-engagement content, ranging from the entertaining to the emotionally charged and polarizing, while AI moderation filters out misinformation and harmful material. Automated tools now handle most moderation, speeding up enforcement and expanding its reach, but raising serious concerns about biased, indiscriminate censorship. A 2022 study by Fiesler et al. found that Black users faced higher rates of account suspension for vaguely defined violations of community guidelines. YouTube, the premier video-sharing platform, removed thousands of videos documenting violence in Syria after an automated system designed to identify extremist content flagged them as inappropriate. Though the videos were later restored, such disproportionate censorship of marginalized groups by biased automated systems raises a further set of questions: are algorithmic recommendations themselves expressions of opinion, and can states regulate algorithms and AI systems without infringing on editorial discretion?

As these platforms continue to evolve at an alarming rate, the Supreme Court's decisions, even those as recent as two years ago, deserve reexamination. The Court clearly protects editorial discretion, yet these platforms differ markedly from traditional, editorially responsible entities: they rely on algorithmic curation, operate at massive scale and speed, follow an engagement-driven paradigm, occupy an ambiguous role as public forums, and exhibit network-like characteristics. As platforms shift from passive bulletin boards to active editors and curators of user-generated content, does this change in role complicate traditional First Amendment theory? Two years ago, the courts decided it did not. Now, as social media platforms struggle to balance content moderation with free expression in ways traditional media never has, the debate deserves to be revisited. The challenge has evolved from the binary question of whether constitutional protection applies to the question of how that protection will shape the developing architecture of digital regulation.

Isaiah Sohn is a sophomore at Brown University studying Applied Mathematics-Economics. He is a writer for the Brown Undergraduate Law Review and can be contacted at isaiah_sohn@brown.edu.

Ashley Park is a sophomore at Brown University studying Cognitive Neuroscience and Political Science. She is an editor for the Brown Undergraduate Law Review and can be contacted at ashley_h_park@brown.edu.