The Limits of Free Speech in Social Media
Brett M. Pinkus, Partner, Wick, Phillips, Gould & Martin, LLP
The public seems to have a fundamental misunderstanding about the true extent of “freedom of speech” under the First Amendment. Who can or cannot restrict free speech? What type of speech can be restricted? And how does this apply to speech restrictions on social media platforms, which have become so prevalent?
Lawsuits alleging free speech violations against social media companies are routinely dismissed. The primary grounds for these dismissals are that social media companies are not state actors and their platforms are not public forums, and therefore they are not subject to the free speech protections of the First Amendment. Consequently, those who post on social media platforms do not have a right to free speech on those platforms. This article will attempt to explain the relationship between social media and free speech so that we can understand why.
Who Can Restrict Free Speech - State v. Private Actors
The overarching principle of free speech under the First Amendment is that its reach is limited to protections against restrictions on speech made by the government.¹ The text of the First Amendment itself only prevents Congress (i.e., the U.S. Congress) from making laws that restrict the freedom of speech. This protection is extended to the states, and to local governments, through the State Action Doctrine and the Due Process Clause of the Fourteenth Amendment.² However, under the State Action Doctrine, First Amendment restrictions traditionally do not extend to private parties, such as individuals or private companies.³ In other words, a private person or private company (such as a social media company) cannot violate your constitutional free speech rights; only the government can do so. That is, unless the private party attempting to restrict speech qualifies for one of the three exceptions to the State Action Doctrine.
The first exception is when an action to restrict speech by a private party involves a function that is traditionally and exclusively reserved to the State, which is known as the Exclusive Public Function Doctrine.⁴ The Exclusive Public Function Doctrine is limited to extreme situations where a private party has stood in the shoes of the state. For example, when a private company has been given control of a previously public sidewalk or park, courts have found that the private company is exercising municipal powers traditionally performed exclusively by the state.⁵ Courts have repeatedly rejected efforts to characterize the provision of a news website or social media platform as a public function that was traditionally and exclusively performed by the government.⁶
The second and third exceptions, which are related to each other, are the entanglement and entwinement exceptions. The entanglement exception applies when the state has significantly involved, or entangled, itself with a private party’s action to restrict speech.⁷ This occurs when the “power, property, and prestige” of the government is behind the private action, and where there is evidence of the overt, significant assistance of state officials.⁸ The entwinement exception applies when an action of a private party can be treated as though it were an action of the government itself (i.e., overlapping identities).⁹ These exceptions are rarely used in free speech cases and apply in very limited situations, typically in cases involving the Equal Protection or Establishment Clauses, which are not relevant in most social media contexts.
Where Can Speech Be Restricted - Public v. Private Forums
When speech takes place in a public forum, that speech can qualify for protection under the First Amendment.¹⁰ This is known as the Public Forum Doctrine. While there is no constitutional right for a person to express their views in a private facility (such as a shopping center),¹¹ speech that takes place in a traditional or designated public forum for expressive activity (such as a sidewalk or park on government property) is protected, and only limited restrictions of speech are allowed.¹² A designated public forum can only be created when the government intentionally opens a nontraditional forum for public discourse.¹³ A private forum (such as a grocery store or comedy club), however, does not perform a public function by merely inviting public discourse onto its property.¹⁴
Social media platforms are often characterized as a digital public square. Yet courts have repeatedly rejected arguments that social media platforms are public forums subject to the First Amendment.¹⁵ The reasoning is that these networks are private, and merely hosting the speech of others does not convert a private platform into a public forum.¹⁶ Only in limited cases have social media sites been found by courts to qualify as a public forum. For example, in a recent case, an appellate court held that the official Twitter page operated by then-President Donald Trump was a designated public forum. As a result, government officials could not engage in viewpoint discrimination by blocking individuals from posting comments with critical views of the President and his policies.¹⁷ In contrast, a private person or organization’s social media page is not a public forum and is not protected by the First Amendment.
Social media platforms may also be analogized to newspapers when they exercise editorial control and judgment over the publishing of users’ posts. In this scenario, the Supreme Court has held that newspapers exercise the freedom of the press protected by the First Amendment and cannot be forced to print content they would not otherwise include.¹⁸ This is due to a newspaper’s ability to exercise editorial control and judgment, including making decisions on the size and content of the paper, along with treatment of public issues and public officials (whether such treatment is fair or unfair). This leads us next to examine what protections are afforded to social media companies for content posted by their users on their platforms.
Social Media’s Immunity for User Content - 47 U.S.C. § 230(c)
Section 230 of the Communications Decency Act (“CDA”), codified as 47 U.S.C. § 230, was enacted in response to a court decision ruling that an internet service provider, Prodigy, was a “publisher” of defamatory statements that a third party had posted on a bulletin board hosted and moderated by Prodigy, and that Prodigy could therefore be subject to a civil lawsuit for libel.¹⁹ Sec. 230(c)(1) remedies this by providing internet service providers with immunity from lawsuits that attempt to hold them liable for user content posted on their sites.²⁰ Social media companies, which are currently considered service providers under Sec. 230(c)(1), are broadly protected from responsibility for what users say while using their social media platforms.²¹
The next question that logically follows is whether a social media company can restrict or exercise editorial control over content on its platform. Sec. 230(c)(2) of the CDA answers this by precluding liability for decisions to remove or restrict access to content that the provider deems “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”²² Social media platforms therefore set their policies and Terms and Conditions to state that they can remove violent, obscene, or offensive content and can ban users who post or promote such content. For example, Facebook, Twitter, and YouTube have banned terrorist groups that post material promoting violence or violent extremism, and have also banned ISIS, Al Qaeda, and Hezbollah solely because of their status as U.S.-designated foreign terrorist organizations. As was recently seen following the 2020 Presidential election, Facebook, Twitter, Snapchat, YouTube (Google), Reddit, and Twitch (Amazon) also justified their suspensions of the accounts of President Trump and some of his supporters under Sec. 230(c)(2) for continuing to post misinformation, hate speech, and inflammatory content about the election.
What Are Permissible Restrictions on Speech
As discussed above, if a social media company chooses to remove content from its platform in accordance with its designated policies, that removal does not raise a First Amendment issue, and there is no civil liability as a result of Sec. 230 of the CDA. But what if precedent were reversed, and a social media platform were declared a state actor or a public forum such that the First Amendment applied to it? Or what if Sec. 230 were repealed, making social media companies liable for their users’ posts when they attempt to moderate content? If either were to happen, the type of speech being restricted would play a significant role in its permissibility.
Restrictions of speech in a public forum are permissible if they are appropriately limited in time, place, and manner.²³ Speech can be restricted under a less demanding standard when the restriction is made without regard to the content of the speech or the speaker’s point of view.²⁴ A content-neutral restriction on speech, for example, would be prohibiting all picketing within 150 feet of any school building while classes are in session, without regard to its message, whereas a content-based restriction would be one that allows picketing only if the school is involved in a labor dispute.²⁵ Other reasonable content-neutral regulations include regulating noise by limiting decibels, or the hours and place of public discussion.²⁶ It is unlikely that content-neutral restrictions could be implemented to effectively regulate violent, obscene, or offensive content on social media platforms, which leaves us with content-based restrictions that would be subjected to heightened scrutiny. Content-based restrictions in a public forum require a compelling government interest in the restriction and that the least restrictive means be employed to further that interest.²⁷
It is important to emphasize that the First Amendment “does not guarantee the right to communicate one’s views at all times and places or in any manner that may be desired.”²⁸ For that reason, if there is an alternative channel of communication for the desired speech, it may be a suitable alternative even if it is not a perfect substitute for the preferred forum that has been denied.²⁹ For example, if a user were blocked from posting on a social media platform, alternative channels to make the desired speech might include other social media platforms or different forms of media. Other possibilities might include remedial steps for regaining posting privileges, such as imposing temporary posting suspensions that can be lifted over time or requiring the poster to agree to specific posting restraints before regaining unrestricted access.
What Types of Content-Based Restrictions Are Permitted
It is also worthwhile to review the types of protected and unprotected content-based speech to understand the extent of the speech protected by the First Amendment, particularly in view of the recent unrest reflected on social media following the 2020 election. Content-based restrictions on speech have been permitted within a few traditionally recognized categories of expression.³⁰
Misinformation, Defamation, Fraud, Perjury, Government Officials
Misinformation is defined as false or inaccurate information. False statements of fact about a public concern or public officials are protected from censorship under the First Amendment, unless the statement is made with knowledge of, or reckless disregard for, its falsity, or with intent to harm.³¹ It is not safe to assume that false statements can be made on social media platforms with impunity. There can be civil liability imposed for defamatory statements, which are knowingly false statements of fact published without authorization that damage others’ reputations (e.g., libel if written and slander if spoken), and for fraud, which is a false statement of fact made with the intent to cause the hearer to alter their position.³² At the time of this writing, statements pushing claims of election fraud following the 2020 election made by various public figures and news commentators on television and social media are being pursued for defamation by electronic voting machine manufacturers Dominion Voting Systems and Smartmatic.
Hate Speech and Speech that Incites Imminent Lawless Action
The First Amendment generally protects even hate or racist speech from government censorship. However, speech advocating the use of force is unprotected when it incites or is likely to incite imminent lawless action.³³ Likewise, speech that is considered an incitement to riot, which creates a clear and present danger of causing a disturbance of the peace, is also not protected by the First Amendment.³⁴ “Fighting words,” which “by their very utterance inflict injury or tend to incite an immediate breach of the peace,” are unprotected and may be punished or prohibited.³⁵
Harassment and True Threats of Violence
Harassment refers to unwanted behavior that makes someone feel degraded, humiliated, or offended. Harassing someone merely to irritate or torment them is protected from censorship by the First Amendment. However, harassment that goes so far as to present a “true threat of violence” falls outside First Amendment protection and is banned by all social media platforms. True threats of violence directed at a person or group of persons that have “the intent of placing the target at risk of bodily harm or death” are unprotected, regardless of whether the speaker actually intends to carry out the threat.³⁶ Intimidation “is a type of true threat,” and would likewise be unprotected by the First Amendment.³⁷
Advertisements
Advertising, which is a type of commercial speech, receives only limited protection under the First Amendment.³⁸ If an advertisement is shown to be misleading or unlawful, a restriction on that speech is permissible.³⁹ A website or social media platform, much like a newspaper, cannot be forced to print advertisements in contravention of its right of editorial control.⁴⁰
Conclusion
Current legal precedent conclusively establishes that social media users do not have a right to free speech on private social media platforms. Social media platforms are allowed to remove offending content when done in accordance with their stated policies as permitted by Sec. 230 of the CDA, and that removal does not raise a justiciable First Amendment issue or a real risk of civil liability. Users, on the other hand, put themselves at risk of being banned for posting violent, obscene, or offensive content on social media, and may even expose themselves to civil liability for making false, misleading, or violence-inciting statements.
Sources:
¹ Matal v. Tam, 137 S. Ct. 1744, 1757 (2017).
² U.S. Const. amend. I (“Congress shall make no law . . . abridging the freedom of speech”); Gitlow v. New York, 268 U.S. 652, 666 (1925) (applying the freedom of speech from the First Amendment to the states by virtue of the Due Process Clause of the Fourteenth Amendment); Hudgens v. NLRB, 424 U.S. 507, 513 (1976) (“the constitutional guarantee of free speech is a guarantee only against abridgment by government, federal or state”).
³ Civil Rights Cases, 109 U.S. 3, 11 (1883) (“[i]t is state action of a particular character that is prohibited. Individual invasion of individual rights is not the subject-matter of the [Fourteenth] amendment.”); see also Shelley v. Kraemer, 334 U.S. 1, 13 (1948) (“[the Constitution] erects no shield against merely private conduct, however discriminatory or wrongful”).
⁴ Jackson v. Metro. Edison Co., 419 U.S. 345, 352 (1974).
⁵ Marsh v. Alabama, 326 U.S. 501, 505–09 (1946) (a private entity operating a company town is a state actor and must abide by the First Amendment); but see Lloyd Corp. v. Tanner, 407 U.S. 551, 569 (1972) (confining Marsh’s holding to the unique and rare context of “company town[s]” and other situations where the private actor “performs the full spectrum of municipal powers”); Terry v. Adams, 345 U.S. 461 (1953) (holding public elections is an exclusive public function).
⁶ See, e.g., Prager Univ. v. Google LLC, No. 17-CV-06064-LHK, 2018 U.S. Dist. LEXIS 51000, at *26 (N.D. Cal. Mar. 26, 2018), aff’d, 951 F.3d 991 (9th Cir. 2020); Quigley v. Yelp, Inc., 2017 U.S. Dist. LEXIS 103771, at *4 (“The dissemination of news and fostering of debate cannot be said to have been traditionally the exclusive prerogative of the government.”); see Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1930 (2019) (“merely hosting speech by others is not a traditional, exclusive public function”); Howard v. Am. Online Inc., 208 F.3d 741, 754 (9th Cir. 2000) (providing internet service, web portal, and emails was not “an instrument or agent of the government”).
⁷ Moose Lodge No. 107 v. Irvis, 407 U.S. 163, 173 (1972).
⁸ Burton v. Wilmington Parking Authority, 365 U.S. 715 (1961).
⁹ Brentwood Acad. v. Tenn. Secondary Sch. Athletic Ass’n, 531 U.S. 288, 298, 303 (2001).
¹⁰ Perry Education Association v. Perry Local Educators’ Association, 460 U.S. 37 (1983) (There are three categories of government property for purposes of access for expressive activities: (1) traditional, or quintessential, public forums (such as a sidewalk or park on government property), in which content-based restrictions on speech are highly suspect; (2) limited, or designated, public forums, in which reasonable time, place, and manner regulations are permissible and content-based prohibitions must be narrowly drawn to effectuate a compelling state interest; and (3) nonpublic forums, in which the government can reserve the forum for its intended purposes with reasonable regulations on speech that do not discriminate based on opposition to the speaker’s viewpoint.).
¹¹ PruneYard Shopping Ctr. v. Robins, 447 U.S. 74, 81 (1980); Prager, 951 F.3d at 998.
¹² Perry, 460 U.S. at 37.
¹³ Cornelius v. NAACP Legal Def. & Educ. Fund, Inc., 473 U.S. 788, 802 (1985).
¹⁴ Prager Univ., 951 F.3d at 998.
¹⁵ See, e.g., Prager, 2018 U.S. Dist. LEXIS 51000, at *25–26 (“Defendants do not appear to be at all like, for example, a private corporation . . . that has been given control over a previously public sidewalk or park . . . .”); Estavillo v. Sony Comput. Entm’t Am. Inc., No. C-09-03007 RMW, 2009 U.S. Dist. LEXIS 86821, at *3–4 (N.D. Cal. Sept. 22, 2009); Nyabwa v. Facebook, Civil Action No. 2:17-CV-24, 2018 U.S. Dist. LEXIS 13981, at *2 (S.D. Tex. Jan. 26, 2018) (dismissing lawsuit filed by a private individual against Facebook by explaining that “the First Amendment governs only governmental limitations on speech”); Freedom Watch, Inc. v. Google, Inc., 368 F. Supp. 3d 30, 40 (D.D.C. 2019) (“Facebook and Twitter … are private businesses that do not become ‘state actors’ based solely on the provision of their social media networks to the public.”), aff’d, 816 F. App’x 497 (D.C. Cir. 2020); see also Halleck, 139 S. Ct. at 1930 (“merely hosting speech by others … does not alone transform private entities into state actors subject to First Amendment constraints.”).
¹⁶ See cases cited supra note 15.
¹⁷ Knight First Amendment Institute v. Trump, 928 F.3d 226 (2d Cir. 2019), petition for cert. pending.
¹⁸ Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241, 258 (1974) (Challenging a law giving political candidates the right to reply to criticism, the newspapers were found to exercise editorial control and to be more than a passive receptacle or conduit for news, comment, and advertising. The law violated the function of editors by forcing them to print content that they would not otherwise include.).
¹⁹ Stratton Oakmont, Inc. v. Prodigy Services Co., No. 31063/94, 1995 N.Y. Misc. LEXIS 229, at *1 (N.Y. Sup. Ct. May 26, 1995); compare Cubby, Inc. v. CompuServe Inc., 776 F. Supp. 135 (S.D.N.Y. 1991) (CompuServe was found not liable for defamatory content posted by users because it allowed all content to go unmoderated and lacked editorial involvement, and, as such, it was considered a distributor rather than a publisher.).
²⁰ 47 U.S.C. § 230(c)(1); Zeran v. America Online, Inc., 129 F.3d 327 (4th Cir. 1997), cert. denied, 524 U.S. 937 (1998); see Barnes v. Yahoo!, Inc., 570 F.3d 1096 (9th Cir. 2009) (an internet service provider cannot be held responsible under Sec. 230(c)(1) for failure to remove objectionable content posted to its website by a third party); but see Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008) (Roommates.com was considered an information content provider, rather than a service provider, because it created or augmented content, and was ineligible for protection under Sec. 230).
²¹ See Doe v. MySpace, Inc., 528 F.3d 413, 420 (5th Cir. 2008).
²² 47 U.S.C. § 230(c)(2); see Murphy v. Twitter, Inc., 60 Cal. App. 5th 12, 274 Cal. Rptr. 3d 360, 375 (2021).
²³ Perry Educ. Ass’n v. Perry Local Educators’ Ass’n, 460 U.S. 37, 45 (1983).
²⁴ Clark v. Community for Creative Non-Violence, 468 U.S. 288, 295 (1984).
²⁵ Mastrovincenzo v. City of New York, 435 F.3d 78, 101 (2d Cir. 2006).
²⁶ Saia v. New York, 334 U.S. 558, 562 (1948).
²⁷ Sable Commc’ns of Cal., Inc. v. FCC, 492 U.S. 115, 126 (1989).
²⁸ Heffron v. Int’l Soc. for Krishna Consciousness, Inc., 452 U.S. 640, 647 (1981).
²⁹ 47 U.S.C. § 230(c)(1).
³⁰ United States v. Alvarez, 567 U.S. 709 (2012).
³¹ Id.; New York Times v. Sullivan, 376 U.S. 254 (1964).
³² Id.
³³ Brandenburg v. Ohio, 395 U.S. 444, 447 (1969).
³⁴ Feiner v. New York, 340 U.S. 315, 320 (1951).
³⁵ Chaplinsky v. New Hampshire, 315 U.S. 568, 571–72 (1942).
³⁶ Virginia v. Black, 538 U.S. 343, 359–60 (2003).
³⁷ Id.
³⁸ Cent. Hudson Gas & Elec. Corp. v. Pub. Serv. Comm’n, 447 U.S. 557, 561 (1980).
³⁹ Id.; Langdon v. Google, Inc., 474 F. Supp. 2d 622, 630 (D. Del. 2007).
⁴⁰ Langdon, 474 F. Supp. 2d at 630; Zhang v. Baidu.com, Inc., 10 F. Supp. 3d 433, 443 (S.D.N.Y. 2014) (decision to block certain search results was not commercial speech because it related to matters of public concern).
Supreme Court tackles social media and free speech
Nina Totenberg
In a major First Amendment case, the Supreme Court heard arguments on the federal government's ability to combat what it sees as false, misleading or dangerous information online.
Free Speech on Social Media: Filtering Methods, Rights, Future Prospects
Yes, our right to free speech absolutely exists online. But there is serious debate about how to regulate our freedom of speech in the online sphere, particularly on social media.
by LibertiesEU
The rise of social media has implications for our fundamental rights, perhaps none more so than our freedom of speech. There is no doubt that our right to free speech extends online. But there is considerable and complex debate on how to regulate the online sphere, particularly social media. How the regulations are constructed, where the lines are drawn, will have huge implications for our freedom of speech on social media.
What does free speech mean?
Free speech means you have the freedom to express yourself in any way that does not take away the rights of other people. You can (and should) feel free to criticize the work your elected officials are doing. You should not feel free to hold band practice late into the night, because that could take away your neighbors’ right to privacy. And when they complain about the noise, you can’t encourage people to destroy their property or worse. But up to that point, you’re free to express yourself.
This is why free speech is so central to democracy. Democracy means that everyone in society makes collective decisions about the laws they live under and who administers them. The free exchange of ideas, opinions and information provides us with the knowledge we need to make those decisions. That’s also why free speech and the organs that support it, such as free media and civil society, are often the first things that disappear in autocracies.
How free is speech on social media and on the internet in general?
The extent to which someone can freely express themselves online varies from country to country. In the EU, laws protect our freedom to express ourselves online. In some cases, though, the ease of online communication has allowed expression to step far beyond the bounds of free speech – consider online bullying or threats, or the sharing of extremist content or child pornography. These forms of “expression” are not protected speech.
But in other areas, drawing the line is more complicated. The EU has been dealing with how to protect the rights of copyright owners against the right of people to share legal content. Should such an enormous and difficult task be farmed out to AI? Surely some of it must be, but how this is done could have profound implications for free speech.
Liberties has been adamant that compromising free speech, even putting it at potential risk, is a no-go. And that’s how it should be – if we are to err, let it be on the side of limiting too little of our fundamental right to free speech rather than giving too much of it away. That’s why we’ve advocated for users’ free speech during the EU’s work on new copyright law. And why we warned European decision-makers that their plan to regulate online terrorist content might unduly restrict free speech.
We are also mindful of the role online platforms have in determining free speech. Although we may use their services to share our thoughts, there is an obvious danger in making them arbiters of what is and is not free speech. Such decisions need to be made by independent judges, and certainly not by companies with a vested interest in making sure the content they allow and promote is good business for them.
What is important to know about free speech rights on social media?
The rise of social media has given new importance to protecting free speech. People are often able to stay anonymous when they say things – not necessarily a bad thing, especially in places where criticizing the government can put you or your family in danger. Or when you want to seek help for a private medical issue. But social media allows people to use anonymity to bully, harass, intimidate or stalk people.
Social media also gives everyone a platform. Again, this is not an inherently bad thing. It not only allows anyone to share their ideas, but connects us faster and cheaper, allowing us to exchange ideas and create things. But it also gives people the ability to easily spread disinformation that can cause harm both to individuals and society as a whole.
How do social media companies filter speech?
Social media companies can filter speech, and thus limit free speech, by using both humans and artificial intelligence to review content that might not be free to share. They can lawfully remove what you share, or block you from sharing content, if your content is not protected speech – for instance, if you use social media to incite violence against someone. And, of course, social media companies have terms of service that set out many more grounds for sanction. (Although their terms of service can themselves breach the law by limiting lawful content.)
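To make the two-layer review concrete, here is a minimal sketch, in Python, of how a filtering pipeline combining simple rules, a machine-learning classifier, and human escalation could be structured. Every name, threshold, and the stubbed-out classifier below is a hypothetical illustration, not any platform’s actual system.

from dataclasses import dataclass

# Hypothetical thresholds and blocklist, invented for illustration only.
BLOCKLIST = {"banned phrase one", "banned phrase two"}  # stand-ins for prohibited phrases
AUTO_REMOVE_THRESHOLD = 0.95   # model is near-certain the post violates policy
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain cases are escalated to a human

@dataclass
class Decision:
    action: str  # "remove", "human_review", or "allow"
    reason: str

def classifier_score(text: str) -> float:
    """Placeholder for an ML model estimating the probability that a post
    violates policy. A real system would call a trained classifier here."""
    return 0.0  # stub: treats every post as benign

def moderate(text: str) -> Decision:
    # Stage 1: cheap rule-based filter (e.g., known slurs or banned links).
    if any(phrase in text.lower() for phrase in BLOCKLIST):
        return Decision("remove", "matched rule-based blocklist")
    # Stage 2: the ML classifier scores whatever the rules did not catch.
    score = classifier_score(text)
    if score >= AUTO_REMOVE_THRESHOLD:
        return Decision("remove", f"classifier score {score:.2f}")
    if score >= HUMAN_REVIEW_THRESHOLD:
        return Decision("human_review", f"uncertain score {score:.2f}")
    return Decision("allow", "no rule or model flagged the post")

print(moderate("an ordinary post").action)  # -> "allow"

The two-stage design mirrors the division of labor described above: rules and models handle the bulk of content cheaply, while ambiguous cases are routed to human moderators.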
Perhaps the most drastic form of social media filtering speech is by blocking some people from using their service at all. This has the effect of limiting the voices that can be heard on a platform. Some would argue that’s a good thing, and this is certainly the case when people have spread hate speech or incited violence. These issues were front and center when a certain former president of the United States was blocked from Twitter and Facebook following the attack on the U.S. Capitol.
What does the future hold for free speech on social media?
It may be a short and disappointing answer, but the truth is that we don’t know what the future holds. There seems to be a consensus that we shouldn’t allow illegal content to be shared on the internet. But it’s easier said than done. Companies, politicians and rights groups all have disagreements about how exactly to do this, and which considerations should be given more weight than others.
Regulating online speech is complicated. But if we leave it up to social media companies and their algorithms, our free speech, and thus our democracy, will suffer. They should use a fraction of their profits to create a complaints system where you can always request human review of a decision to filter content. And, if necessary, anyone should be able to go to a judge to have their case heard.
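A complaints system of the kind proposed here could, at its simplest, be a queue of contested decisions awaiting a human reviewer. The sketch below is purely illustrative; the class and field names are invented, not drawn from any real platform.

from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class Appeal:
    post_id: str
    user_id: str
    original_action: str           # e.g., "remove" or "account_suspension"
    user_statement: str            # why the user believes the decision was wrong
    outcome: Optional[str] = None  # filled in by the human reviewer

class AppealQueue:
    """First-in, first-out queue of filtering decisions awaiting human review."""

    def __init__(self) -> None:
        self._pending: deque = deque()

    def file(self, appeal: Appeal) -> None:
        self._pending.append(appeal)

    def review_next(self, reviewer_decision: str) -> Appeal:
        # A human (never the algorithm) records the final outcome; an
        # unresolved appeal could then be escalated to a court.
        appeal = self._pending.popleft()
        appeal.outcome = reviewer_decision
        return appeal

queue = AppealQueue()
queue.file(Appeal("post-123", "user-9", "remove", "This was satire, not incitement."))
print(queue.review_next("reinstate").outcome)  # -> "reinstate"

The essential property is that every automated filtering decision has a path to a human, and beyond that to a judge, rather than ending with the algorithm.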
Social media platforms should not just be held accountable for leaving illegal content online. They should also bear responsibility for taking down legal content. This incentivizes them to create a review system that appropriately considers the free speech of the user. And to ensure that this remains the case, the tech industry must be properly regulated. Proper regulation ensures that these companies can continue to grow and prosper without our rights being restricted.
But the truth is, at the moment we don’t really know how their algorithms work. We don’t really know how much material they remove or block, or for what reasons, or how they curate our news feed. To make sure they’re doing their best to protect free speech, all this information has to be available to researchers, authorities and independent watchdogs, like Liberties, who can check on them.
Keeping Free Speech Free on Social Media
State statutes to block censorship are a counterweight to government-aided monopoly power.
Multiple briefs have been filed in two critical cases involving social media platforms (SMPs), cases in which the courts have taken radically different positions on the appropriate level of First Amendment protection for those platforms. In Florida ( NetChoice v. Att’y Gen., Fla . (2022)), the Eleventh Circuit offered a ringing defense of the freedom of speech on the grounds that, as private enterprises, these SMPs enjoy a sovereign right to “curate” and “moderate” the information they publish on their sites. In sharp contrast, in Texas, the Fifth Circuit in NetChoice v. Paxton (2022) insisted that these same actions amounted to “censorship” that the state must regulate to ensure that rival views can receive full public exposure.
The origins of this deep clash were two similar statutes targeted at large SMPs—the only ones capable of exerting market power. Thus, Section 7 of Texas’s H. B. 20 reads as follows:
A social media platform may not censor a user, a user’s expression, or a user’s ability to receive the expression of another person based on: the viewpoint of the user or another person; the viewpoint represented in the user’s expression or another person’s expression[.]
To set the stage, it is accepted on both sides of the controversy that the SMPs always have the power to remove from their platforms smut, obscenity, and calls for violence, just as the common carriers—most notably railroads and telephone companies—have long had the right to deny service for cause to persons who engage in these activities. They may so act, even though, as carriers, they are charged with the basic duty to take all comers for fair, reasonable, and nondiscriminatory rates (FRAND) that strip them of the ability to make a monopoly profit but allow them to charge fees sufficient to cover their costs and to make a reasonable profit. In these SMP cases, the issue of profits drops out because the relevant services are supplied for free, so the real question is whether these platforms can engage in viewpoint discrimination.
The defenders of NetChoice, a lobbying organization for large tech and social media companies, insist that both the Texas and Florida laws violate “the fundamental rule of . . . the First Amendment, that a speaker has the autonomy to choose the content of his own message.” I wrote an amicus brief for the Center for Renewing America that takes the opposite position, urging that these regulations be upheld as a permissible and reasonable response to the monopoly power that these SMPs wield in today’s platform economy. That power makes the earlier cases relied on by NetChoice inapposite to the new challenges of today.
The proper view of a test for content neutrality depends critically on the shape of the market. If the government or a private monopolist sought to suppress the view of all other parties, there would be a serious challenge to the fundamental premise of spirited debate over matters of great social importance which lies at the heart of the First Amendment. The famous statement of Justice Oliver Wendell Holmes in Abrams v. United States (1919) that “the best test of truth is the power of the thought to get itself accepted in the competition of the market” summarizes the basic view. But that promise cannot be realized if the market has only one dominant player, or if the multiple players in the market can organize themselves by a variety of devices so that they present a united front on platform content to the rest of the world.
The cases argued by NetChoice date from 1974, 1986, and 1995—before the modern platform era—and thus they do not address this issue. The first key case is Miami Herald Publishing Co. v. Tornillo (1974), which held that “a state statute granting a political candidate a right to equal space to reply to criticism and attacks on his record by a newspaper violates the guarantees of a free press.” The logic here was that free and inexpensive entry into the market was available from multiple other sources, so that there was no reason to guarantee space on a particular platform to political candidates, or indeed to anyone else. In contrast, the Miami Herald court attributed to Associated Press v. United States (1945) the holding that the antitrust laws applied with full vigor to the press, given the risk whenever a dominant firm “hammers away on one ideological or political line using its monopoly position not to educate people, not to promote debate,” which is exactly the situation here.
Next, Pacific Gas & Electric Co. v. Public Utilities Commission of California (1986) raised the prosaic question of whether the PUC could force PG&E to carry messages in the “extra space” in its billing envelopes. The Supreme Court found a violation of the First Amendment because of compelled speech, and found that, under Board of Public Utility Commissioners v. New York Telephone Co. (1926), the public did not own the assets of PG&E. Ratepayers were not part owners of the firm, but mere customers who “pay for service, not for the property used to render it.” No risk of monopoly stemmed from PG&E’s use of its own property.
Last, in Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc. (1995), the South Boston Allied War Veterans Council was not required to allow the GLIB group to march under its own banner in its St. Patrick’s Day Parade, because the parade organizers were under no duty “to include among the marchers a group imparting a message the organizers d[id] not wish to convey.” The control of one’s own parade is no exercise of monopoly power, especially since the excluded groups have an equal right to run their own parade on public property.
The issue of monopoly power surges to the fore in three separate ways in the NetChoice cases. The first is through the exertion of “ network effects ,” which effectively mean that new consumers tend to flock to an established network where they can maximize their number of contacts. Accordingly, dominant players like Amazon, X (Twitter), Google, and Apple have a powerful first-mover advantage that helps cement this dominant position—until some other new entrant displaces them to enjoy the benefit of these network effects itself. Smaller companies with divergent views are thus consigned to the fringe, where they do not provide a credible offer of a reply that reaches the same audience as the dominant carrier.
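Network effects are often given a rough arithmetic form through Metcalfe’s law, a common (and contested) heuristic that values a network by its number of possible pairwise connections. The short sketch below uses invented user counts purely to illustrate why a first mover’s lead compounds; the figures are not drawn from the cases or briefs discussed here.

def network_value(users: int) -> int:
    # Metcalfe-style proxy: the number of possible pairwise connections.
    return users * (users - 1) // 2

incumbent_users, entrant_users = 1_000_000, 10_000  # hypothetical user counts
ratio = network_value(incumbent_users) / network_value(entrant_users)
print(f"user lead: {incumbent_users / entrant_users:.0f}x")  # 100x
print(f"value lead: {ratio:,.0f}x")                          # roughly 10,000x

On this stylized measure, a 100x lead in users translates into a roughly 10,000x lead in potential connections, which is why each new user gains far more by joining the incumbent than a fringe entrant.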
The situation is even worse because a minor network’s ability to operate at all depends critically on the back-end connections it receives from key operators: Amazon Web Services , Apple Web Services , and Google Cloud Web Hosting . These large services can cut off their back-end support, putting a smaller platform out of business. The conservative social media platform Parler was gaining customers until the events of January 6, 2021, when it was cut off from all three enterprises without notice, on the ground that it facilitated violence. The three web services acted in parallel and may well have consulted with each other on key decisions. The new environment bears no resemblance to the lone voice of the Miami Herald. No one knows whether they applied consistent policies to other groups, or why they imposed a maximum sanction on this occasion. But it was hardly the case that all parties on Parler engaged in violence, so these decisions look like the naked efforts of dominant platforms to revoke public access from their political opponents. That risk is so great that no new entrant has sought to occupy Parler’s market niche.
That inference is, moreover, virtually inescapable in light of the extensive revelations in Missouri v. Biden (2022). This action was brought by, among others, two prominent COVID-19 researchers, Jay Bhattacharya of Stanford and Martin Kulldorff, on leave from Harvard University, whose access to social media platforms was blocked in large measure because of the interventions of federal officials who used a combination of coercion and threats to prevent them from spreading “misinformation” to the public by contesting federal policies about quarantines, masks, and vaccines. The officials threatened this adverse government action even though the criticized policies had the possibility of causing immense harm as well as providing great benefits. It is wholly proper for the government to speak its own mind in defense of such policies, but utterly indefensible for it to violate the Holmes maxim in Abrams by suppressing positions that oppose it, thereby showing the dangers of government monopoly.
People should be allowed to make up their own minds, which is why statutes like Texas’s H. B. 20 expand discussion by defending the right to reply against today’s social media platforms, whose abuse of monopoly power is wrongly aided and abetted by the federal government.
The Evolving Free-Speech Battle Between Social Media and the Government
Earlier this month, a federal judge in Louisiana issued a ruling that restricted various government agencies from communicating with social-media companies. The plaintiffs, which include the attorneys general of Missouri and Louisiana, argued that the federal government was coercing social-media companies into limiting speech on topics such as vaccine skepticism. The judge wrote, in a preliminary injunction, “If the allegations made by plaintiffs are true, the present case arguably involves the most massive attack against free speech in United States’ history. The plaintiffs are likely to succeed on the merits in establishing that the government has used its power to silence the opposition.” The injunction prevented agencies such as the Department of Health and Human Services and the F.B.I. from communicating with Facebook, Twitter, or other platforms about removing or censoring content. (The Biden Administration appealed the injunction and, on Friday, the Fifth Circuit paused it. A three-judge panel will soon decide whether it will be reinstated as the case proceeds.) Critics have expressed concern that such orders will limit the ability of the government to fight disinformation.
To better understand the issues at stake, I recently spoke by phone with Genevieve Lakier, a professor of law at the University of Chicago Law School who focusses on issues of social media and free speech. (We spoke before Friday’s pause.) During our conversation, which has been edited for length and clarity, we discussed why the ruling was such a radical departure from the way that courts generally handle these issues, how to apply concepts like free speech to government actors, and why some of the communication between the government and social-media companies was problematic.
In a very basic sense, what does this decision actually do?
Well, in practical terms, it prevents a huge swath of the executive branch of the federal government from essentially talking to social-media platforms about what they consider to be bad or harmful speech on the platforms.
There’s an injunction and then there’s an order, and both are important. The order is the justification for the injunction, but the injunction itself is what actually has effects on the world. And the injunction is incredibly broad. It says all of these defendants—and we’re talking about the President, the Surgeon General, the White House press secretary, the State Department, the F.B.I.—may not urge, encourage, pressure, or induce in any manner the companies to do something different than what they might otherwise do about harmful speech. This is incredibly broad language. It suggests, and I think is likely to be interpreted to mean, that, basically, if you’re a member of one of the agencies or if you’re named in this injunction, you just cannot speak to the platforms about harmful speech on the platform until, or unless, the injunction ends.
But one of the puzzling things about the injunction is that there are these very significant carve-outs. For example, my favorite is that the injunction says, basically, “On the other hand, you may communicate with the platforms about threats to public safety or security of the United States.” Now, of course, the defendants in the lawsuit would say, “That’s all we’ve been doing. When we talk to you, when we talk to the platforms about election misinformation or health misinformation, we are alerting them to threats to the safety and security of the United States.”
So, read one way, the injunction chills an enormous amount of speech. Read another way, it doesn’t really change anything at all. But, of course, when you get an injunction like this from a federal court, it’s better to be safe than sorry. I imagine that all of the agencies and government officials listed in the injunction are going to think, We’d better shut up.
And the reason that specific people, jobs, and agencies are listed in the injunction is because the plaintiffs say that these entities were communicating with social-media companies, correct?
Correct. And communicating in these coercive or harmful, unconstitutional ways. The presumption of the injunction is that if they’ve been doing it in the past, they’re probably going to keep doing it in the future. And let’s stop continuing violations of the First Amendment.
As someone who’s not an expert on this issue, I find the idea that you could tell the White House press secretary that he or she cannot get up at the White House podium and say that Twitter should take down COVID misinformation— Does this injunction raise issues on two fronts: freedom of speech and separation of powers?
Technically, when the press secretary is operating as the press secretary, she’s not a First Amendment-rights holder. The First Amendment limits the government, constrains the government, but protects private people. And so when she’s a private citizen, she has all her ordinary-citizen rights. Government officials technically don’t have First Amendment rights.
That said, it’s absolutely true that, when thinking about the scope of the First Amendment, courts take very seriously the important democratic and expressive interests in government speech. And so government speakers don’t have First Amendment rights, but they have a lot of interests that courts consider. A First Amendment advocate would say that this injunction constrains and has negative effects on really important government speech interests.
More colloquially, I would just say the irony of this injunction is that in the name of freedom of speech it is chilling a hell of a lot of speech. That is how complicated these issues are. Government officials using their bully pulpit can have really powerful speech-oppressive effects. They can chill a lot of important speech. But one of the problems with the way the district court approaches the analysis is that it doesn’t seem to be taking into account the interest on the other side. Just as we think that the government can go too far, we also think it’s really important for the government to be able to speak.
And what about separation-of-powers issues? Or is that not relevant here?
I think the way that the First Amendment is interpreted in this area is an attempt to protect some separation of powers. Government actors may not have First Amendment rights, but they’re doing important business, and it’s important to give them a lot of freedom to do that business, including to do things like express opinions about what private citizens are doing or not doing. Courts generally recognize that government actors, legislators, and executive-branch officials are doing important business. The courts do not want to second-guess everything that they’re doing.
So what exactly does this order say was illegal?
The lawsuit was very ambitious. It claimed that government officials in a variety of positions violated the First Amendment by inducing or encouraging or incentivizing the platforms to take down protected speech. And by coercing or threatening them into taking down protected speech. And by collaborating with them to take down protected speech. These are the three prongs that you can use in a First Amendment case to show that the decision to take down speech that looks like it’s directly from a private actor is actually the responsibility of the government. The plaintiffs claimed all three. What’s interesting about that district-court order is that it agreed with all three. It says, Yeah, there was encouragement, there was coercion, and there was joint action or collaboration.
And what sort of examples are they providing? What would be an example of the meat of what the plaintiffs argued, and what the judge found to violate the First Amendment?
A huge range of activities—some that I find troubling and some that don’t seem to be troubling. Public statements by members of the White House or the executive branch expressing dissatisfaction with what the platforms are doing. For instance, President Biden’s famous statement that the platforms are killing people. Or the Surgeon General’s warning that there is a health crisis caused by misinformation, and his urging the platforms to do something about it. That’s one bucket.
There is another bucket in which the platforms were going to agencies like the C.D.C. to ask them for information about the COVID pandemic and the vaccine—what’s true and what’s false, or what’s good and what’s bad information—and then using that to inform their content-moderation rules.
Very different and much more troubling, I think, are these e-mails that they found in discovery between White House officials and the platforms in which the officials more or less demand that the platforms take down speech. There is one e-mail from someone in the White House who asked Twitter to remove a parody account that was linked to President Biden’s granddaughter, and said that he “cannot stress the degree to which this needs to be resolved immediately”—and within forty-five minutes, Twitter takes it down. That’s a very different thing than President Biden saying, “Hey, platforms, you’re doing a bad job with COVID misinformation.”
The second bucket seems full of the normal give-and-take you’d expect between the government and private actors in a democratic society, right?
Yeah. Threats and government coercion on private platforms seem the most troubling from a First Amendment perspective. And traditionally that is the kind of behavior that these cases have been most worried about.
This is not the first case to make claims of this kind. This is actually one of dozens of cases that have been filed in federal court over the last few years alleging that the Biden Administration or members of the government had put pressure on or encouraged platforms to take down vaccine-skeptical speech and speech about election misinformation. What is unusual about this case is the way that the district court responded to these claims. Before this case, courts had, for the most part, thrown these cases out. I think this was largely because they thought that there was insufficient evidence of coercion, and coercion is what we’re mostly worried about. They have found that this kind of behavior only violates the First Amendment if there is some kind of explicit threat, such as “If you don’t do X, we will do Y,” or if the government actors have been directly involved in the decision to take down the speech.
In this case, the court rejects that and has a much broader test, where it says, basically, that government officials violate the First Amendment if they significantly encourage the platforms to act. And that may mean just putting pressure on them through rhetoric or through e-mails on multiple occasions—there’s a campaign of pressure, and that’s enough to violate the First Amendment. I cannot stress enough how significant a departure that is from the way courts have looked at the issue before.
So, in this case, you’re saying that the underlying behavior may constitute something bad that the Biden Administration did, that voters should know about it and judge them on it, but that it doesn’t rise to the level of being a First Amendment issue?
Yes. I think that this opinion goes too far. It’s insufficiently attentive to the interests on the other side. But I think the prior cases have been too stingy. They’ve been too unwilling to find a problem—they don’t want to get involved because of this concern with separation of powers.
The platforms are incredibly powerful speech regulators. We have largely handed over control of the digital public sphere to these private companies. I think there is this recognition that when the government criticizes the platforms or puts pressure on the platforms to change their policies, that’s some form of political or democratic oversight, a way to promote public welfare. And those kinds of democratic and public-welfare concerns are pretty significant. The courts have wanted to give the government a lot of room to move.
But you think that, in the past, the courts have been too willing to give the government space? How could they develop a better approach?
Yeah. So, for example, the e-mails that are identified in this complaint—I think that’s the kind of pressure that is inappropriate for government actors in a democracy to be employing against private-speech platforms. I’m not at all convinced that, if this had come up in a different court, those would have been found to be a violation of the First Amendment. But there need to be some rules of the road.
On the one hand, I was suggesting that there are important democratic interests in not having too broad a rule. But, on the other hand, I think part of what’s going on here—part of what the facts that we see in this complaint are revealing—is that, in the past, we’ve thought about this kind of government pressure on private platforms, which is sometimes called jawboning, as episodic. There’s a local sheriff or there’s an agency head who doesn’t like a particular policy, and they put pressure on the television station, or the local bookseller, to do something about it. Today, what we’re seeing is that there’s just this pervasive, increasingly bureaucratized communication between the government and the platforms. The digital public sphere has fewer gatekeepers; journalists are no longer playing the role of determining the news that is fit to print or not fit to print. And so there’s a lot of stuff, for good or for ill, that is circulating in public. You can understand why government officials and expert agencies want to be playing a more significant role in informing, influencing, and persuading the platforms to operate one way or the other. But it does raise the possibility of abuse, and I’m worried about that.
That was a fascinating response, but you didn’t totally answer the question. How should a court step in here without going too far?
The traditional approach that courts have taken, until now, has been to say that there’s only going to be a First Amendment violation if the coercion, encouragement, or collaboration is so strong that, essentially, the platform had no choice but to act. It had no alternatives; there was no private discretion. Because then we can say, Oh, yes, it was the government actor, not the platform, that ultimately was responsible for the decision.
I think that that is too restrictive a standard. Platforms are vulnerable to pressure from the government that’s a lot less severe. They’re in the business of making money by disseminating a lot of speech. They don’t particularly care about any particular tweet or post or speech act. And their economic incentives will often mean that they want to curry favor with the government and with advertisers by being able to continue to circulate a lot of speech. If that means that they have to break some eggs, that they have to suppress particular kinds of posts or tweets, they will do that. It’s economically rational for them to do so.
The challenge for courts is to develop rules of the road for how government officials can interact with platforms. It has to be the case that some forms of communication are protected, constitutionally O.K., and even democratically good. I want expert agencies such as the C.D.C. to be able to communicate to the platforms. And I want that kind of expert information to be constitutionally unproblematic to deliver. On the other hand, I don’t think that White House officials should be writing to platforms and saying, “Hey, take this down immediately.”
I never thought about threatening companies as a free-speech issue that courts would get involved with. Let me give you an example. If you had told me four years ago that the White House press secretary had got up and said, “I have a message from President Trump. If CNN airs one more criticism of me, I am going to try and block its next merger,” I would’ve imagined that there would be a lot of outrage about that. What I could not have imagined was a judge issuing an injunction saying that people who worked for President Trump were not allowed to pass on the President’s message from the White House podium. It would be an issue for voters to decide. Or, I suppose, CNN, during the merger decision, could raise the issue and say, “See, we didn’t get fair treatment because of what President Trump said,” and courts could take that into account. But the idea of blocking the White House press secretary from saying anything seems inconceivable to me.
I’ll say two things in response. One is that there is a history of this kind of First Amendment litigation, but it’s usually about private speech. We might think that public speech has a different status because there is more political accountability. I don’t know. I find this question really tricky, because I think that the easiest cases from a First Amendment perspective, and the easiest reason for courts to get involved, is when the communication is secret, because there isn’t political accountability.
You mentioned the White House press secretary saying something in public. O.K., that’s one thing. But what about if she says it in private? We might think, Well, then the platforms are going to complain. But often regulated parties do not want to say that they have been coerced by the government into doing something against their interests, or that they were threatened. There’s often a conspiracy of silence.
In those cases, it doesn’t seem to me as if there’s democratic accountability. But, even when it is public, we’ve seen over the past year that government officials are writing letters to the platforms: public letters criticizing them, asking for information, badgering them, pestering them about their content-moderation policies. And we might think, Sure, people know that that’s happening. Maybe the government officials will face political accountability if it’s no good. But we might worry that, even then, if the behavior is sufficiently serious, if it’s repeated, it might give the officials too much power to shape the content-moderation policies of the platforms. From a First Amendment perspective, I don’t know why that’s off the table.
Now, from a practical perspective, you’re absolutely right. Courts have not wanted to get involved. But that’s really worrying. I think this desire to just let the political branches work it out has meant that, certainly with the social-media platforms, it’s been like the Wild West. There are no rules of the road. We have no idea what’s O.K. or not for someone in the White House to e-mail to a platform. One of the benefits of the order and the injunction is that it’s opening up this debate about what’s O.K. and what’s not. It might be the case that the way to establish rules of the road will not be through First Amendment litigation. Maybe we need Congress to step in and write the rules, or there needs to be some kind of agency self-regulation. But I think it’s all going to have to ultimately be viewed through a First Amendment lens. This order and injunction go way too far, but I think the case is at least useful in starting a debate. Because up until now we’ve been stuck in this arena where there are important free-speech values that are at stake and no one is really doing much to protect them. ♦
Supreme Court to Decide How the First Amendment Applies to Social Media
Challenges to laws in Florida and Texas meant to protect conservative viewpoints are likely to yield a major constitutional ruling on tech platforms’ free speech rights.
By Adam Liptak, reporting from Washington
The most important First Amendment cases of the internet era, to be heard by the Supreme Court on Monday, may turn on a single question: Do platforms like Facebook, YouTube, TikTok and X most closely resemble newspapers or shopping centers or phone companies?
The two cases arrive at the court garbed in politics, as they concern laws in Florida and Texas aimed at protecting conservative speech by forbidding leading social media sites from removing posts based on the views they express.
But the outsize question the cases present transcends ideology. It is whether tech platforms have free speech rights to make editorial judgments. Picking the apt analogy from the court’s precedents could decide the matter, but none of the available ones is a perfect fit.
If the platforms are like newspapers, they may publish what they want without government interference. If they are like private shopping centers open to the public, they may be required to let visitors say what they like. And if they are like phone companies, they must transmit everyone’s speech.
“It is not at all obvious how our existing precedents, which predate the age of the internet, should apply to large social media companies,” Justice Samuel A. Alito Jr. wrote in a 2022 dissent when one of the cases briefly reached the Supreme Court.
Supporters of the state laws say they foster free speech, giving the public access to all points of view. Opponents say the laws trample on the platforms’ own First Amendment rights and would turn them into cesspools of filth, hate and lies. One contrarian brief, from liberal professors, urged the justices to uphold the key provision of the Texas law despite the harm they said it would cause.
What is clear is that the court’s decision, expected by June, could transform the internet.
“It is difficult to overstate the importance of these cases for free speech online,” said Scott Wilkens, a lawyer with the Knight First Amendment Institute at Columbia University, which filed a friend-of-the-court brief in support of neither side in the two cases, saying each had staked out an extreme position.
The cases concern laws enacted in 2021 in Florida and Texas aimed at prohibiting major platforms from removing posts expressing conservative views. They differed in their details but were both animated by frustration on the right, notably the decisions of some platforms to bar President Donald J. Trump after the Jan. 6, 2021, attack on the Capitol.
In a statement issued when he signed the Florida bill, Gov. Ron DeSantis, a Republican, said the law was meant to promote right-leaning viewpoints. “If Big Tech censors enforce rules inconsistently, to discriminate in favor of the dominant Silicon Valley ideology, they will now be held accountable,” he said.
Gov. Greg Abbott of Texas, also a Republican, said much the same thing when he signed his state’s bill. “It is now law,” he said, “that conservative viewpoints in Texas cannot be banned on social media.”
The two trade groups that challenged the laws — NetChoice and the Computer & Communications Industry Association — said the platforms had the same First Amendment rights as conventional news outlets.
“Just as Florida may not tell The New York Times what opinion pieces to publish or Fox News what interviews to air,” the groups told the justices, “it may not tell Facebook and YouTube what content to disseminate. When it comes to disseminating speech, decisions about what messages to include and exclude are for private parties — not the government — to make.”
The states took the opposite position. The Texas law, Ken Paxton, the state’s attorney general, wrote in a brief, “just enables voluntary communication on the world’s largest telecommunications platforms between speakers who want to speak and listeners who want to listen, treating the platforms like telegraph or telephone companies.”
The two laws met different fates in the lower courts.
In the Texas case, a divided three-judge panel of the U.S. Court of Appeals for the Fifth Circuit reversed a lower court’s order blocking the state’s law.
“We reject the platforms’ attempt to extract a freewheeling censorship right from the Constitution’s free speech guarantee,” Judge Andrew S. Oldham wrote for the majority. “The platforms are not newspapers. Their censorship is not speech.”
In the Florida case, the 11th Circuit largely upheld a preliminary injunction blocking the state’s law.
“Social media platforms exercise editorial judgment that is inherently expressive,” Judge Kevin C. Newsom wrote for the panel. “When platforms choose to remove users or posts, deprioritize content in viewers’ feeds or search results, or sanction breaches of their community standards, they engage in First Amendment-protected activity.”
Forcing social media companies to transmit essentially all messages, their representatives told the justices, “would compel platforms to disseminate all sorts of objectionable viewpoints — such as Russia’s propaganda claiming that its invasion of Ukraine is justified, ISIS propaganda claiming that extremism is warranted, neo-Nazi or K.K.K. screeds denying or supporting the Holocaust, and encouraging children to engage in risky or unhealthy behavior like eating disorders.”
Supporting briefs mostly divided along the predictable lines. But there was one notable exception. To the surprise of many, some prominent liberal professors filed a brief urging the justices to uphold a key provision of the Texas law.
“There are serious, legitimate public policy concerns with the law at issue in this case,” wrote the professors, including Lawrence Lessig of Harvard, Tim Wu of Columbia and Zephyr Teachout of Fordham. “They could lead to many forms of amplified hateful speech and harmful content.”
But they added that “bad laws can make bad precedent” and urged the justices to reject the platforms’ plea to be treated as news outlets.
“To put a fine point on it: Facebook, Twitter, Instagram and TikTok are not newspapers,” the professors wrote. “They are not space-limited publications dependent on editorial discretion in choosing what topics or issues to highlight. Rather, they are platforms for widespread public expression and discourse. They are their own beast, but they are far closer to a public shopping center or a railroad than to The Manchester Union Leader.”
In an interview, Professor Teachout linked the Texas case to the Citizens United decision, which struck down a campaign finance law regulating corporate spending on First Amendment grounds.
“This case threatens to be another expansion of corporate speech rights,” she said. “It may end up in fact being a Trojan horse, because the sponsors of the legislation are so distasteful. We should be really wary of expanding corporate speech rights just because we don’t like particular laws.”
Other professors, including Richard L. Hasen of the University of California, Los Angeles, warned the justices in a brief supporting the challengers that prohibiting the platforms from deleting political posts could have grave consequences.
“Florida’s and Texas’ social media laws, if allowed to stand,” the brief said, “would thwart the ability of platforms to moderate social media posts that risk undermining U.S. democracy and fomenting violence.”
The justices will consult two key precedents in trying to determine where to draw the constitutional line in the cases to be argued Monday, Moody v. NetChoice, No. 22-277, and NetChoice v. Paxton, No. 22-555.
One of them, Pruneyard Shopping Center v. Robins from 1980, concerned a sprawling private shopping center in Campbell, Calif., whose 21 acres included 65 shops, 10 restaurants and a movie theater. It was open to the public but did not permit, as Justice William H. Rehnquist put it in his opinion for the court, “any publicly expressive activity, including the circulation of petitions, that is not directly related to its commercial purposes.”
That policy was challenged by high school students who opposed a U.N. resolution against Zionism and were stopped from handing out pamphlets and seeking signatures for a petition.
Justice Rehnquist, who would be elevated to chief justice in 1986, wrote that state constitutional provisions requiring the shopping center to allow people to engage in expressive activities on its property did not violate the center’s First Amendment rights.
In the second case, Miami Herald v. Tornillo, the Supreme Court in 1974 struck down a Florida law that would have allowed politicians a “right to reply” to newspaper articles critical of them.
The case was brought by Pat L. Tornillo, who was unhappy about colorful editorials in The Miami Herald opposing his candidacy for the Florida House of Representatives. The newspaper said Mr. Tornillo, a labor union official, had engaged in “shakedown statesmanship.”
Chief Justice Warren E. Burger, writing for a unanimous court in striking down the law, said the nation was in the middle of “vast changes.”
“In the past half century,” he wrote, “a communications revolution has seen the introduction of radio and television into our lives, the promise of a global community through the use of communications satellites and the specter of a ‘wired’ nation.”
But Chief Justice Burger concluded that “the vast accumulations of unreviewable power in the modern media empire” did not permit the government to usurp the role of editors in deciding what ought to be published.
“A responsible press is an undoubtedly desirable goal,” he wrote, “but press responsibility is not mandated by the Constitution, and like many other virtues it cannot be legislated.”